

# How to Pass Snowflake SnowPro Core (COF-C03) in 2026: Complete Study Guide

The Snowflake SnowPro Core certification is the foundation of the entire Snowflake certification path. Whether you're a data engineer, analytics engineer, BI developer, or data analyst working with Snowflake daily, this certification validates that you understand how Snowflake works — not just how to write SQL on top of it.

COF-C03 is the current version of the exam (replacing the older COF-C02). If you're starting your Snowflake certification journey in 2026, this is where to begin.

---

## Exam Format at a Glance

| Detail | Value |
|---|---|
| Exam code | COF-C03 |
| Price | $375 USD |
| Number of questions | ~100 (scored) |
| Time limit | 115 minutes |
| Passing score | 75% |
| Prerequisite | None |
| Delivery | Pearson VUE (online or test center) |
| Question types | Multiple choice, multiple select |

The 75% passing score means you can miss about 25 questions — but this exam is harder than it looks. Many candidates who use Snowflake daily are surprised by the depth of questions on editions, data loading internals, and governance features.

---

## Domain Breakdown

| Domain | Name | Approximate Weight |
|---|---|---|
| 1 | Snowflake AI Data Cloud Features & Architecture | 20% |
| 2 | Account Access and Security | 15% |
| 3 | Performance Concepts | 15% |
| 4 | Data Loading and Unloading | 15% |
| 5 | Data Transformations | 20% |
| 6 | Data Protection and Data Sharing | 15% |

Domains 1 and 5 carry the most weight together. Spend proportional time on each, but don't neglect Domain 2 (security) — it's where many experienced Snowflake users are weakest because RBAC specifics rarely come up in day-to-day work.

---

## Domain 1: Snowflake AI Data Cloud Features & Architecture (20%)

This domain covers Snowflake's three-layer architecture and what makes it fundamentally different from traditional databases.
**Three-Layer Architecture**:

| Layer | Name | Key Characteristics |
|---|---|---|
| Bottom | Cloud Storage | Compressed columnar micro-partitions in S3/Azure Blob/GCS. Persistent, separate billing. |
| Middle | Query Processing (Compute) | Virtual warehouses — MPP clusters that read from storage. Independent sizing and scaling. |
| Top | Cloud Services | Always-on: authentication, metadata, query optimization, security. Free if usage stays under 10% of daily compute. |

**Virtual Warehouses**: Independent compute clusters. Multiple warehouses can query the same data simultaneously without resource contention. Each warehouse has its own local disk cache.

**Micro-partitions**: Immutable files holding 50–500MB of uncompressed data, stored compressed. Each micro-partition stores metadata (min/max per column), enabling partition pruning without full scans.

**Snowflake editions**: Standard, Enterprise, Business Critical, Virtual Private Snowflake. Know which features each unlocks.

💡 **Exam Tip**: Snowflake's architecture means compute and storage scale independently. You can query 100TB of data with an X-Small warehouse — it will be slow, but it works. There's no requirement to match warehouse size to data size (though larger warehouses are faster).

---

## Domain 2: Account Access and Security (15%)

**RBAC in Snowflake**: Role-Based Access Control is the foundation of all access management. Every operation requires that the active role has the appropriate privilege on the target object.

**System Roles Hierarchy** (from highest to lowest):

```
ORGADMIN      → Manages the Snowflake organization (multi-account)
ACCOUNTADMIN  → Full control over the entire account
SYSADMIN      → Manages objects (databases, warehouses, schemas, tables)
SECURITYADMIN → Manages roles, users, and grants
USERADMIN     → Creates and manages users (subset of SECURITYADMIN)
PUBLIC        → Default role; every user inherits it
```

**Key principle**: ACCOUNTADMIN should NOT be used for day-to-day operations. Create custom roles and use the principle of least privilege.
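In practice, least privilege means building a custom role under SYSADMIN and granting only what the role needs. A minimal sketch — the role, warehouse, database, and user names are all illustrative:

```sql
-- Create a custom role (USERADMIN can create roles)
USE ROLE USERADMIN;
CREATE ROLE analyst_role;

-- Keep the role hierarchy intact by granting it up to SYSADMIN
USE ROLE SECURITYADMIN;
GRANT ROLE analyst_role TO ROLE SYSADMIN;

-- Grant only the privileges the role actually needs
USE ROLE SYSADMIN;
GRANT USAGE ON WAREHOUSE analytics_wh TO ROLE analyst_role;
GRANT USAGE ON DATABASE analytics_db TO ROLE analyst_role;
GRANT USAGE ON SCHEMA analytics_db.public TO ROLE analyst_role;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics_db.public TO ROLE analyst_role;

-- Finally, assign the role to a user
USE ROLE SECURITYADMIN;
GRANT ROLE analyst_role TO USER jane_doe;
```

Granting every custom role back up to SYSADMIN is a Snowflake best practice: it ensures SYSADMIN retains visibility over all objects those roles create.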
**Row Access Policies**: Attach to a table to filter rows based on the querying user's context:

```sql
CREATE ROW ACCESS POLICY sales_region_policy
  AS (region_col VARCHAR) RETURNS BOOLEAN ->
    CURRENT_ROLE() = 'ADMIN'
    OR EXISTS (
      SELECT 1 FROM region_access_map
      WHERE username = CURRENT_USER()
        AND region = region_col
    );

ALTER TABLE sales ADD ROW ACCESS POLICY sales_region_policy ON (region);
```

**Dynamic Data Masking**: Conditionally mask column values based on role:

```sql
CREATE MASKING POLICY mask_ssn AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('HR_ADMIN', 'PAYROLL') THEN val
    ELSE '***-**-' || RIGHT(val, 4)
  END;

ALTER TABLE employees MODIFY COLUMN ssn SET MASKING POLICY mask_ssn;
```

---

## Domain 3: Performance Concepts (15%)

**Result Cache**: Query results cached for 24 hours in the Cloud Services layer. Reused if the query text is identical and the underlying data hasn't changed. No compute cost for cache hits.

**Warehouse Cache**: Data files read by a warehouse are cached on local SSD. Subsequent queries that access the same data blocks read from local disk instead of cloud storage. This cache is cleared when the warehouse suspends.

**Metadata Cache**: Table statistics and partition metadata stored in Cloud Services. Enables queries like `COUNT(*)` on large tables to return instantly without warehouse compute.

**Micro-Partition Pruning**: When your WHERE clause includes a column that's naturally sorted in the data (e.g., ORDER_DATE for time-series data), Snowflake skips micro-partitions whose min/max range excludes your filter value.

**Clustering Keys**: For large tables where natural ordering doesn't provide good pruning, you can define clustering keys to reorganize the data layout over time.

💡 **Exam Tip**: `SELECT COUNT(*) FROM large_table` runs instantly in Snowflake because it uses the metadata cache — no virtual warehouse required. This often surprises candidates who expect a full scan.
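To make the clustering idea concrete, here is a short sketch of defining a clustering key and checking how well the table is clustered (the table and column names are illustrative):

```sql
-- Define a clustering key on a large, frequently filtered column
ALTER TABLE events CLUSTER BY (event_date);

-- Inspect clustering health for that key (returns a JSON report
-- including average depth and partition overlap)
SELECT SYSTEM$CLUSTERING_INFORMATION('events', '(event_date)');

-- Queries filtering on the clustering key benefit from pruning:
-- only micro-partitions whose min/max range overlaps January are scanned
SELECT COUNT(*), SUM(amount)
FROM events
WHERE event_date BETWEEN '2026-01-01' AND '2026-01-31';
```

Note that clustering maintenance consumes credits in the background, so it only pays off on large tables with selective, repeated filter patterns.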
---

## Domain 4: Data Loading and Unloading (15%)

**Stages**: Named storage locations from which Snowflake can load data.

| Stage Type | Description | When to Use |
|---|---|---|
| User Stage (`@~`) | Personal stage for each user | Quick personal uploads |
| Table Stage (`@%tablename`) | Each table has one; deleted with table | Simple, single-table loads |
| Named Internal Stage | You create and manage it | Production pipelines |
| Named External Stage | Points to S3/Azure/GCS | Loading from external cloud storage |

**COPY INTO** is the primary bulk loading command:

```sql
-- Load CSV from internal stage
COPY INTO orders (id, customer, amount, ts)
FROM @internal_stage/orders/
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1 FIELD_DELIMITER = ',')
ON_ERROR = 'SKIP_FILE';

-- Unload to external stage
COPY INTO @my_external_stage/output/
FROM (SELECT * FROM orders WHERE order_date >= '2026-01-01')
FILE_FORMAT = (TYPE = 'PARQUET')
HEADER = TRUE;
```

**Snowpipe**: Continuous micro-batch loading. Triggered by event notifications (S3 SQS, Azure Event Grid, GCS Pub/Sub). Each Snowpipe has an associated COPY INTO statement.

**File format options you must know**: `SKIP_HEADER`, `FIELD_DELIMITER`, `FIELD_OPTIONALLY_ENCLOSED_BY`, `NULL_IF`, `EMPTY_FIELD_AS_NULL`, `STRIP_OUTER_ARRAY` (for JSON).

---

## Domain 5: Data Transformations (20%)

**CTAS (Create Table As Select)**:

```sql
CREATE TABLE orders_summary AS
SELECT customer_id,
       SUM(amount) AS total,
       COUNT(*) AS order_count
FROM orders
GROUP BY customer_id;
```

**UDFs (User-Defined Functions)**: Scalar functions written in SQL, Python, Java, or JavaScript. Run per-row within a query:

```sql
CREATE FUNCTION clean_phone(phone STRING)
RETURNS STRING
LANGUAGE SQL
AS
$$
  REGEXP_REPLACE(phone, '[^0-9]', '')
$$;
```

**UDTFs (User-Defined Table Functions)**: Return a set of rows per input row. Useful for parsing semi-structured data or generating multiple output rows from one input.

**Stored Procedures**: Imperative logic in Snowflake.
Support control flow (loops, conditionals), DDL execution, and multi-statement transactions. Can be written in JavaScript, Python, Scala, Java, or SQL scripting.

```sql
CREATE PROCEDURE refresh_summary()
RETURNS STRING
LANGUAGE SQL
AS
$$
DECLARE
  result STRING;
BEGIN
  TRUNCATE TABLE orders_summary;
  INSERT INTO orders_summary
    SELECT customer_id, SUM(amount) FROM orders GROUP BY 1;
  result := 'Done: ' || CURRENT_TIMESTAMP()::STRING;
  RETURN result;
END;
$$;
```

**Streams**: CDC objects that track changes on a table. Used with Tasks to implement incremental ELT pipelines.

**Tasks**: Scheduled or dependency-based execution of SQL or stored procedure calls.

---

## Domain 6: Data Protection and Data Sharing (15%)

**Time Travel**: Query, clone, or restore data from a past point in time.

```sql
-- Query data from 2 hours ago
SELECT * FROM orders AT (OFFSET => -7200);

-- Clone table to state before an accidental DELETE
CREATE TABLE orders_backup CLONE orders
  BEFORE (STATEMENT => '019a8b...');

-- Restore a dropped table
UNDROP TABLE orders;
```

**Fail-safe**: 7-day recovery window after Time Travel expires, managed by Snowflake. Users cannot access Fail-safe data directly; only Snowflake can recover it.

**Secure Data Sharing**: Share live Snowflake data with other Snowflake accounts without copying:

```sql
-- Create a share
CREATE SHARE analytics_share;
GRANT USAGE ON DATABASE analytics_db TO SHARE analytics_share;
GRANT USAGE ON SCHEMA analytics_db.public TO SHARE analytics_share;
GRANT SELECT ON TABLE analytics_db.public.orders TO SHARE analytics_share;

-- Add a consumer account
ALTER SHARE analytics_share ADD ACCOUNTS = aws_us_east_1.consumer_account;
```

**Snowflake Marketplace**: Publish or subscribe to third-party data products. Providers list data; consumers mount the share in their own account. Zero-copy, always live.

**Reader Accounts**: Snowflake accounts created and managed by you for external consumers who don't have their own Snowflake account. You pay for their compute.
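A reader account is provisioned directly in SQL. A hedged sketch, assuming the account name and admin credentials below are placeholders:

```sql
-- Requires ACCOUNTADMIN (or a role granted CREATE ACCOUNT)
USE ROLE ACCOUNTADMIN;

CREATE MANAGED ACCOUNT partner_reader
  ADMIN_NAME = 'reader_admin',
  ADMIN_PASSWORD = 'Choose-A-Strong-Password-1',
  TYPE = READER;

-- The statement returns the new account's locator; you then add it
-- as a consumer of an existing share (share name is illustrative):
-- ALTER SHARE analytics_share ADD ACCOUNTS = <returned_account_locator>;
```

Because you own the reader account, its compute bills to you, so pair it with a resource monitor to cap consumer spend.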
---

## 5-Week Study Plan

| Week | Focus |
|---|---|
| 1 | Snowflake architecture: three layers, micro-partitions, virtual warehouses, editions, cloud support. Read the Snowflake Architecture guide. |
| 2 | Domains 2 + 6: RBAC roles, user/role management, masking policies, row access policies, Time Travel, Fail-safe, data sharing. |
| 3 | Domains 4 + 3: Stages, COPY INTO, Snowpipe, file formats. Performance: caching layers, pruning, clustering basics. |
| 4 | Domain 5: UDFs, stored procedures, Streams, Tasks. Build at least one Streams+Tasks pipeline in a trial account. |
| 5 | Full practice exams. Review wrong answers. Re-read Snowflake docs for weak areas. Final review of system roles and edition features. |

**Hands-on time**: A free Snowflake trial account gives you 30 days and $400 in credits. Spend at least 5–10 hours doing hands-on work — load real files, create stages, build a masking policy, run COPY INTO with various options.

---

## Most Tested Concepts (Frequency Guide)

Based on community reports and exam structure, these topics appear most frequently:

1. Virtual warehouse credit billing (per second, 60-second minimum)
2. Time Travel by edition (Standard = 1 day, Enterprise = up to 90 days)
3. COPY INTO ON_ERROR options
4. RBAC system role hierarchy and which role can do what
5. Stage types (user, table, named internal, named external)
6. Stream types (standard vs append-only)
7. Result cache invalidation conditions
8. Secure Data Sharing mechanics
9. Micro-partition pruning and when it applies
10. Snowpipe vs COPY INTO (when to use each)

---

## Final Tips

1. **Don't skip the edition comparison table.** Many questions hinge on "which edition supports this feature."
2. **Know your stage types.** `@~` (user), `@%tablename` (table), `@stagename` (named). Each has different visibility and persistence rules.
3. **Understand the difference between a Snowflake Data Share and the Snowflake Marketplace.** A direct share goes account-to-account.
The Marketplace is a public/commercial listing.
4. **ACCOUNTADMIN is not the same as SYSADMIN.** SYSADMIN manages objects. ACCOUNTADMIN manages the account. Resource monitors can only be created by ACCOUNTADMIN, while SYSADMIN can create warehouses, databases, and tables by default.

**[Start the SnowPro Core Practice Exam — 340 questions covering all 6 domains →](/exams/snowflake-snowpro-core-cof-c03-340-questions)**
