
Google Cloud Professional Cloud Architect Exam Guide 2026

The Google Cloud Professional Cloud Architect (PCA) is Google's premier certification for designing and managing cloud solutions on GCP. This 2026 guide covers the exam format, the four case studies, key service topics, and a 6-week study plan to help you earn this highly respected credential.

The Google Cloud Professional Cloud Architect certification is one of the most respected cloud credentials in the industry, consistently ranking among the highest-paying IT certifications globally. It validates your ability to design, develop, and manage robust, secure, scalable, and highly available solutions on Google Cloud. Unlike the AWS and Azure architect exams, the GCP PCA builds a portion of its questions around published case studies, requiring you to apply architectural thinking to realistic business scenarios. That makes it both more challenging and more meaningful as a credential. This guide covers everything you need to know to prepare effectively in 2026.

Exam Format and Scoring

Before you can design solutions for Mountkirk Games, you need to know what you are walking into on exam day:

  • Number of Questions: ~60
  • Time Limit: 2 hours (120 minutes)
  • Passing Score: ~70% (Google does not publish the exact threshold)
  • Exam Cost: $200 USD
  • Delivery: Kryterion testing center or online proctored
  • Question Types: multiple choice and multiple select (some based on case studies)
  • Validity: 2 years
  • Recommended Experience: 3+ years of industry experience, including 1+ year designing on GCP

Google uses a scaled scoring system and does not award partial credit on multiple-select questions. You receive a simple Pass or Fail result immediately after submitting, with a score report emailed afterward. The exam is notably harder than entry-level cloud exams: it requires synthesizing knowledge across multiple services simultaneously to answer scenario questions, not just recalling definitions. Candidates who score well in practice but fail the real exam often attribute it to underestimating the depth of case study analysis required.

💡 Pro Tip: Google publishes all four case studies on the official exam guide page before the exam. Read them carefully in advance and build architecture diagrams for each one during your study period. On exam day, you will have access to the case studies via a tab in the exam interface — but candidates who pre-memorized the key requirements answer case study questions significantly faster.

The Four Case Studies Explained

The case studies are published by Google and available to study before exam day. Each one presents a fictional company with specific business and technical requirements. A subset of exam questions will reference one of these companies and ask you to select the best GCP architecture to meet their stated requirements. Google revises the case study roster periodically, so confirm the current list on the official exam guide page before investing study time. Here is what you need to know about each:

Mountkirk Games

Mountkirk Games is a mobile gaming company migrating an existing multiplayer game backend to GCP. Key requirements include a globally distributed, low-latency multiplayer experience, a managed game server that can scale to handle variable player loads, real-time analytics on player behavior, and seamless global leaderboards. Core GCP services to associate with Mountkirk: GKE (containerized game backend), Cloud Spanner (globally distributed relational database for leaderboards), Cloud Bigtable (time-series game event data), Pub/Sub (real-time event streaming), and Managed Instance Groups (auto-scaling game servers). The company wants to minimize operational overhead, which typically signals managed services over self-managed.

Dress4Win

Dress4Win is a fashion and lifestyle social application company that needs to migrate its on-premises infrastructure to GCP. The application has a MySQL-heavy backend, a monolithic architecture that the company wants to modernize incrementally, and strict compliance requirements around personal data. Core associations: Cloud SQL for MySQL (lift-and-shift database migration), Cloud Storage (object storage for user-uploaded images), Identity Platform (user authentication), VPC Service Controls (data perimeter for sensitive user data), and App Engine or Cloud Run for gradually modernized application components. The migration is incremental — expect questions about hybrid connectivity (Cloud Interconnect, VPN) and gradual refactoring.

TerramEarth

TerramEarth manufactures heavy equipment (tractors, construction machinery) and wants to implement IoT-based predictive maintenance by collecting telemetry from hundreds of thousands of machines globally. Many machines operate in areas with intermittent connectivity. Key requirements: ingest large volumes of IoT telemetry, handle offline devices with batch uploads, analyze machine data for maintenance predictions, and provide dashboards for fleet managers. Core associations: Pub/Sub (telemetry ingestion and event streaming; note that Google retired Cloud IoT Core, its former MQTT device-management service, in August 2023, so newer questions point to direct Pub/Sub ingestion or partner IoT platforms), Dataflow (streaming and batch data transformation), BigQuery (analytics data warehouse), Vertex AI (predictive maintenance ML models), and Looker or Looker Studio (formerly Data Studio) for dashboards. This case study frequently generates questions about batch vs streaming processing decisions.

Helicopter Racing League

The Helicopter Racing League (HRL) streams live helicopter racing events globally and wants to improve video streaming quality, reduce latency for viewers, and use AI to enhance live commentary and highlight generation. Core associations: Media CDN alongside Cloud CDN (low-latency global video delivery), Transcoder API (video encoding and packaging), Vertex AI with the Video Intelligence API (automated highlight detection), Cloud Run (event-driven serverless processing), and Anthos (hybrid deployment for on-site race infrastructure). Expect questions about trade-offs between live streaming quality and cost, and about when to use pretrained AI APIs vs custom-trained models.

Key GCP Topics to Master

Beyond the case studies, the PCA exam tests broad and deep knowledge of the GCP service catalog. The following service areas generate the most questions:

Google Kubernetes Engine (GKE)

GKE is central to modern GCP architectures. Know the difference between Standard and Autopilot modes, node auto-provisioning, Workload Identity (the secure way to grant GKE workloads access to GCP APIs; never use service account key files), cluster networking modes (VPC-native with alias IP), and Binary Authorization for supply chain security.
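
The Workload Identity binding is a frequent hands-on gap. The sketch below, using hypothetical project, namespace, and account names, shows the two commands that link a Kubernetes service account to a Google service account; the `run` wrapper only echoes each command so the script reads as a dry run.

```shell
# Dry-run wrapper: prints each command instead of executing it.
# Replace the body with "$@" to run against a real project.
run() { echo "+ $*"; }

# Hypothetical names, not from the exam guide.
PROJECT="mountkirk-prod"
GSA="game-backend@${PROJECT}.iam.gserviceaccount.com"
NAMESPACE="game"
KSA="backend-ksa"

# 1. Allow the Kubernetes SA to impersonate the Google SA.
run gcloud iam service-accounts add-iam-policy-binding "$GSA" \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${PROJECT}.svc.id.goog[${NAMESPACE}/${KSA}]"

# 2. Annotate the Kubernetes SA so GKE maps it to the Google SA.
run kubectl annotate serviceaccount "$KSA" --namespace "$NAMESPACE" \
  iam.gke.io/gcp-service-account="$GSA"
```

Pods running under the annotated Kubernetes service account then receive short-lived GCP credentials automatically, with no key files involved.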

BigQuery

The flagship GCP analytics service. Understand partitioning (by ingestion time, date/timestamp, or integer) and clustering to optimize cost and query performance. Know when to use BigQuery ML vs Vertex AI. Understand materialized views, authorized datasets, column-level security with data policies, and BigQuery Omni for multi-cloud analytics.
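
As a concrete sketch (dataset and column names are invented for illustration), a partitioned and clustered table can be created with standard SQL DDL through the `bq` CLI; the `run` wrapper echoes the command rather than executing it:

```shell
run() { echo "+ $*"; }   # dry-run wrapper: prints commands instead of executing

# Hypothetical dataset and schema for illustration.
SQL='CREATE TABLE telemetry.events (
  machine_id STRING,
  event_time TIMESTAMP,
  status     STRING
)
PARTITION BY DATE(event_time)  -- queries filtering on date scan fewer bytes
CLUSTER BY machine_id;         -- co-locates rows for the most common filter'

run bq query --use_legacy_sql=false "$SQL"
```

Partitioning bounds the bytes scanned (and therefore cost); clustering sorts data within each partition for further pruning.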

IAM and Resource Hierarchy

GCP's resource hierarchy (Organization > Folders > Projects > Resources) drives IAM policy inheritance. Know how to apply least privilege using predefined roles vs custom roles, when to use service accounts vs user accounts, how organization policies enforce controls across a resource hierarchy, and how IAM Conditions enable attribute-based access.
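
A minimal sketch of an IAM Condition, with a hypothetical project and member; the `run` wrapper echoes the command instead of executing it:

```shell
run() { echo "+ $*"; }   # dry-run wrapper: prints commands instead of executing

PROJECT="dress4win-prod"   # hypothetical project ID

# Grant a role that expires automatically: attribute-based access
# via an IAM Condition on request.time.
run gcloud projects add-iam-policy-binding "$PROJECT" \
  --member "user:contractor@example.com" \
  --role roles/storage.objectViewer \
  --condition 'expression=request.time < timestamp("2026-07-01T00:00:00Z"),title=temp-access'
```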

Networking: VPC, Cloud Load Balancing, and Interconnect

GCP VPCs are global by default (unlike AWS where VPCs are regional). Understand Shared VPCs for multi-project architectures, VPC Peering, Private Google Access, Cloud NAT, and the different load balancer types (External Application Load Balancer, Internal TCP/UDP Load Balancer, etc.). Cloud Interconnect (Dedicated vs Partner) and Cloud VPN with dynamic routing via Cloud Router are essential hybrid connectivity options.
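
Cloud NAT is a common exam answer to "instances without external IPs need outbound internet access." A sketch with hypothetical network and region names (the `run` wrapper echoes rather than executes):

```shell
run() { echo "+ $*"; }   # dry-run wrapper: prints commands instead of executing

REGION="us-central1"   # hypothetical region; VPC name below is also made up

# Cloud NAT requires a Cloud Router -- the same component that runs
# BGP for Cloud VPN and Cloud Interconnect.
run gcloud compute routers create nat-router \
  --network my-vpc --region "$REGION"

# NAT every subnet in the region with auto-allocated external IPs.
run gcloud compute routers nats create outbound-nat \
  --router nat-router --region "$REGION" \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```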

Storage Selection

Knowing which storage class to recommend is a core architect skill on this exam:

  • Cloud Storage (Standard): frequently accessed objects, media, backups. No minimum storage duration.
  • Cloud Storage (Coldline): data accessed roughly once a quarter or less. 90-day minimum storage; retrieval cost. (For once-a-year DR archives, the Archive class with its 365-day minimum is the better fit.)
  • Cloud SQL: managed relational DB (MySQL, PostgreSQL, SQL Server). Regional; use read replicas for read scaling.
  • Cloud Spanner: globally distributed relational DB with strong consistency. Expensive; use only when global scale plus ACID is needed.
  • Cloud Bigtable: IoT data, time-series, HBase-compatible workloads. NoSQL wide-column; single-digit-millisecond latency at TB+ scale.
  • Firestore: document DB for mobile/web apps with real-time sync. Serverless; pay-per-use pricing that scales down to zero.
  • BigQuery: analytical queries over large datasets (OLAP). Columnar storage; not for transactional workloads.

3 Realistic Sample Questions

Question 1

Mountkirk Games needs to store global player leaderboards that must be strongly consistent, support thousands of reads and writes per second from players in multiple continents, and remain available during regional outages. Which GCP database service should the architect recommend?

  • A. Cloud SQL with read replicas in multiple regions
  • B. Cloud Spanner with a multi-region configuration
  • C. Cloud Bigtable with a multi-cluster routing policy
  • D. Firestore in Native mode with multi-region replication

Correct Answer: B

Explanation: Cloud Spanner is the only GCP service that provides globally distributed, strongly consistent relational storage with horizontal scaling. For leaderboards requiring accurate rankings (strong consistency) that must survive regional failures (multi-region) at high throughput, Cloud Spanner in a multi-region configuration is the correct choice. Cloud SQL read replicas are asynchronous and do not provide global strong consistency. Cloud Bigtable's multi-cluster replication is eventually consistent, which is insufficient for accurate rankings. Firestore does offer strong consistency, including in multi-region configurations, but its sustained per-document write limits and lack of relational query features make it a poor fit for a high-throughput global leaderboard.
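
For reference, a multi-region Spanner instance differs from a regional one only by its instance configuration. A sketch with a hypothetical instance name (`nam-eur-asia1` is a real multi-continent configuration; the `run` wrapper echoes rather than executes):

```shell
run() { echo "+ $*"; }   # dry-run wrapper: prints commands instead of executing

CONFIG="nam-eur-asia1"   # real multi-region config spanning three continents

run gcloud spanner instances create leaderboard \
  --config "$CONFIG" \
  --description "Global player leaderboards" \
  --nodes 3
```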

Question 2

TerramEarth needs to grant a Dataflow pipeline running in a GCP project access to read from a Cloud Storage bucket in the same project. What is the MOST secure way to grant this access?

  • A. Create a service account key file, download it, and configure the Dataflow job to use it as an environment variable
  • B. Grant the Compute Engine default service account the Storage Object Viewer role on the bucket
  • C. Create a dedicated service account with the Storage Object Viewer role on the bucket and attach it to the Dataflow job
  • D. Make the Cloud Storage bucket publicly readable to avoid needing authentication

Correct Answer: C

Explanation: The recommended practice is to create a dedicated service account with only the minimum required permissions (Storage Object Viewer) and attach it specifically to the Dataflow job. This follows the principle of least privilege and avoids shared credentials. Downloading a service account key file (A) creates a long-lived credential that can be stolen — Google's own guidance actively discourages key file usage in favor of workload identity. Using the default Compute Engine service account (B) violates least privilege since that account is shared across many resources. Making the bucket publicly readable (D) would expose sensitive telemetry data to the internet, which is unacceptable for machine data.
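
The correct answer maps to three commands. All names below are hypothetical, and the `run` wrapper echoes each command instead of executing it:

```shell
run() { echo "+ $*"; }   # dry-run wrapper: prints commands instead of executing

PROJECT="terramearth-prod"           # hypothetical
SA="dataflow-reader"
SA_EMAIL="${SA}@${PROJECT}.iam.gserviceaccount.com"
BUCKET="gs://terramearth-telemetry"  # hypothetical

# 1. Dedicated service account -- nothing shared, no key file.
run gcloud iam service-accounts create "$SA" --project "$PROJECT"

# 2. Least privilege: read-only, and only on this bucket.
run gcloud storage buckets add-iam-policy-binding "$BUCKET" \
  --member "serviceAccount:${SA_EMAIL}" \
  --role roles/storage.objectViewer

# 3. Attach the account to the Dataflow job at launch.
run gcloud dataflow jobs run read-telemetry \
  --gcs-location gs://dataflow-templates/latest/GCS_Text_to_BigQuery \
  --region us-central1 \
  --service-account-email "$SA_EMAIL"
```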

Question 3

A company's development team wants to deploy containerized microservices on GCP with minimal operational overhead. The services experience highly variable traffic, with some services receiving zero requests for hours at a time. The company wants to pay only for actual request processing time. Which GCP service is MOST appropriate?

  • A. Google Kubernetes Engine (GKE) Standard with Horizontal Pod Autoscaler
  • B. Compute Engine Managed Instance Groups with autoscaling
  • C. Cloud Run
  • D. GKE Autopilot with cluster autoscaler

Correct Answer: C

Explanation: Cloud Run is the ideal service for containerized workloads with highly variable or spiky traffic because it scales to zero when there are no requests, charges only for the time requests are actively being processed, and requires zero cluster management. The requirement to "pay only for actual request processing time" is the direct signal for Cloud Run. GKE Standard (A) requires managing node pools and incurs costs for idle nodes. Compute Engine MIGs (B) have similar idle-cost issues and require more operational management. GKE Autopilot (D) reduces operational overhead significantly, but you are billed for the pod resources your workloads request plus a flat cluster management fee, and a typical Deployment keeps at least one replica running, so you still pay during idle periods rather than only for request processing time.
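
Scale-to-zero is the default; min-instances is shown explicitly below to make the contrast with Autopilot visible. Service and image names are hypothetical, and the `run` wrapper echoes rather than executes:

```shell
run() { echo "+ $*"; }   # dry-run wrapper: prints commands instead of executing

SERVICE="orders-service"   # hypothetical service and image names

run gcloud run deploy "$SERVICE" \
  --image us-docker.pkg.dev/example-project/repo/orders:latest \
  --region us-central1 \
  --min-instances 0 \
  --max-instances 50 \
  --allow-unauthenticated
```

Setting --min-instances above 0 trades idle cost for fewer cold starts, a trade-off the exam likes to probe.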

ACE First or Direct PCA?

A common debate among candidates is whether to take the Associate Cloud Engineer (ACE) exam before attempting the PCA, or go directly for the Professional cert. Here is a practical framework for deciding:

Take ACE First If:

  • You are new to GCP and have fewer than 6 months of hands-on experience
  • You are not yet comfortable with gcloud CLI commands and basic GCP console navigation
  • You want a stepping-stone credential while building your confidence
  • Your employer will reimburse both certifications

Go Direct to PCA If:

  • You already have 1+ year of GCP hands-on experience
  • You hold AWS or Azure professional-level certifications (the conceptual transfer is significant)
  • You are studying full-time and want to earn the higher-value credential sooner
  • The PCA is a specific job requirement or resume priority

The ACE exam covers a subset of PCA content, so studying for the PCA covers everything in the ACE blueprint and more. Candidates who prepare for the PCA and discover they are not yet ready can sit the ACE as a confidence builder without starting over.

6-Week Study Plan

Week 1: GCP Foundations and Case Study Reading
  • Set up a GCP free tier account and complete a GCP fundamentals quest on Google Cloud Skills Boost
  • Read all four case studies on the official exam guide page — take detailed notes on requirements
  • Study IAM, resource hierarchy, organization policies, and billing structure
  • Complete the "Google Cloud Fundamentals: Core Infrastructure" course on Coursera
Week 2: Compute and Containers
  • Deep dive into GKE: deploy a multi-tier application on a Standard and Autopilot cluster
  • Study Cloud Run, Cloud Functions, and App Engine — know when to choose each
  • Practice Compute Engine: instance templates, managed instance groups, autoscaling policies
  • Map compute services to case study companies (which company uses which service and why)
Week 3: Storage, Databases, and Data Services
  • Master the storage selection framework: Cloud Storage tiers, Cloud SQL, Spanner, Bigtable, Firestore, BigQuery
  • Study Pub/Sub, Dataflow, Dataproc, and Cloud Composer for data pipeline patterns
  • Complete a hands-on BigQuery lab: partition, cluster, and query a large public dataset
  • Build architecture diagrams for TerramEarth and Mountkirk Games data layers
Week 4: Networking and Security
  • VPC design: Shared VPC, VPC peering, Private Google Access, Cloud NAT
  • Load balancing types and their use cases: Application (HTTP/S), proxy Network (TCP/SSL), passthrough Network, and the internal variants of each
  • Hybrid connectivity: Cloud Interconnect vs Cloud VPN with BGP routing
  • Security: VPC Service Controls, Cloud Armor, Cloud KMS, Secret Manager, Security Command Center
Week 5: AI/ML, Monitoring, and SRE Practices
  • Vertex AI: AutoML, custom training, model deployment, Vertex AI Pipelines
  • Cloud Monitoring, Cloud Logging, Error Reporting, Cloud Trace, Cloud Profiler
  • SRE concepts: SLI, SLO, SLA, error budgets — know how to design and monitor them
  • Take a full-length 60-question practice exam and review every wrong answer
Week 6: Case Study Deep Dive and Final Practice
  • For each of the four case studies, write out a complete architecture with service selection rationale
  • Practice answering case study questions under timed conditions (2 minutes per question)
  • Take a second full-length practice exam — target 75%+ before scheduling the real exam
  • Light review of flagged topics the day before; no new material 24 hours before the exam
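
The error-budget arithmetic from Week 5 is worth being able to do on paper. For a 99.9% availability SLO over a 30-day window:

```shell
# Error budget = (1 - SLO) x window length.
SLO=0.999
BUDGET_MIN=$(awk -v slo="$SLO" 'BEGIN { printf "%.1f", (1 - slo) * 30 * 24 * 60 }')
echo "Allowed downtime per 30 days: ${BUDGET_MIN} minutes"   # 43.2 minutes
```

A 99.99% SLO shrinks the budget tenfold, to about 4.3 minutes, which is why each additional nine costs disproportionately more engineering effort.
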
💡 Pro Tip: Google Cloud Skills Boost (formerly Qwiklabs) offers a "Professional Cloud Architect" learning path with hands-on labs that directly map to exam topics. The labs use real GCP environments — you are not working in a simulator. Completing these labs before the exam significantly increases your ability to answer "how would you implement this" questions, which appear frequently on the PCA.

Ready to Practice?

Test your knowledge with our full Google Cloud Professional Cloud Architect practice exam — 340 scenario-based questions, detailed explanations, and no login required to get started.

Browse Practice Exams →
