```mdx
## Quick Summary
The Terraform Google Provider moves from v7.17.0 to v7.18.0 with 16 changes spanning one breaking change, ten new features, and five behavior fixes. The single breaking change affects google_backup_dr_backup_plan_association, where a top-level field that never held data has been removed and its functional equivalent relocated under rules_config_info. On the feature side, three entirely new resources land — google_dataplex_data_asset, google_firebase_ai_logic_prompt_template_lock, and google_logging_saved_query — alongside meaningful additions to AlloyDB, Cloud SQL, Firestore, and GKE On-Prem. Several persistent plan-noise bugs in google_container_cluster, google_cloud_run_v2_worker_pool, and google_container_node_pool are also resolved, which should reduce false diffs in existing pipelines.
---
## Changes by Severity
### 🔴 Immediate Action Required
#### google_backup_dr_backup_plan_association — Field Removal
The top-level attribute last_successful_backup_consistency_time has been removed from google_backup_dr_backup_plan_association. Because this was an output-only field that the API never populated, no runtime data is lost — but any Terraform configuration or downstream tooling that references this attribute by name will produce a plan error after upgrading. The equivalent data is now surfaced at rules_config_info.last_successful_backup_consistency_time, which is the correct nesting level per the upstream API schema.
---
### 🟡 Plan Ahead
#### google_compute_service_attachment — show_nat_ips and nat_ips Temporarily Non-Functional
Due to an underlying API problem, the show_nat_ips and nat_ips fields on google_compute_service_attachment are now ignored by the provider. If your configuration relies on these fields to read or set NAT IP information, those values will not be applied or returned until the API issue is resolved and the provider re-enables them. Avoid building automation that depends on these fields for now.
#### google_compute_service_attachment — target_service Supports In-Place Updates
Previously, changing target_service on an existing google_compute_service_attachment would require resource replacement. The field now supports update-in-place, which reduces disruption for service attachment modifications. Review any lifecycle rules (e.g., create_before_destroy) you may have added as workarounds — they may no longer be necessary.
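For illustration, a minimal sketch of such a configuration is shown below; the referenced subnetwork and forwarding rule are assumed to exist elsewhere in your configuration, and the commented-out lifecycle block is the kind of workaround that may now be removable:
```hcl
resource "google_compute_service_attachment" "example" {
  name                  = "psc-attachment"
  region                = "us-central1"
  connection_preference = "ACCEPT_AUTOMATIC"
  enable_proxy_protocol = false
  nat_subnets           = [google_compute_subnetwork.psc.id]        # assumed to exist elsewhere
  target_service        = google_compute_forwarding_rule.internal.id # changing this now updates in place

  # lifecycle {
  #   create_before_destroy = true  # replacement workaround, likely no longer needed
  # }
}
```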
#### google_sql_user — New database_role and iam_email Fields
Two new fields, database_role and iam_email, have been added to google_sql_user to support Cloud SQL users backed by IAM and database-level roles. If you manage Cloud SQL IAM users outside Terraform today, importing those users and setting these fields will bring them under Terraform management. Plan the import carefully to avoid unintended user recreation.
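A minimal sketch, assuming the new fields accept an IAM principal email and a database-level role name; verify the exact accepted values against the resource documentation before use:
```hcl
resource "google_sql_user" "iam_analyst" {
  instance = google_sql_database_instance.main.name # assumed to exist elsewhere
  name     = "analyst@example.com"
  type     = "CLOUD_IAM_USER"

  # New in v7.18.0; field names come from the changelog, values are illustrative
  iam_email     = "analyst@example.com"
  database_role = "pg_read_all_data"
}
```
Import existing IAM-backed users into state before applying so Terraform does not attempt to recreate them.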
#### google_alloydb_cluster — Backup DR Restore and Source Fields
Three new fields — restore_backupdr_backup_source, restore_backupdr_pitr_source, and backupdr_backup_source — have been added to google_alloydb_cluster. These enable Backup and DR-integrated restore workflows directly from Terraform. Teams using AlloyDB with Backup DR should evaluate whether existing restore runbooks can be migrated to use these fields for consistency.
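A sketch only: the restore_backupdr_backup_source block name comes from the changelog, but the nested backup attribute and the Backup DR backup path format shown here are assumptions to verify against the provider documentation:
```hcl
resource "google_alloydb_cluster" "restored" {
  cluster_id = "restored-cluster"
  location   = "us-central1"

  network_config {
    network = google_compute_network.default.id # assumed to exist elsewhere
  }

  # New in v7.18.0; the block name is from the changelog, the nested "backup"
  # attribute and resource path format are assumptions, not confirmed schema.
  restore_backupdr_backup_source {
    backup = "projects/my-project/locations/us-central1/backupVaults/my-vault/dataSources/my-source/backups/my-backup"
  }
}
```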
---
### 🟢 Informational
#### New Resources
Three new resources are now available:
- google_dataplex_data_asset — Allows Terraform management of Dataplex data assets, which represent data stored in Cloud Storage or BigQuery and registered within a Dataplex lake.
- google_firebase_ai_logic_prompt_template_lock — Provides management of prompt template locks in Firebase AI Logic, useful for controlling template mutability in production environments.
- google_logging_saved_query — Enables declarative management of saved queries in Cloud Logging, supporting consistent query sharing across teams.

#### google_data_fusion_instance — patch_revision Field
A new patch_revision field has been added to google_data_fusion_instance. This allows you to pin or track the patch revision of a Data Fusion instance, which can be relevant for environments with strict change-control requirements.
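For example, a pinned instance might look like the sketch below; the patch_revision value format is illustrative only and should be checked against the resource documentation:
```hcl
resource "google_data_fusion_instance" "pinned" {
  name    = "cdf-prod"
  region  = "us-central1"
  type    = "ENTERPRISE"
  version = "6.10.1"

  # New in v7.18.0; the value format shown here is illustrative only
  patch_revision = "6.10.1.1"
}
```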
#### google_firestore_index — skip_wait Field
The skip_wait field on google_firestore_index lets Terraform return immediately after submitting an index creation request, rather than polling until the index is fully built. Firestore index builds can take several minutes; this option is useful in CI pipelines where downstream steps do not depend on index readiness.
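A minimal sketch, assuming skip_wait is a plain boolean as the changelog entry suggests:
```hcl
resource "google_firestore_index" "orders_by_customer" {
  collection = "orders"

  fields {
    field_path = "customer_id"
    order      = "ASCENDING"
  }

  fields {
    field_path = "created_at"
    order      = "DESCENDING"
  }

  # New in v7.18.0: return once the index build has been requested rather than
  # waiting for it to finish building (assumed boolean per the changelog).
  skip_wait = true
}
```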
#### google_gkeonprem_vmware_cluster — skip_validations Field
Adding skip_validations to google_gkeonprem_vmware_cluster allows the provider to bypass certain pre-flight validation checks during cluster operations. This is intended for advanced scenarios where validations are known to fail spuriously — use with care in production.
#### Behavior Fixes
- google_cloudbuild_trigger — Manual triggers can now be created without a source configuration block, which was previously rejected incorrectly.
- google_cloud_run_v2_worker_pool — A permadiff on scaling.scaling_mode has been resolved; plans that previously showed spurious changes to this field should now be stable.
- google_container_node_pool — A bug that blocked node pool creation when blue_green_settings was specified has been corrected.
- google_container_cluster — A permadiff triggered by setting resource_limits when node autoprovisioning is disabled has been fixed, eliminating unnecessary plan noise.

---
## Migration Playbook
### Addressing the `last_successful_backup_consistency_time` Breaking Change
1. Search your codebase for any reference to last_successful_backup_consistency_time at the top level of google_backup_dr_backup_plan_association resources or data sources:
```bash
grep -r "last_successful_backup_consistency_time" .
```
2. Update output references in your Terraform code. If you were reading this attribute via an output or local, change the path from the top-level field to the nested location:
```hcl
# Before
output "backup_consistency_time" {
value = google_backup_dr_backup_plan_association.example.last_successful_backup_consistency_time
}
# After
output "backup_consistency_time" {
value = google_backup_dr_backup_plan_association.example.rules_config_info[0].last_successful_backup_consistency_time
}
```
3. Check downstream consumers — CI scripts, monitoring dashboards, or other Terraform workspaces that use terraform output or remote state data sources referencing this field should be updated before the provider upgrade is applied.
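For instance, a downstream workspace reading the output via terraform_remote_state keeps the same expression, since only the upstream output definition changes (the bucket and prefix below are illustrative):
```hcl
data "terraform_remote_state" "backup_dr" {
  backend = "gcs"
  config = {
    bucket = "my-terraform-state" # illustrative
    prefix = "backup-dr"          # illustrative
  }
}

locals {
  # Unchanged on the consumer side; the upstream output now sources its value
  # from rules_config_info instead of the removed top-level attribute.
  consistency_time = data.terraform_remote_state.backup_dr.outputs.backup_consistency_time
}
```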
4. Run terraform plan against a non-production workspace after updating references to confirm no remaining errors related to the removed field:
```bash
terraform plan -out=tfplan
```
5. Review state files if you have existing state entries for google_backup_dr_backup_plan_association. Because the field was always empty, no state migration is needed, but running a refresh-only apply can confirm the state reflects the new schema cleanly:
```bash
terraform apply -refresh-only
```
6. Upgrade the provider in your required_providers block once all references are updated:
```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 7.18.0"
    }
  }
}
```
7. Apply in production only after the plan output is clean in staging and all downstream consumers have been updated.
---
## Verification Checklist
- No references to google_backup_dr_backup_plan_association.*.last_successful_backup_consistency_time remain at the top level
- terraform plan produces no errors or unexpected replacements for google_backup_dr_backup_plan_association resources after upgrading
- google_container_cluster resources with resource_limits and disabled node autoprovisioning no longer show a permadiff in plan output
- google_container_node_pool resources using blue_green_settings apply successfully without errors
- google_cloud_run_v2_worker_pool resources with scaling.scaling_mode set show a stable plan (no spurious changes)
- Identify google_compute_service_attachment configurations that reference show_nat_ips or nat_ips and document that these fields are currently non-functional
- For google_cloudbuild_trigger manual triggers defined without a source block, run a plan to confirm creation succeeds without validation errors
- For google_sql_user resources adopting the new database_role or iam_email fields, verify that existing IAM-backed users are imported before applying to avoid unintended recreation
```