Nutanix is a heavy-hitter in the enterprise data center, but too often admins are stuck clicking through Prism or maintaining hand-crafted scripts to manage workloads. If you’ve already bought into infrastructure as code for your cloud resources, there’s no reason your on-prem Nutanix environment should be any different.
In this post, I’ll walk through setting up the Nutanix Terraform provider, looking up resources dynamically, and deploying VMs with code you can version and share. No hardcoded UUIDs, no plaintext passwords — just patterns you can actually use in production.
## Why Terraform for Nutanix?
- Consistency — The same workflow you use for Azure or AWS works for Nutanix. One language, one plan/apply cycle.
- Drift detection — `terraform plan` tells you when someone made manual changes in Prism. That alone is worth the setup.
- Git-driven change control — Every infrastructure change goes through a PR. You get a review, an audit trail, and the ability to roll back.
- Scalability — Deploy one VM or fifty with the same code. Add a new cluster and it’s a variable change, not a new workflow.
## Provider Setup

### Install Terraform
If you don’t have Terraform yet:
```bash
# macOS
brew install terraform

# Linux (requires the HashiCorp apt repository to be configured first)
sudo apt-get update && sudo apt-get install -y terraform

# Verify
terraform -version
```
### Configure the Provider
The Nutanix provider connects to Prism Central (recommended) or Prism Element. Here’s the setup with proper credential handling — no passwords in code:
**provider.tf**

```hcl
terraform {
  required_providers {
    nutanix = {
      source  = "nutanix/nutanix"
      version = "~> 2.0"
    }
  }
}

provider "nutanix" {
  username = var.nutanix_username
  password = var.nutanix_password
  endpoint = var.nutanix_endpoint
  insecure = true # Set to false if you have valid TLS certs
}
```
**variables.tf**

```hcl
variable "nutanix_username" {
  description = "Prism Central username"
  type        = string
  sensitive   = true
}

variable "nutanix_password" {
  description = "Prism Central password"
  type        = string
  sensitive   = true
}

variable "nutanix_endpoint" {
  description = "Prism Central IP or FQDN"
  type        = string
}
```
Set credentials via environment variables (never in .tfvars files that might get committed):
```bash
export TF_VAR_nutanix_username="admin"
export TF_VAR_nutanix_password="your-password-here"
export TF_VAR_nutanix_endpoint="10.10.10.10"

terraform init
```
A few notes:
- Use `~> 2.0` instead of pinning an exact version. This allows patch updates while preventing breaking changes.
- `insecure = true` skips TLS verification. Most lab and even production Prism deployments use self-signed certs. If you've deployed proper certs, set this to `false`.
- `sensitive = true` on variables prevents Terraform from showing credentials in plan output or logs.
## Looking Up Resources with Data Sources
This is the pattern that makes Nutanix Terraform actually usable. Instead of copy-pasting UUIDs from Prism, you look them up dynamically:
### Find Your Cluster

```hcl
data "nutanix_clusters_v2" "all" {}

data "nutanix_clusters_v2" "target" {
  filter = "name eq '${var.cluster_name}'"
}

output "cluster_id" {
  value = data.nutanix_clusters_v2.target.cluster_entities[0].ext_id
}
```
### Find a Subnet

```hcl
data "nutanix_subnets_v2" "vm_subnet" {
  filter = "name eq '${var.subnet_name}'"
}
```
### Find an Image

```hcl
data "nutanix_images_v2" "os_image" {
  filter = "name eq '${var.image_name}'"
}
```
Why this matters: UUIDs change between clusters. If you hardcode them, your Terraform only works on one cluster. With data sources, the same code works across dev, staging, and production — just change the variable values.
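As a sketch of how the same code can target different environments, you might keep one small `.tfvars` file per environment (the file name and values here are illustrative assumptions, not part of this project):

```hcl
# prod.tfvars -- hypothetical per-environment values
cluster_name = "cluster-prod-01"
subnet_name  = "Prod-VM-Network"
image_name   = "ubuntu-22.04-server"
```

Then `terraform plan -var-file=prod.tfvars` runs the identical configuration against production, while a `dev.tfvars` points it at your dev cluster.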
Add the lookup variables:
```hcl
variable "cluster_name" {
  description = "Target Nutanix cluster name"
  type        = string
  default     = "cluster-1"
}

variable "subnet_name" {
  description = "VM subnet name"
  type        = string
  default     = "VM-Network"
}

variable "image_name" {
  description = "OS image name in Prism"
  type        = string
  default     = "ubuntu-22.04-server"
}
```
## Deploying a Virtual Machine
Here’s a complete VM deployment that ties the data sources together:
**main.tf**

```hcl
resource "nutanix_virtual_machine_v2" "web_server" {
  name                 = var.vm_name
  description          = "Deployed via Terraform"
  num_sockets          = var.vm_cpu
  num_cores_per_socket = 1

  cluster {
    ext_id = data.nutanix_clusters_v2.target.cluster_entities[0].ext_id
  }

  disks {
    disk_address {
      bus_type = "SCSI"
      index    = 0
    }
    backing_info {
      vm_disk {
        reference {
          image_reference {
            image_ext_id = data.nutanix_images_v2.os_image.image_entities[0].ext_id
          }
        }
        disk_size_bytes = var.disk_size_gb * pow(1024, 3)
        storage_container {
          # NOTE: this block expects a storage container's ext_id, not the
          # cluster's -- look up the container UUID in Prism and substitute
          # it here.
          ext_id = data.nutanix_clusters_v2.target.cluster_entities[0].ext_id
        }
      }
    }
  }

  nics {
    network_info {
      subnet {
        ext_id = data.nutanix_subnets_v2.vm_subnet.subnet_entities[0].ext_id
      }
    }
  }

  boot_config {
    uefi_boot {
      boot_order = ["DISK", "NETWORK", "CDROM"]
    }
  }
}
```
**variables.tf** (add to existing):

```hcl
variable "vm_name" {
  description = "Name for the VM"
  type        = string
  default     = "tf-web-01"
}

variable "vm_cpu" {
  description = "Number of vCPU sockets"
  type        = number
  default     = 2
}

variable "disk_size_gb" {
  description = "Boot disk size in GB"
  type        = number
  default     = 40
}
```
**outputs.tf**

```hcl
output "vm_id" {
  description = "The ext_id of the created VM"
  value       = nutanix_virtual_machine_v2.web_server.ext_id
}
```
### Deploy It

```bash
# Always plan first
terraform plan

# Review the output, then apply
terraform apply
```
The plan output shows exactly what will be created — VM name, CPU, disk size, and which cluster and subnet it lands on. Review it before typing yes.
## Uploading an Image
If your OS image isn’t in Prism yet, Terraform can upload it:
```hcl
resource "nutanix_images_v2" "ubuntu" {
  name        = "ubuntu-22.04-server"
  description = "Ubuntu 22.04 LTS server image"
  type        = "DISK_IMAGE"

  source {
    url_source {
      url = "https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img"
    }
  }
}
```
A heads-up: image uploads can take a while depending on the image size and network speed. Terraform will wait for the upload to complete, so don’t panic if terraform apply sits on this resource for a few minutes.
## Project Structure
For anything beyond a quick test, organize your files:
```
nutanix-terraform/
├── provider.tf        # Provider config and version constraints
├── variables.tf       # All variable declarations
├── terraform.tfvars   # Non-sensitive variable values (committed)
├── main.tf            # VM and resource definitions
├── data.tf            # Data source lookups
├── outputs.tf         # Output values
└── README.md
```
Keep terraform.tfvars for non-sensitive values (cluster names, VM specs) and use environment variables for credentials. Add *.tfstate and *.tfstate.backup to your .gitignore — state files contain sensitive information.
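A minimal `.gitignore` for this layout might look like the following (a sketch — adjust to your repo's conventions):

```
# Terraform state contains credentials and resource details
*.tfstate
*.tfstate.backup

# Local provider binaries and module cache
.terraform/
```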
## Common Pitfalls

| Problem | Cause | Fix |
|---|---|---|
| `401 Unauthorized` | Wrong credentials or user lacks permissions | Verify creds; use a Prism Central admin or a service account with the right role |
| Filter returned 0 entities | Data source name doesn't match exactly | Check Prism for the exact name — filters are case-sensitive |
| Provider version conflicts | Pinned to an exact version that doesn't support your PC version | Use the `~> 2.0` version constraint |
| Slow `terraform plan` | Provider queries Prism for every data source on every plan | Normal for large clusters; consider `-target` for focused operations |
| State drift after Prism UI changes | Someone modified the VM manually | Run `terraform plan` regularly to detect drift; consider `terraform import` for existing resources |
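For that last case, bringing a hand-built VM under Terraform management is a one-liner — as a sketch, with a placeholder where the VM's UUID from Prism Central goes:

```bash
# Import an existing VM into state under the web_server resource address.
# Replace <vm-ext-id> with the VM's UUID from Prism Central.
terraform import nutanix_virtual_machine_v2.web_server <vm-ext-id>
```

After importing, run `terraform plan` and reconcile any differences between your code and the real VM before applying.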
## What’s Next
This covers the foundation — authenticated provider setup, dynamic resource lookups, and VM deployment. From here, you can:
- Modularize — Wrap the VM deployment in a module so your team can deploy VMs with `module "web_server" { source = "./modules/vm" ... }`
- Add cloud-init — Pass `guest_customization` to configure the VM on first boot (hostname, SSH keys, packages)
- Manage at scale — Use `for_each` to deploy multiple VMs from a map of names and specs
- Integrate with Ansible — Use Terraform to provision the VM, then Ansible to configure it
We’ll go deeper on the Nutanix provider in a future post covering advanced patterns — categories, protection domains, and multi-cluster deployments.
Happy automating!