Azure DevOps and Terraform Integration
Easy Guide to Integrating Azure DevOps with Terraform for Efficient IaC
Integrating Terraform with Azure DevOps allows organizations to harness the power of Infrastructure as Code (IaC) for streamlined, automated deployments in the cloud. By leveraging Terraform's capabilities within Azure DevOps pipelines, teams can manage infrastructure efficiently, reduce manual errors, and maintain consistent environments across development, staging, and production. This synergy between Terraform and Azure DevOps enables seamless provisioning, management, and scaling of resources, ensuring that infrastructure changes are deployed with the same rigor and reliability as application code, ultimately driving operational excellence and innovation.
In this article, we will focus on integrating Terraform workflows with the Azure DevOps pipeline to enhance infrastructure automation. If you want to learn Terraform from the basics, please refer to my blog on Terraform fundamentals.
The Terraform workflow typically follows a series of steps designed to manage infrastructure as code effectively. Here's an overview of the key stages:
Write:
In this initial stage, you define your infrastructure using HashiCorp Configuration Language (HCL) in Terraform files. These files describe the resources and configurations needed for your cloud environment, including virtual machines, networks, storage, and more. The infrastructure code is stored in version control systems like Git to ensure collaboration and tracking of changes.

Initialize (terraform init):
Before applying your configurations, you need to initialize your Terraform working directory. This step downloads the necessary provider plugins (e.g., for Azure or AWS) and sets up the environment for Terraform to run. This is typically the first command executed in a Terraform workflow.

Plan (terraform plan):
The terraform plan command generates an execution plan, detailing the actions Terraform will take to reach the desired state of your infrastructure. It shows which resources will be created, modified, or destroyed without making any actual changes. This step is crucial for reviewing and validating the changes before applying them.

Apply (terraform apply):
After reviewing the plan, the terraform apply command is used to execute the changes. Terraform interacts with the cloud provider's API to create, update, or delete resources as defined in your configuration files. The applied changes bring your infrastructure to the desired state.

Manage and Evolve:
Once the infrastructure is deployed, you can continue to manage and evolve it by modifying the Terraform configurations. Changes are tracked through version control, and the workflow cycles through planning and applying updates. Terraform maintains a state file that records the current state of your infrastructure, enabling it to track changes and ensure consistency.

Destroy (terraform destroy):
When resources are no longer needed, the terraform destroy command can be used to tear down the entire infrastructure or specific resources. This helps clean up and manage costs by removing unused resources.
The Terraform workflow can be automated using continuous integration/continuous deployment (CI/CD) pipelines in platforms like Azure DevOps. This automation ensures that infrastructure changes are consistently and reliably deployed, reducing the potential for human error.
Azure CLI Local Setup
Terraform supports several methods of authenticating to Azure, including:
Authenticating to Azure using a Service Principal and a Client Certificate
Authenticating to Azure using a Service Principal and a Client Secret
Our demo will authenticate to Azure using a Service Principal and a Client Secret. You can find more information about the authentication process in the azurerm Terraform documentation.
Integrating Terraform with Azure CLI is crucial because it simplifies and secures the process of managing Azure resources. By using Azure CLI for authentication, Terraform can seamlessly interact with your Azure environment without needing to manage separate credentials, reducing the risk of exposure. This integration enables Terraform to leverage existing Azure CLI configurations, such as active subscriptions and managed identities, streamlining the deployment process and ensuring consistent, secure access to Azure resources.
1. Install Azure CLI
Ensure that Azure CLI is installed on your machine. You can install it by following Azure CLI Installation Guide.
Verify the installation by running:
az --version
2. Install Terraform
Install Terraform on your machine. You can download it from the official Terraform website.
Verify the installation by running:
terraform --version
3. Authenticate Azure CLI
Log in to Azure using Azure CLI:
az login
After executing the command, a new browser window will open, directing you to the Azure sign-in page. On the sign-in page, select the Azure account you want to use. If you have multiple accounts, you can choose the appropriate one.
Once you are logged in successfully, the account details will be displayed in your local CLI as shown below.
If you work with multiple subscriptions, set the active subscription:
az account set --subscription "your-subscription-id"
4. Create Service Principal
We can now create the Service Principal, which will have permission to manage resources in the specified subscription, using the following command:
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/0000000-9342-4b5d-bf6b-5456d8fa879d"
The output will contain the following four values:
{
"appId": "0000000-20be-41c8-bad0-60299ed476ae",
"displayName": "azure-cli-2024-08-18-10-13-02",
"password": "XXXXXXXXXXXXX",
"tenant": "bbb-xxxx-zzzzz"
}
These values map to the Terraform variables as follows:
- appId is the client_id defined above.
- password is the client_secret defined above.
- tenant is the tenant_id defined above.
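As an illustration, these values could be wired into the azurerm provider through input variables — a sketch with variable names of our own choosing (the article itself favors environment variables or Azure Key Vault for secrets):

```hcl
# Illustrative input variables for the Service Principal credentials.
variable "client_id" {
  type = string
}
variable "client_secret" {
  type      = string
  sensitive = true
}
variable "tenant_id" {
  type = string
}
variable "subscription_id" {
  type = string
}

provider "azurerm" {
  features {}

  # appId -> client_id, password -> client_secret, tenant -> tenant_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id
  subscription_id = var.subscription_id
}
```

Passing the secret via a variable keeps it out of the configuration file itself, though environment variables (shown later) avoid it appearing in plan output and shell history.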
5. Log in using the Service Principal
Now that the service principal has the Contributor role assigned, we need to log in again using it:
az login --service-principal -u CLIENT_ID -p CLIENT_SECRET --tenant TENANT_ID
Once you execute the above command with the appropriate values, you will see output like the below, confirming you are logged in via the service principal.
With the service principal credentials, Terraform can now communicate with Azure to create, modify, and destroy infrastructure as defined in your Terraform scripts.
6. Configuring the Service Principal in Terraform
Now that we've obtained the credentials for this Service Principal, it's possible to configure them in a few different ways.
When storing the credentials as Environment Variables, for example:
# sh
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="12345678-0000-0000-0000-000000000000"
export ARM_TENANT_ID="10000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="20000000-0000-0000-0000-000000000000"
These values can also be hard-coded under the provider block in the Terraform manifest. However, it is not recommended to hard-code secret credentials as plain text. Instead, we can use environment variables as mentioned above or store them in Azure Key Vault.
Outcome
From now on, whenever you run terraform apply or terraform destroy, Terraform will authenticate with Azure using this service principal. This setup ensures secure and automated infrastructure management, aligned with best practices for cloud authentication.
Create a Terraform Configuration
provider.tf
Start by defining your infrastructure in a .tf file:
# We strongly recommend using the required_providers block to set the
# Azure Provider source and version being used
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  # You can hard-code azurerm credentials such as tenant_id, subscription_id,
  # client_id, and client_secret here; however, it is not recommended to
  # hard-code secret credentials as plain text. Use environment variables
  # or store them in Azure Key Vault instead.
  features {}
}
Initialize Terraform:
terraform init
Apply the configuration:
terraform apply
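To make the run do something visible, you could append a minimal resource to the configuration — a sketch with illustrative names, not part of the article's repository:

```hcl
# Illustrative example: a single resource group.
# The name and location are ours, not mandated by the article.
resource "azurerm_resource_group" "demo" {
  name     = "rg-terraform-demo"
  location = "East US"
}
```

Running terraform plan against this should show one resource to add; terraform apply then creates it in the active subscription.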
Best Practices
Use Managed Identity: If running Terraform from within Azure (e.g., Azure DevOps), consider using a Managed Identity to handle authentication automatically without needing service principals.
State Management: Use remote state management (e.g., Azure Storage) to securely store your Terraform state files.
Environment Variables: Terraform can also use Azure credentials stored in environment variables (
ARM_CLIENT_ID
,ARM_CLIENT_SECRET
,ARM_TENANT_ID
,ARM_SUBSCRIPTION_ID
), which are useful in CI/CD pipelines.
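For the remote state recommendation above, a backend block along these lines keeps the state file in Azure Storage. The resource names here mirror the backend settings used in the build pipeline later in this article:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "storage-rg"          # resource group holding the storage account
    storage_account_name = "storetfaccritesh"    # storage account for state files
    container_name       = "statefilestore"      # blob container
    key                  = "prod.terraform.tfstate"
  }
}
```

With this in place, terraform init configures the remote backend, so every run — local or in a pipeline — reads and writes the same shared state.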
NB: You can access the terraform manifests created as part of this article in the below repository
Click here to access Terraform Manifests
Azure DevOps Integration
So far, all steps in our infrastructure provisioning process have been performed locally using the Azure CLI and a local Terraform installation. While this approach is effective for initial testing and development, fully automating the process is crucial for consistent, repeatable, and scalable infrastructure deployments.
To achieve full automation, we will leverage Azure DevOps pipelines to handle all Terraform operations, including initialization, planning, application, and destruction of infrastructure. This approach ensures that infrastructure provisioning is integrated into our CI/CD processes, providing version control, automated testing, and streamlined deployment.
Project Creation and Setup
Integrating GitHub with Azure DevOps for Pipeline Automation
To streamline our infrastructure provisioning process, we’ve created a new Azure DevOps project. In this project, we will integrate our Terraform code hosted on GitHub and proceed with creating an automated pipeline.
Steps to Integrate GitHub and Set Up the Pipeline:
Access Project Settings:
Navigate to the newly created Azure DevOps project.
In the project, click on Project settings located in the bottom-left corner of the Azure DevOps interface.
Configure GitHub Connection:
Under Pipelines, select GitHub connections.
You should see your GitHub repository already listed since it's been added beforehand.
Connect to the GitHub Repository:
If the repository is not already connected, click Add Connection, then select your GitHub repository from the list.
Authenticate with GitHub if prompted, and grant Azure DevOps the necessary permissions to access your repository.
Set Up the Pipeline:
Now that your GitHub repository is connected, go back to the Pipelines section in Azure DevOps.
Click on Pipelines > New pipeline.
Select GitHub as the repository source.
Choose the appropriate repository where your Terraform code is stored.
Follow the prompts to set up your pipeline, starting with a basic YAML pipeline or importing an existing one.
Configure the Pipeline for Terraform Operations:
Install the Terraform Extension
In the Organization settings, click on Extensions and then browse the marketplace. Search for Terraform and install the two extensions below, following the prompts:
Once installed, you will see the assistance below while writing the pipeline code.
In the pipeline configuration, define the steps for Terraform initialization, planning, and applying.
Ensure the pipeline includes the correct service connections and variables required for Terraform to authenticate with Azure and manage resources.
Pipeline Creation
Build Pipeline
In our Terraform build pipeline, the primary objective is to automate the essential steps of infrastructure provisioning, ensuring consistency, compliance, and repeatability. The pipeline is designed to carry out key Terraform operations, leading up to the creation and storage of the tfstate file as an artifact. This artifact will then be utilized in the release pipeline for further stages of infrastructure deployment.
Pipeline Stages
Terraform Initialization (terraform init):
The first stage in our pipeline is initializing Terraform. This step configures the backend and prepares the environment for Terraform operations. It ensures that Terraform has the necessary plugins and access to the remote state file.

trigger:
- main

pool:
  name: Default

stages:
- stage: Terraform
  jobs:
  - job: Build
    steps:
    - task: TerraformTaskV4@4
      displayName: Terraform Init
      inputs:
        provider: 'azurerm'
        command: 'init'
        backendServiceArm: 'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'
        backendAzureRmResourceGroupName: 'storage-rg'
        backendAzureRmStorageAccountName: 'storetfaccritesh'
        backendAzureRmContainerName: 'statefilestore'
        backendAzureRmKey: 'prod.terraform.tfstate'
Terraform Validation (terraform validate):
The pipeline then validates the Terraform configuration files to ensure they are syntactically correct and consistent with the defined standards. This step is crucial to catch any errors before they propagate further in the pipeline.

- task: TerraformTaskV4@4
  displayName: Terraform Validate
  inputs:
    provider: 'azurerm'
    command: 'validate'
Terraform Formatting (terraform fmt):
Formatting is an important practice to maintain a consistent code style across the team. This step automatically formats the Terraform configuration files according to the standard convention, improving readability and collaboration.

- task: TerraformTaskV4@4
  displayName: Terraform Format
  inputs:
    provider: 'azurerm'
    command: 'custom'
    outputTo: 'console'
    customCommand: 'fmt'
    environmentServiceNameAzureRM: 'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'
Terraform Plan (terraform plan):
This stage generates an execution plan, outlining the changes Terraform will make to the infrastructure. The plan is saved to a file, providing a preview of the modifications before any resources are applied.

- task: TerraformTaskV4@4
  displayName: Terraform Plan
  inputs:
    commandOptions: '-out $(Build.SourcesDirectory)/tfplanfile' # saves the plan to a file named tfplanfile
    provider: 'azurerm'
    command: 'plan'
    environmentServiceNameAzureRM: 'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'
Archiving the tfstate File:
Once the Terraform plan is created, the pipeline archives the Terraform state (tfstate) file, which contains the latest state of the infrastructure. This file is crucial for managing the lifecycle of the resources and is stored as an artifact.

- task: Bash@3
  displayName: Install zip utility
  inputs:
    targetType: 'inline'
    script: 'sudo apt-get update && sudo apt-get install -y zip'
- task: ArchiveFiles@2
  displayName: Archive Files
  inputs:
    rootFolderOrFile: '$(Build.SourcesDirectory)'
    includeRootFolder: true
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    replaceExistingArchive: true
Publishing the tfstate Artifact:
Finally, the archived tfstate file is published as an artifact, making it available for the release pipeline. This ensures that the release pipeline has access to the accurate and up-to-date state of the infrastructure for further deployment stages.

- task: PublishBuildArtifacts@1
  displayName: Publish Artifact
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: '$(Build.BuildId)-build'
    publishLocation: 'Container'
End Goal
The build pipeline culminates in the creation and preservation of the tfstate file, a critical component for managing infrastructure as code (IaC) in Terraform. By archiving and publishing this state file as an artifact, we enable a seamless transition to the release pipeline, where the actual deployment and management of cloud resources will take place.
This approach not only ensures a structured and automated workflow for infrastructure provisioning but also enhances collaboration and reliability by maintaining the integrity of the Terraform state throughout the CI/CD process.
Release Pipeline
The release pipeline is designed to take the output generated from the build pipeline—specifically, the Terraform plan encapsulated within the tfstate file—and proceed to the deployment phase. This process ensures that the infrastructure changes are thoroughly reviewed and approved before being applied to the environment.
Pipeline Stages:
Fetch Artifact from Build Pipeline:
The release pipeline begins by retrieving the artifact generated in the build pipeline. This artifact contains the Terraform plan and the tfstate file, which captures the desired state of the infrastructure. In the screenshot below, you can see that the build has been configured along with a continuous release trigger.
Agent Configuration
This task acquires our self-hosted agent, on which the subsequent Terraform operations will be executed.
Unarchiving Build Artifact
At this stage, we are unarchiving the zipped artifacts downloaded from the build pipeline.
Terraform Init
The Terraform Init stage in the release pipeline is crucial for ensuring that Terraform is properly configured and ready to apply the infrastructure changes. This stage is necessary because the release pipeline may be executed on a different machine, container, or pod than the build pipeline. In such cases, Terraform needs to be initialized in the new environment before it can apply the generated artifact.

The rest of the configuration will be the same as the build pipeline.
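As a sketch, the release-side init task can simply mirror the build pipeline's backend settings (same service connection and storage names as in the build stage above), so both pipelines point at the same remote state:

```yaml
- task: TerraformTaskV4@4
  displayName: Terraform Init (release)
  inputs:
    provider: 'azurerm'
    command: 'init'
    backendServiceArm: 'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'
    backendAzureRmResourceGroupName: 'storage-rg'
    backendAzureRmStorageAccountName: 'storetfaccritesh'
    backendAzureRmContainerName: 'statefilestore'
    backendAzureRmKey: 'prod.terraform.tfstate'
```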
Terraform Apply:
After the artifact is fetched, the pipeline is set to run the terraform apply command using the tfstate file. This step will execute the planned changes, provisioning or updating the infrastructure according to the specifications defined in the Terraform code.

After the successful initialization of Terraform, the next crucial step in the release pipeline is to apply the infrastructure changes. Since the Terraform plan (tfstate file) has already been downloaded as an artifact from the build pipeline, we can proceed directly to the apply task. This task applies the changes defined in the plan to the target environment. To streamline the process, we include the --auto-approve flag in the apply command.
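A hedged sketch of such an apply task, reusing the service connection from the build stage; the exact inputs depend on how your release stage is configured:

```yaml
- task: TerraformTaskV4@4
  displayName: Terraform Apply
  inputs:
    provider: 'azurerm'
    command: 'apply'
    commandOptions: '-auto-approve'
    environmentServiceNameAzureRM: 'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'
```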
Post-Approval Execution:
Before applying the changes, the pipeline includes an approval gate. This requires a manual review and approval from the designated stakeholders, ensuring that the infrastructure changes are scrutinized and verified before execution. Once the approval is granted, the pipeline proceeds with the application of the Terraform plan.
Outcome:
The release pipeline ensures a controlled and automated deployment process. By fetching the artifact from the build pipeline and running terraform apply post-approval, we maintain a high level of governance and security over infrastructure changes. This setup allows for a smooth and reliable transition from planning to execution, adhering to best practices in continuous deployment and infrastructure as code (IaC).
Destroy Stage
After the infrastructure has been successfully deployed, there may be scenarios where we need to clean up the resources or recreate them from scratch. To facilitate this, we're incorporating a Terraform Destroy stage in the release pipeline. This stage ensures that any unnecessary or outdated infrastructure can be safely and efficiently decommissioned.
Purpose of the Destroy Stage:
Resource Cleanup: The destroy stage is crucial for cleaning up resources that are no longer needed, helping to minimize costs and maintain a clean environment.

Infrastructure Rebuild: In cases where you need to recreate the infrastructure, the destroy stage allows for a complete teardown before the infrastructure is rebuilt, ensuring no residual configurations or resources are left behind.
All the stages will remain the same as the Deployment stage, but the apply task will be replaced by Destroy in the Destroy Stage as shown below.
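A sketch of the swapped-in task — identical in shape to the apply task, with only the command changed:

```yaml
- task: TerraformTaskV4@4
  displayName: Terraform Destroy
  inputs:
    provider: 'azurerm'
    command: 'destroy'
    commandOptions: '-auto-approve'
    environmentServiceNameAzureRM: 'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'
```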
Pre-Destroy Approval
An approval stage has been added before destroying the infrastructure so that the approver can review which resources will be impacted by the destruction.
End-to-End Execution
I have triggered an end-to-end run that will build the latest artifact, publish the artifact to the release pipeline, and then apply the infrastructure based on the generated plan artifact. Once approved, it can also destroy the same.
- The plan has been completed after triggering.
- Post completion, a release has been triggered and is waiting for approval
- Post Approval the deployment has been started
- And now the infrastructure has been deployed with Terraform apply task completion
- You can see that 5 resources have been added which are being displayed on Azure Portal
- After completion, it is now awaiting approval for destruction.
- Once destruction is approved, the pipeline proceeds with it.
- Now you can see below destruction has started
Conclusion
And finally, after 3.37 minutes, the destruction was completed successfully. This marks the successful completion of our fully automated, end-to-end infrastructure management process, integrated seamlessly with Azure DevOps.
Throughout this journey, we’ve demonstrated how to build, publish, and apply infrastructure artifacts using Terraform within an Azure DevOps pipeline. From initializing Terraform in various environments to handling complex tasks such as artifact archiving, approval workflows, and automated cleanup, we've covered every step necessary to manage infrastructure efficiently and reliably. This process not only ensures that infrastructure changes are executed in a controlled and repeatable manner but also empowers teams to maintain agility and scalability in their cloud environments.
The ability to automate everything from provisioning to destruction, all within a unified pipeline, underscores the power of combining Terraform's infrastructure as code capabilities with the robust CI/CD features of Azure DevOps. This integration enables us to manage our cloud resources with precision, ensuring that deployments are consistent, auditable, and aligned with best practices.
Repository
Explore the code and configurations used in this setup on GitHub: Azure DevOps and Terraform Integration