<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[DevOps with Ritesh]]></title><description><![CDATA[A detail-oriented and results-driven DevOps professional with experience in streamlining software development processes with emerging DevOps & Cloud solutions]]></description><link>https://www.devopswithritesh.in</link><generator>RSS for Node</generator><lastBuildDate>Tue, 07 Apr 2026 20:50:19 GMT</lastBuildDate><atom:link href="https://www.devopswithritesh.in/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Persistent Storage in AKS using Azure Disks: Deploying MySQL with a WebApp via LoadBalancer]]></title><description><![CDATA[In this article, we’ll be focusing on utilizing Azure Disks for Kubernetes and deploying Stateful applications in an AKS cluster, along with services and their utilization.
Introduction
When it comes to deploying stateful applications like databases ...]]></description><link>https://www.devopswithritesh.in/persistent-storage-in-aks-using-azure-disks-deploying-mysql-with-a-webapp-via-loadbalancer</link><guid isPermaLink="true">https://www.devopswithritesh.in/persistent-storage-in-aks-using-azure-disks-deploying-mysql-with-a-webapp-via-loadbalancer</guid><category><![CDATA[#Azure #AKS #Kubernetes #DevOps #MySQL #CloudNative #Containers #AzureKubernetesService]]></category><category><![CDATA[Azure]]></category><category><![CDATA[aks]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[cloud native]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Tue, 18 Nov 2025 05:18:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763442884044/1afe905a-67d6-4cbc-a121-6f769310ec8f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, we’ll be focusing on utilizing Azure Disks for Kubernetes and deploying Stateful applications in an AKS cluster, along with services and their utilization.</p>
<h1 id="heading-introduction">Introduction</h1>
<p>When it comes to deploying <strong>stateful applications</strong> like databases in Kubernetes, the primary focus shifts to <strong>data persistence</strong> and ensuring consistency as pods scale up or down. This is where <strong>Persistent Volumes (PVs)</strong> come into play, allowing data to persist beyond the ephemeral lifecycle of pods.</p>
<p>In <strong>Azure Kubernetes Service (AKS)</strong>, this persistence is powered by <strong>Azure Disks</strong>, a managed storage solution that offers high durability, reliability, and availability with an SLA of over 99%. Azure Disks simplify storage management while ensuring that your critical application data remains safe and accessible.</p>
<p>In this article, we’ll explore how AKS leverages persistent volumes by deploying a <strong>MySQL database</strong> along with a <strong>web application</strong>. We’ll also dive into how <strong>internal and user-facing networking</strong> components work together to make the application fully functional within a Kubernetes environment.</p>
<h1 id="heading-storage-class-in-aks">Storage Class in AKS</h1>
<p>In Kubernetes, a Storage Class defines how persistent storage, such as a disk or volume, is provisioned dynamically for your pods. It works as a blueprint for storage that tells Kubernetes what type of storage to create when the app asks for it. A <strong>StorageClass</strong> automates the creation of <strong>PersistentVolumes (PVs)</strong> so you don’t need to manually create PVs each time an app needs storage.</p>
<p>When your app says:</p>
<blockquote>
<p>“I need some space to save my data!”</p>
</blockquote>
<p>Kubernetes looks at the <strong>StorageClass</strong> to decide:</p>
<ul>
<li><p>What kind of disk to use (fast or cheap),</p>
</li>
<li><p>Where to create it (Azure, AWS, etc.), and</p>
</li>
<li><p>How to manage it (whether to delete the volume after use, or retain it once the pod is deleted or the storage is no longer needed).</p>
</li>
</ul>
<p>So, think of it as a <strong>"blueprint" for storage</strong> that tells Kubernetes <em>what type of storage to create and how</em> when a PersistentVolumeClaim (PVC) requests it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762611712550/9fa46598-de22-4c93-9d5d-c8b7e68359d0.png" alt class="image--center mx-auto" /></p>
<p>When we create an AKS cluster, Azure automatically provisions a few default storage classes, as shown below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762612228685/0e1b9ca0-f342-4930-a7c9-37b55c71810c.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><code>default (default)</code> → Uses <strong>Azure Managed Disk</strong> (Standard or Premium)</p>
</li>
<li><p><code>managed</code>, <code>managed-premium</code> → Azure Disk with different tiers</p>
</li>
<li><p><code>azurefile</code>, <code>azurefile-premium</code> → Azure File shares for persistent network storage</p>
</li>
</ul>
<p>These are <strong>predefined by AKS</strong> to save you the effort of creating them manually.<br />So, you can start provisioning volumes immediately without writing a <code>StorageClass</code> manifest.</p>
<h2 id="heading-custom-storage-class">Custom Storage Class</h2>
<p>We only write a custom <strong>StorageClass</strong> when we need <strong>something different</strong> from the defaults.</p>
<p>For example:</p>
<ul>
<li><p>You want a <strong>different disk type</strong> (<code>StandardSSD_LRS</code> instead of <code>Premium_LRS</code>)</p>
</li>
<li><p>You need a <strong>specific reclaim policy</strong> (e.g., <code>Retain</code> instead of <code>Delete</code>)</p>
</li>
<li><p>You need <strong>custom parameters</strong> like encryption type, zone, or SKU</p>
</li>
<li><p>You want to <strong>control binding behavior</strong> (<code>Immediate</code> vs <code>WaitForFirstConsumer</code>)</p>
</li>
<li><p>You’re working in <strong>multi-cloud or hybrid</strong> setups where the default provisioner doesn’t apply</p>
</li>
</ul>
<h2 id="heading-storage-class-vs-persistent-volume-vs-persistent-volume-claim">Storage Class vs Persistent Volume vs Persistent Volume Claim</h2>
<h3 id="heading-1-storageclass-the-blueprint">1. StorageClass: The <em>Blueprint</em></h3>
<p>Think of it as a <strong>recipe</strong> or <strong>menu option</strong> for how storage should be created.<br />It defines <strong>what kind of storage</strong> (SSD/HDD), <strong>where</strong> (Azure Disk, Azure File), and <strong>rules</strong> (delete or keep after use).</p>
<p><strong>Example:</strong></p>
<blockquote>
<p>"Whenever someone asks for storage, give them a 10 GB Premium SSD disk from Azure."</p>
</blockquote>
<h3 id="heading-2-persistentvolume-pv-the-actual-disk">2. PersistentVolume (PV): The <em>Actual Disk</em></h3>
<p>This is the <strong>real storage resource</strong> created in your cluster — a piece of Azure Disk or File share.<br />You can create it manually, or Kubernetes can create it automatically using the StorageClass.</p>
<p><strong>Example:</strong></p>
<blockquote>
<p>"Here’s a 10 GB Premium SSD disk created in Azure — ready to use!"</p>
</blockquote>
<h3 id="heading-3-persistentvolumeclaim-pvc-the-request">3. PersistentVolumeClaim (PVC): The <em>Request</em></h3>
<p>This is what your app (Pod) creates when it <strong>needs storage</strong>.<br />It says:</p>
<blockquote>
<p>"I need 10 GB of storage from a Premium disk!"</p>
</blockquote>
<p>Kubernetes then looks for a matching PV or uses the StorageClass to <strong>create one</strong> automatically.</p>
<pre><code class="lang-json">PVC  →  asks for storage
        ↓
StorageClass  →  tells Kubernetes what kind of storage to create
        ↓
PV  →  actual storage created (disk or file share)
</code></pre>
<h2 id="heading-understanding-storage-class-attributes">Understanding Storage Class Attributes</h2>
<p>There are five important StorageClass attributes that are necessary to understand when creating a <strong>custom storage class</strong>.</p>
<ol>
<li><p><strong>NAME</strong></p>
<p> This is the <strong>name of the StorageClass</strong>. We use this name in <strong>PersistentVolumeClaim (PVC)</strong> to tell Kubernetes <strong>which kind of storage you want</strong>.</p>
</li>
<li><p><strong>PROVISIONER</strong></p>
<p> It defines which system or plugin is responsible for creating the storage. In our case, it will be Azure (<code>disk.csi.azure.com</code> for disks and <code>file.csi.azure.com</code> for file shares).</p>
</li>
<li><p><strong>RECLAIMPOLICY</strong></p>
<p> This tells Kubernetes <strong>what to do with the volume after the PVC is deleted</strong> (i.e., when your app no longer needs it).</p>
<p> | Policy | Meaning |
 | --- | --- |
 | <code>Delete</code> <strong>(Default in AKS)</strong> | Deletes the underlying disk or file share automatically. |
 | <code>Retain</code> | Keeps the disk even after PVC is deleted (you can reuse or inspect it). |</p>
</li>
<li><p><strong>VOLUMEBINDINGMODE</strong></p>
<p> This controls <strong>when</strong> and <strong>where</strong> the volume is created and bound. In AKS, we mostly use <code>WaitForFirstConsumer</code>.</p>
<p> | Mode | Meaning |
 | --- | --- |
 | <code>Immediate</code> | Volume is created <strong>as soon as the PVC is created</strong>, regardless of where the Pod runs. |
 | <code>WaitForFirstConsumer</code> | Volume is created <strong>only when a Pod is scheduled</strong> — ensures the disk is created in the same zone/node where the Pod will run (avoids mismatch issues). |</p>
</li>
<li><p><strong>ALLOWVOLUMEEXPANSION</strong></p>
<p> This indicates <strong>whether you can increase the size</strong> of the volume later by simply editing the PVC.</p>
<p> | Setting | Meaning |
 | --- | --- |
 | <code>true</code> (<strong>Default in AKS)</strong> | You can increase storage size (resize volume). |
 | <code>false</code> | Size is fixed — cannot be expanded. |</p>
</li>
</ol>
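<p>For instance, with <code>allowVolumeExpansion: true</code>, resizing a volume later is just a matter of editing the PVC’s requested size. A minimal sketch (assuming a PVC named <code>azure-managed-disks-pvc</code> that originally requested 5Gi):</p>
<pre><code class="lang-yaml">---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disks-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium-retain-sc
  resources:
    requests:
      storage: 10Gi   # increased from 5Gi; shrinking is not allowed
</code></pre>
<p>Applying this updated manifest asks Kubernetes to expand the underlying Azure Disk; depending on the disk type and Kubernetes version, the Pod may need to be restarted before the new size takes effect.</p>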
<h2 id="heading-writing-storage-class-manifest-for-custom-storage-class"><strong>Writing Storage Class Manifest for Custom Storage Class</strong></h2>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">storage.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">StorageClass</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">managed-premium-retain-sc</span>
<span class="hljs-attr">provisioner:</span> <span class="hljs-string">kubernetes.io/azure-disk</span>
<span class="hljs-attr">reclaimPolicy:</span> <span class="hljs-string">Retain</span>
<span class="hljs-attr">volumeBindingMode:</span> <span class="hljs-string">WaitForFirstConsumer</span>   <span class="hljs-comment"># it will wait for a pod(MySQL pod in our case) to be scheduled before binding the PV</span>
<span class="hljs-attr">allowVolumeExpansion:</span> <span class="hljs-literal">true</span>
<span class="hljs-attr">parameters:</span>
  <span class="hljs-attr">skuname:</span> <span class="hljs-string">Premium_LRS</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">managed</span>

<span class="hljs-comment"># There is no spec attribute defined in this StorageClass resource.</span>
<span class="hljs-comment"># This StorageClass uses Azure Premium SSD managed disks with a Retain reclaim policy.</span>
<span class="hljs-comment"># The volume binding mode is set to WaitForFirstConsumer to optimize scheduling.</span>
<span class="hljs-comment"># Volume expansion is enabled to allow resizing of persistent volumes.</span>
</code></pre>
<p>After applying the above StorageClass manifest, you can see that the custom StorageClass has been created:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762664721450/50ba6f34-7b81-4e38-a7f6-f31ef2e02b89.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-understanding-pvcpersistent-volume-claim">Understanding PVC(Persistent Volume Claim)</h1>
<p>A <strong>Persistent Volume Claim (PVC)</strong> is essentially a <strong>request for storage</strong> made by your application pod. As the name suggests, it acts as a <strong><em>“claim”</em></strong> to Kubernetes for a specific amount and type of storage rather than creating or managing the storage directly.</p>
<p>Once this claim is raised, <strong>Kubernetes checks the associated Storage Class</strong> to understand how the storage should be provisioned. It then <strong>creates a Persistent Volume (PV)</strong> that fulfills the claim and <strong>binds it to the PVC</strong>, allowing the pod to mount and use the storage as needed.</p>
<blockquote>
<p><code>Application Pod → Persistent Volume Claim (PVC) → Storage Class → Persistent Volume (PV) → Cloud Storage (Azure Disk)</code></p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762662632695/9c7ce77e-9675-4913-b4c5-02d5117dc2f5.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-access-modes-in-pvc">Access Modes in PVC</h2>
<p>Access Modes define how a pod can <strong>access (read and write)</strong> data on a <strong>Persistent Volume (PV)</strong>. Think of them as permissions that determine whether one Pod or multiple Pods can use the volume, and how. There are three access modes:</p>
<h3 id="heading-readwriteonce-rwo">ReadWriteOnce - RWO</h3>
<p>Only <strong>one Pod</strong> can mount the volume for <strong>read &amp; write</strong> at a time (on one node).</p>
<p>Appropriate for databases like MySQL, MongoDB, and PostgreSQL.</p>
<p><strong><em>Azure Disk</em></strong> is suitable for <strong>ReadWriteOnce</strong> since <strong><em>each disk can be mounted by one node at a time.</em></strong></p>
<h3 id="heading-readonlymany-rox">ReadOnlyMany - ROX</h3>
<p>Volume can be <strong>read by many Pods</strong>, but <strong>none of them can write</strong>.</p>
<p>Appropriate for Shared configs, static data, and logs.</p>
<h3 id="heading-readwritemany-rwx">ReadWriteMany - RWX</h3>
<p>Volume can be <strong>read and written by many Pods at once</strong> (even across nodes).</p>
<p>Appropriate for shared file storage, web apps needing shared uploads.</p>
<p><strong><em>Azure File</em></strong> is suitable for <strong>ReadWriteMany</strong> since <em>a File share can be mounted by multiple Pods/nodes.</em> So, if you need multiple Pods to access the same data, <strong>use Azure File</strong>, <strong>not Azure Disk.</strong></p>
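<p>As a sketch, a PVC requesting shared storage with <code>ReadWriteMany</code> could use the built-in <code>azurefile</code> StorageClass (the claim name and size here are illustrative):</p>
<pre><code class="lang-yaml">---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-uploads-pvc    # illustrative name
spec:
  accessModes:
    - ReadWriteMany           # many Pods, across nodes
  storageClassName: azurefile # built-in Azure File class in AKS
  resources:
    requests:
      storage: 5Gi
</code></pre>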
<h3 id="heading-summary">Summary</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Scenario</td><td>Recommended Mode</td></tr>
</thead>
<tbody>
<tr>
<td>Single database Pod that writes data</td><td><code>ReadWriteOnce</code></td></tr>
<tr>
<td>Multiple Pods reading shared data</td><td><code>ReadOnlyMany</code></td></tr>
<tr>
<td>Multiple Pods needing shared read-write storage (like WordPress media uploads)</td><td><code>ReadWriteMany</code></td></tr>
</tbody>
</table>
</div><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762664184917/0afe41fd-1024-4984-9e2a-7981484d4700.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-writing-pvcpersistent-volume-claim-manifest">Writing PVC(Persistent Volume Claim) Manifest</h2>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">PersistentVolumeClaim</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">azure-managed-disks-pvc</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">accessModes:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">ReadWriteOnce</span>
  <span class="hljs-attr">storageClassName:</span> <span class="hljs-string">managed-premium-retain-sc</span>
  <span class="hljs-attr">resources:</span>
    <span class="hljs-attr">requests:</span>
      <span class="hljs-attr">storage:</span> <span class="hljs-string">5Gi</span> <span class="hljs-comment"># Requesting 5 GiB of storage</span>
<span class="hljs-comment"># This PersistentVolumeClaim requests a 5 GiB volume using the 'managed-premium-retain-sc' StorageClass.</span>
</code></pre>
<p>After applying the above PVC manifest, the PVC has been created as shown below; however, it is in the <strong>Pending</strong> state because, in the StorageClass manifest, <code>volumeBindingMode</code> is set to <code>WaitForFirstConsumer</code>. In this mode, the actual volume provisioning is <strong>delayed</strong> <strong><em>until a pod that uses this PVC is scheduled</em></strong>, ensuring that the <strong><em>storage is created in the same zone as the consuming pod</em></strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762665167357/68e66a49-f525-4fe0-a9fb-852694dabb6f.png" alt class="image--center mx-auto" /></p>
<p>Had <code>volumeBindingMode</code> been set to <strong>Immediate</strong>, the requested <strong>5 GiB of storage</strong> would have been <strong>provisioned right away</strong>, regardless of whether any pod was scheduled.</p>
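<p>To illustrate the contrast, an <strong>Immediate</strong>-binding variant of our custom StorageClass would differ in just one field (a sketch; the name is illustrative):</p>
<pre><code class="lang-yaml">---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-immediate-sc  # illustrative name
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
volumeBindingMode: Immediate   # disk is provisioned as soon as the PVC is created
allowVolumeExpansion: true
parameters:
  skuname: Premium_LRS
  kind: managed
</code></pre>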
<h1 id="heading-understanding-configmap-and-its-usecase">Understanding ConfigMap and its Usecase</h1>
<p>A ConfigMap is a Kubernetes object that lets you store non-secret configuration data like environment variables, config files, or scripts separately from your application code. In short, it allows you to externalize your non-sensitive data so your containers remain clean and reusable.</p>
<p>In the context of deploying a stateful application, i.e., MySQL, in our project, we are going to <strong>store the</strong> <code>.sql</code> <strong>script inside a ConfigMap</strong>, which will be used to create a basic schema when MySQL is deployed in a pod.</p>
<h2 id="heading-writing-configmap-manifest">Writing ConfigMap Manifest</h2>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ConfigMap</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">usermanagement-dbcreation-script</span>
<span class="hljs-attr">data:</span>
  <span class="hljs-attr">mysql_usermgmt_db_init_script.sql:</span> <span class="hljs-string">|
    DROP DATABASE IF EXISTS webappdb;
    CREATE DATABASE webappdb;</span>
</code></pre>
<p>This ConfigMap stores a <strong>SQL initialization script</strong> that will be used by your <strong>MySQL Pod</strong> to create the database <code>webappdb</code>.</p>
<p>It first <strong>drops</strong> any existing database with that name (to start clean) and then <strong>creates</strong> it again.</p>
<h2 id="heading-usage-of-configmap">Usage of ConfigMap</h2>
<p>ConfigMaps can be used in <strong>two</strong> main ways:</p>
<ol>
<li><p><strong>As an Environment Variable</strong></p>
<p> Inject the data directly into the container environment.</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">envFrom:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">configMapRef:</span>
       <span class="hljs-attr">name:</span> <span class="hljs-string">usermanagement-dbcreation-script</span>
</code></pre>
</li>
<li><p><strong>As a Mounted Volume</strong></p>
<p> Mount the ConfigMap as a file or directory inside the container.</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">volumeMounts:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">config-volume</span>
     <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/docker-entrypoint-initdb.d</span>
 <span class="hljs-attr">volumes:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">config-volume</span>
     <span class="hljs-attr">configMap:</span>
       <span class="hljs-attr">name:</span> <span class="hljs-string">usermanagement-dbcreation-script</span>
</code></pre>
</li>
</ol>
<h2 id="heading-overall-execution-flow">Overall Execution Flow</h2>
<ol>
<li><p>You created a <strong>PVC</strong> → requests storage for MySQL data.</p>
</li>
<li><p>You create a <strong>ConfigMap</strong> → holds your SQL schema file.</p>
</li>
<li><p>You’ll create a <strong>MySQL Deployment</strong> →</p>
<ul>
<li><p>mounts the <strong>PVC</strong> for persistent data</p>
</li>
<li><p>mounts the <strong>ConfigMap</strong> for initial schema setup</p>
</li>
</ul>
</li>
</ol>
<p>When the Pod starts, MySQL runs the <code>.sql</code> file and creates your database automatically</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762913451517/38fd198e-2efb-4d0f-998d-90f0606fd0c7.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-understanding-mysql-deployment">Understanding MySQL Deployment</h1>
<p>A Deployment is a top-level Kubernetes controller that manages <strong>Pods</strong>: it instructs Kubernetes <strong>on what to run, the number of replicas</strong>, and <strong>how to maintain their health</strong>.</p>
<p>A Deployment ensures your app (Pod) is always running the desired number of instances, with the correct configuration.</p>
<h2 id="heading-deployment-in-the-context-of-mysqla-stateful-application">Deployment in the context of MySQL(<em>a Stateful application</em>)</h2>
<p>For a <strong>MySQL database</strong>, the Deployment does these things:</p>
<ol>
<li><p><strong>Runs a MySQL container</strong></p>
</li>
<li><p><strong>Mounts storage (PVC)</strong> to persist data</p>
</li>
<li><p><strong>Mounts ConfigMap</strong> (your <code>.sql</code> file) to initialize schema</p>
</li>
<li><p><strong>Defines environment variables</strong> like <code>MYSQL_ROOT_PASSWORD</code>, <code>MYSQL_DATABASE</code></p>
</li>
<li><p>Ensures MySQL restarts automatically if the Pod crashes</p>
</li>
</ol>
<h2 id="heading-writing-deployment-manifest-for-mysql">Writing Deployment Manifest for MySQL</h2>
<p>When this Deployment runs, Kubernetes will:</p>
<ol>
<li><p>Create a Pod with MySQL 5.6</p>
</li>
<li><p>Mount your Azure-managed disk (PVC)</p>
</li>
<li><p>Run the <code>.sql</code> script from the ConfigMap to create <code>webappdb</code></p>
</li>
<li><p>Keep the Pod alive and recreate it if it crashes, while preserving data</p>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-deployment</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">mysql</span>
  <span class="hljs-attr">strategy:</span>       <span class="hljs-comment"># strategy for deployment, can be RollingUpdate or Recreate. Since it is a database, we use Recreate to avoid conflicts.</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">Recreate</span>
  <span class="hljs-attr">template:</span>     <span class="hljs-comment"># pod template</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">mysql</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-container</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">mysql:5.6</span>
          <span class="hljs-attr">env:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYSQL_ROOT_PASSWORD</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">dbpass123!</span>
          <span class="hljs-attr">ports:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">3306</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">mysql</span>
          <span class="hljs-attr">volumeMounts:</span>       <span class="hljs-comment"># Mounting volumes to the container</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-persistent-storage</span>
              <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/var/lib/mysql</span>
              <span class="hljs-attr">subPath:</span> <span class="hljs-string">mysql-data</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">usermanagement-dbcreation-script</span>
              <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/docker-entrypoint-initdb.d</span>  <span class="hljs-comment"># Mounting the ConfigMap to initialize the database. MySQL will execute any .sql files in this directory on startup.</span>

      <span class="hljs-attr">volumes:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-persistent-storage</span>      <span class="hljs-comment"># Mounting the PersistentVolumeClaim as a volume</span>
          <span class="hljs-attr">persistentVolumeClaim:</span>
            <span class="hljs-attr">claimName:</span> <span class="hljs-string">azure-managed-disks-pvc</span>

        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">usermanagement-dbcreation-script</span>        <span class="hljs-comment"># Mounting the ConfigMap as a volume</span>
          <span class="hljs-attr">configMap:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">usermanagement-dbcreation-script</span>
</code></pre>
<p>The diagram below depicts a holistic view of how everything is put together in the Deployment:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763000150440/cbda1972-c7a6-496f-8095-40e1ac1e3749.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-understanding-the-role-of-clusterip-amp-headless-service">Understanding the Role of ClusterIP &amp; Headless Service</h1>
<p>A <strong>Service</strong> in Kubernetes is a resource that is responsible for exposing pods to the network, both internally and externally.</p>
<p>Exposing a pod’s network internally means allowing the pod to communicate with other resources inside the cluster, whereas exposing it externally means making the pod accessible from the public internet.</p>
<p>Since pods are ephemeral (they can restart, move, or change IPs), a Service provides a stable endpoint (DNS name or IP) for accessing them.</p>
<blockquote>
<p>Think of it as a <strong>constant address</strong> that points to your Pod, even if the Pod itself keeps changing.</p>
</blockquote>
<p>A <strong>ClusterIP</strong> is the <strong>default Service type</strong> in Kubernetes. It exposes your application <strong>inside the cluster</strong> using a <strong>stable internal IP address</strong>. Whenever Pods are recreated (and their IPs change), the <strong>ClusterIP remains the same</strong>, allowing other Pods or services to connect to it reliably.</p>
<p>If your web application runs multiple Pods, a ClusterIP service ensures <strong>load-balanced internal access</strong>; any backend Pod can talk to it consistently, even if Pods restart or change IPs.</p>
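<p>For reference, a ClusterIP Service for such a web application might look like the following sketch (the name, label selector, and ports are illustrative):</p>
<pre><code class="lang-yaml">---
apiVersion: v1
kind: Service
metadata:
  name: webapp-clusterip-svc   # illustrative name
spec:
  type: ClusterIP              # default type; can be omitted
  selector:
    app: usermgmt-webapp       # illustrative Pod label
  ports:
    - port: 80                 # stable port inside the cluster
      targetPort: 8080         # illustrative container port
</code></pre>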
<h2 id="heading-headless-service-amp-its-importance-for-stateful-applicationmysql">Headless Service &amp; Its Importance for Stateful Application(MySQL)</h2>
<p>A <strong>Headless Service</strong> is a special type of Service that <strong>does not have its own ClusterIP</strong>, which means Kubernetes <strong>does not assign</strong> a virtual IP to the service; instead, <strong>DNS resolves directly</strong> to the <strong>Pod’s IP</strong>. So the application (like your web app) connects <strong>directly to the Pod</strong>, not through a proxy or virtual IP.</p>
<p>Headless service is required for MySQL because:</p>
<ul>
<li><p>We have <strong>only one MySQL Pod</strong>.</p>
</li>
<li><p>We want your web application to connect <strong>directly to that Pod’s IP</strong>.</p>
</li>
<li><p>It ensures <strong>lower latency</strong> and avoids <strong>load-balancing overhead</strong> (unnecessary for a single database instance).</p>
</li>
<li><p>Even if you had multiple database replicas (like in a StatefulSet), a headless service would also help clients resolve to <strong>individual Pod IPs</strong> (e.g., <code>mysql-0</code>, <code>mysql-1</code>).</p>
</li>
</ul>
<h2 id="heading-writing-service-manifest-for-mysql">Writing Service Manifest for MySQL</h2>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-svc</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">mysql</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">3306</span>
  <span class="hljs-attr">clusterIP:</span> <span class="hljs-string">None</span>     <span class="hljs-comment"># Pod IP is used instead of assigning a ClusterIP (this is called a headless service)</span>
</code></pre>
<p>This manifest creates a <strong>Headless Service</strong> named <code>mysql-svc</code> for your MySQL Pod.<br />Instead of assigning a virtual ClusterIP, it allows other Pods (like your web application) to connect <strong>directly to the Pod’s IP</strong>.</p>
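<p>Other Pods can then reach MySQL using the Service’s DNS name (<code>mysql-svc</code>). For example, a web application container might receive it through environment variables like this sketch (the variable names depend on the application):</p>
<pre><code class="lang-yaml">env:
  - name: DB_HOSTNAME   # illustrative variable name
    value: mysql-svc    # resolves to the MySQL Pod's IP via the headless service
  - name: DB_PORT
    value: "3306"
</code></pre>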
<h2 id="heading-summary-1">Summary</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Term</td><td>Meaning</td></tr>
</thead>
<tbody>
<tr>
<td><strong>ClusterIP Service</strong></td><td>Has a stable virtual IP (used for load-balanced access)</td></tr>
<tr>
<td><strong>Headless Service (</strong><code>clusterIP: None</code>)</td><td>No virtual IP — connects directly to Pod(s)</td></tr>
<tr>
<td><strong>Why Headless for MySQL?</strong></td><td>Because there’s only one Pod, and we want direct, stable access to its IP</td></tr>
</tbody>
</table>
</div><h1 id="heading-output">Output</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763133569693/c5738cb5-dd90-4872-854b-8f5a888aa9f6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763133972063/d3355bde-41f3-44ef-8141-8e8ae41c5091.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763135890256/a9906514-9078-4460-8127-6755fb4faa5d.png" alt class="image--center mx-auto" /></p>
<p>This confirms that the <strong>StorageClass → PVC → PV → Pod</strong> chain is working flawlessly.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Component</td><td>Status</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><strong>StorageClass</strong></td><td>✅ Created</td><td>Defines disk type and policy (<code>Retain</code>)</td></tr>
<tr>
<td><strong>PVC</strong></td><td>✅ Bound</td><td>Requested and got a 5Gi Azure Disk</td></tr>
<tr>
<td><strong>PV</strong></td><td>✅ Bound</td><td>Actual 5Gi disk created in Azure</td></tr>
<tr>
<td><strong>Deployment (Pod)</strong></td><td>✅ Running</td><td>MySQL Pod running and attached to PVC</td></tr>
<tr>
<td><strong>Service</strong></td><td>✅ Active</td><td>Headless service exposes MySQL directly via Pod IP</td></tr>
</tbody>
</table>
</div><h1 id="heading-database-setup-conclusion">Database Setup Conclusion</h1>
<p>With all the manifests applied, our MySQL database is now fully deployed and functional inside AKS. The <strong>StorageClass</strong> dynamically provisioned an Azure-managed disk, which our <strong>PersistentVolumeClaim (PVC)</strong> successfully bound to ensure data persistence. The <strong>ConfigMap</strong> initialized the database automatically with our predefined schema, while the <strong>Deployment</strong> maintained the MySQL Pod lifecycle with persistent storage attached. We then exposed the Pod using a <strong>Headless Service</strong>, allowing other Pods (like our web application) to connect directly using the Pod IP instead of a ClusterIP. Finally, by accessing the MySQL Pod through a client, we confirmed that the <code>webappdb</code> schema was created successfully, validating that our entire configuration chain, from storage provisioning to application-level database initialization, works seamlessly within the AKS environment.</p>
<h1 id="heading-deploying-webapp">Deploying WebApp</h1>
<p>Deploy a <strong>User Management Web App</strong> that connects to your existing <strong>MySQL database (</strong><code>webappdb</code>).<br />This app will act as the frontend interface and provide APIs for:</p>
<ul>
<li><p>Creating users</p>
</li>
<li><p>Listing users</p>
</li>
<li><p>Deleting users</p>
</li>
</ul>
<h2 id="heading-key-kubernetes-concepts">Key Kubernetes Concepts</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Kubernetes Concept</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Deployment</strong></td><td>Runs the User Management Web App Pod(s)</td></tr>
<tr>
<td><strong>Environment Variables</strong></td><td>Provides database connection details (DB host, DB name, DB user, password)</td></tr>
<tr>
<td><strong>Init Containers</strong></td><td>Optional — can be used to verify MySQL readiness before app starts</td></tr>
<tr>
<td><strong>Service (LoadBalancer)</strong></td><td>Exposes the web app externally via an Azure Load Balancer</td></tr>
</tbody>
</table>
</div><h2 id="heading-understanding-the-deployment-manifest-for-webapp">Understanding the Deployment Manifest for WebApp</h2>
<p>This Deployment creates and manages the <strong>User Management Web Application</strong>, which serves as the frontend for interacting with the MySQL database hosted within the AKS cluster. It exposes APIs to perform operations such as creating, listing, and deleting users from the <code>webappdb</code> schema in MySQL.</p>
<pre><code class="lang-bash">WebApp (LoadBalancer Service) → WebApp Pod → MySQL (Headless Service) → Azure Disk (Persistent Storage)
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763438730313/888659a3-16d5-454c-83c1-55fade170c8f.png" alt class="image--center mx-auto" /></p>
<p>At a high level, this Deployment performs three main functions: <strong>database readiness check</strong>, <strong>application startup</strong>, and <strong>environment configuration</strong>.</p>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">usermgmt-webapp-deployment</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">usermgmt-webapp</span>

<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">usermgmt-webapp</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">usermgmt-webapp</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">initContainers:</span>       <span class="hljs-comment"># this init container ensures that database is up and running before this webapp pod is deployed</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">init-db</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">busybox:1.31</span>
          <span class="hljs-attr">command:</span> [<span class="hljs-string">'sh'</span>, <span class="hljs-string">'-c'</span>, <span class="hljs-string">'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e "  &gt;&gt; MySQL DB Server has started";'</span>]

      <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">usermgmt-webapp-container</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB</span>
          <span class="hljs-attr">imagePullPolicy:</span> <span class="hljs-string">Always</span>
          <span class="hljs-attr">ports:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">8080</span>
          <span class="hljs-attr">env:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DB_HOSTNAME</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"mysql"</span>      <span class="hljs-comment"># Must match the MySQL Service name (metadata.name) so in-cluster DNS resolves it</span>

            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DB_PORT</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"3306"</span>

            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DB_NAME</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"webappdb"</span>

            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DB_USERNAME</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"root"</span>

            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DB_PASSWORD</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"dbpass123!"</span>    <span class="hljs-comment"># For production, inject this from a Kubernetes Secret instead of plain text</span>
</code></pre>
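<p>To make the environment-variable wiring concrete, here is a small illustrative Python sketch (not code from the actual web app image) that reads the same variables the manifest injects and assembles a database connection URL. The DSN format shown is an assumption for demonstration only:</p>

```python
import os

# Simulate the container environment defined in the Deployment manifest.
# In the cluster, Kubernetes injects these; here we set them by hand.
os.environ.update({
    "DB_HOSTNAME": "mysql",
    "DB_PORT": "3306",
    "DB_NAME": "webappdb",
    "DB_USERNAME": "root",
})

# How an application might assemble its connection URL from those variables.
# The URL scheme/format is illustrative (SQLAlchemy-style DSN), not the
# format the stacksimplify image necessarily uses.
db_url = "mysql://{user}@{host}:{port}/{name}".format(
    user=os.environ["DB_USERNAME"],
    host=os.environ["DB_HOSTNAME"],
    port=os.environ["DB_PORT"],
    name=os.environ["DB_NAME"],
)
print(db_url)  # mysql://root@mysql:3306/webappdb
```

<p>Because the hostname comes from the Service name, the same image can be pointed at a different database simply by changing the environment variables, with no code change.</p>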
<h3 id="heading-understanding-init-container">Understanding Init Container</h3>
<p>Before the main web application container starts, an <strong>Init Container</strong> named <code>init-db</code> runs a simple shell command that continuously checks if the MySQL service is reachable on port <code>3306</code>.<br />It uses a lightweight <strong>BusyBox</strong> image with the <code>nc</code> (netcat) utility to poll the MySQL service.</p>
<p>This ensures that:</p>
<ul>
<li><p>The <strong>web app Pod</strong> will only start <strong>after the MySQL Pod is fully up and accepting connections</strong>.</p>
</li>
<li><p>Application startup failures due to database unavailability are prevented.</p>
</li>
</ul>
<p>Below is the extracted init container part from the main manifest written above:</p>
<pre><code class="lang-yaml">      <span class="hljs-attr">initContainers:</span>       <span class="hljs-comment"># this init container ensures that database is up and running before this webapp pod is deployed</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">init-db</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">busybox:1.31</span>
          <span class="hljs-attr">command:</span> [<span class="hljs-string">'sh'</span>, <span class="hljs-string">'-c'</span>, <span class="hljs-string">'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e "  &gt;&gt; MySQL DB Server has started";'</span>]
</code></pre>
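<p>The same wait-for-port idea can be sketched in Python with the standard <code>socket</code> module. This is an illustrative stand-in for the <code>nc -z</code> loop, not code used in the cluster; the throwaway local listener below simulates the database endpoint so the sketch is runnable anywhere:</p>

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Poll until a TCP connect succeeds -- the Python equivalent of the
    `while ! nc -z mysql 3306; do sleep 1; done` loop in the init container."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True  # port is accepting connections
        except OSError:
            time.sleep(interval)  # not ready yet; back off and retry
    return False

# Demo against a throwaway local listener standing in for MySQL.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: the OS assigns a free port
server.listen(1)
host, port = server.getsockname()
ready = wait_for_port(host, port, timeout=5)
server.close()
print(ready)  # True
```

<p>The key property in both versions is the same: the check succeeds only when a TCP connection can actually be established, not merely when the Pod object exists.</p>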
<h3 id="heading-understanding-environment-variables">Understanding Environment Variables</h3>
<p>The environment variables section passes database connection details directly into the container:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Variable</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><strong>DB_HOSTNAME</strong></td><td>Hostname of the MySQL service (should match the Service name)</td></tr>
<tr>
<td><strong>DB_PORT</strong></td><td>Database port (<code>3306</code>)</td></tr>
<tr>
<td><strong>DB_NAME</strong></td><td>Schema name (<code>webappdb</code>)</td></tr>
<tr>
<td><strong>DB_USERNAME</strong></td><td>Database username (<code>root</code>)</td></tr>
<tr>
<td><strong>DB_PASSWORD</strong></td><td>Database password</td></tr>
</tbody>
</table>
</div><h1 id="heading-loadbalancer-service-for-webapp">LoadBalancer Service for WebApp</h1>
<p>This Kubernetes Service exposes the <strong>User Management Web Application</strong> externally so users can access it through a public IP assigned by Azure. Since the application serves as a frontend/API layer, it needs to be reachable from outside of the cluster, and for that, AKS provides the <strong>LoadBalancer Service type</strong>.</p>
<p>At a high level, this Service performs two core functions:<br /><strong>(1) Exposes the web application to the internet</strong>, and<br /><strong>(2) Routes traffic to the correct Pod inside the cluster</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763439633862/c487aa53-9b53-4da2-829c-e18baef5b5e4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-writing-the-loadbalancer-service-manifest">Writing the LoadBalancer Service Manifest</h2>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">usermgmt-lb-service</span>    <span class="hljs-comment"># Service names must be lowercase RFC 1123 labels</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">usermgmt-webapp</span>

<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">LoadBalancer</span>
  <span class="hljs-attr">selector:</span>     <span class="hljs-comment"># selects the Pods that should receive the traffic</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">usermgmt-webapp</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">8080</span>
</code></pre>
<h2 id="heading-output-1">Output</h2>
<p>Once all manifests were applied, both the <strong>MySQL Pod</strong> and the <strong>User Management WebApp Pod</strong> were successfully created and transitioned into the <em>Running</em> state. The <strong>LoadBalancer Service</strong> also provisioned an external public IP, allowing us to access the application directly from the browser.</p>
<p>Below is the command-line output showing:</p>
<ul>
<li><p>The <strong>MySQL Pod</strong> is up and running</p>
</li>
<li><p>The <strong>WebApp Pod</strong> is in Running/Ready state</p>
</li>
<li><p>The <strong>LoadBalancer Service</strong> with an external IP assigned</p>
</li>
</ul>
<p>This confirms that all Kubernetes components have been deployed correctly and the application is fully functional.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763439814415/ce75285e-5899-453b-84e1-e8b8924c272f.png" alt class="image--center mx-auto" /></p>
<p>Following that is the screenshot of the <strong>User Management Web Application</strong>, accessible using the LoadBalancer’s external IP. The UI loading successfully in the browser validates that the frontend is reachable and actively communicating with the backend MySQL database.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763439839150/67088e3b-09ef-4825-a2f6-1ba1fdea07d4.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763442254947/5600d02c-2234-4fe3-8996-bc7feaa7b7c4.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763442276467/df4aeecb-247c-4860-9081-ea841597d52d.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-conclusion-amp-architecture">Conclusion &amp; Architecture</h1>
<p>In this section, we successfully deployed a complete <strong>User Management Web Application</strong> on <strong>Azure Kubernetes Service (AKS)</strong> backed by a fully functional <strong>MySQL database</strong>. Through a combination of Kubernetes core concepts, including <strong>StorageClass</strong>, <strong>PersistentVolumeClaim</strong>, <strong>ConfigMap</strong>, <strong>Deployment</strong>, and <strong>Services,</strong> we built a production-ready setup capable of running a stateful backend and a stateless frontend together.</p>
<p>The MySQL Pod is provisioned with durable Azure Disk storage, initialized automatically using a ConfigMap-based schema, and exposed internally through a Headless Service for direct Pod-to-Pod communication. The WebApp Pod uses an init container to ensure the database is available before startup, and is exposed externally using an Azure LoadBalancer, enabling seamless user access from outside the cluster.</p>
<p>The command-line outputs confirm the successful creation of all Pods and Services, while the working web interface accessed through the LoadBalancer’s external IP validates both the deployment and the backend integration.</p>
<pre><code class="lang-yaml">                         <span class="hljs-string">┌────────────────────────┐</span>
                         <span class="hljs-string">│</span>   <span class="hljs-string">Azure</span> <span class="hljs-string">Load</span> <span class="hljs-string">Balancer</span>  <span class="hljs-string">│</span>
                         <span class="hljs-string">│</span>  <span class="hljs-string">(Public</span> <span class="hljs-string">External</span> <span class="hljs-string">IP)</span>  <span class="hljs-string">│</span>
                         <span class="hljs-string">└─────────────┬──────────┘</span>
                                       <span class="hljs-string">│</span>
                                       <span class="hljs-string">│</span>
                           <span class="hljs-string">Exposes</span> <span class="hljs-string">WebApp</span> <span class="hljs-string">on</span> <span class="hljs-string">Port</span> <span class="hljs-number">80</span>
                                       <span class="hljs-string">│</span>
                                       <span class="hljs-string">▼</span>
                          <span class="hljs-string">┌──────────────────────────┐</span>
                          <span class="hljs-string">│</span>     <span class="hljs-string">WebApp</span> <span class="hljs-string">Deployment</span>     <span class="hljs-string">│</span>
                          <span class="hljs-string">│</span>   <span class="hljs-string">(UserMgmt</span> <span class="hljs-string">WebApp</span> <span class="hljs-string">Pod)</span>   <span class="hljs-string">│</span>
                          <span class="hljs-string">│</span>  <span class="hljs-attr">Container Port :</span> <span class="hljs-number">8080</span>    <span class="hljs-string">│</span>
                          <span class="hljs-string">└───────┬─────────┬────────┘</span>
                                  <span class="hljs-string">│</span>         <span class="hljs-string">│</span>
                                  <span class="hljs-string">│</span>         <span class="hljs-string">│</span>
                                  <span class="hljs-string">│</span>     <span class="hljs-string">Reads</span> <span class="hljs-string">DB</span> <span class="hljs-string">Config</span>
                                  <span class="hljs-string">│</span>         <span class="hljs-string">│</span>
                                  <span class="hljs-string">▼</span>         <span class="hljs-string">▼</span>
                       <span class="hljs-string">┌──────────────────────────┐</span>
                       <span class="hljs-string">│</span>        <span class="hljs-string">ConfigMap</span>          <span class="hljs-string">│</span>
                       <span class="hljs-string">│</span>  <span class="hljs-string">(DB</span> <span class="hljs-string">env</span> <span class="hljs-string">variables</span> <span class="hljs-string">etc.)</span>  <span class="hljs-string">│</span>
                       <span class="hljs-string">└───────────────────────────┘</span>
                                  <span class="hljs-string">│</span>
                                  <span class="hljs-string">│</span> <span class="hljs-string">Headless</span> <span class="hljs-string">Service</span> <span class="hljs-string">(clusterIP=None)</span>
                                  <span class="hljs-string">▼</span>
                         <span class="hljs-string">┌────────────────────────┐</span>
                         <span class="hljs-string">│</span>      <span class="hljs-string">MySQL</span> <span class="hljs-string">Pod</span>         <span class="hljs-string">│</span>
                         <span class="hljs-string">│</span>  <span class="hljs-attr">Port :</span> <span class="hljs-number">3306</span>           <span class="hljs-string">│</span>
                         <span class="hljs-string">│</span>  <span class="hljs-attr">Schema :</span> <span class="hljs-string">webappdb</span>     <span class="hljs-string">│</span>
                         <span class="hljs-string">└───────────┬────────────┘</span>
                                     <span class="hljs-string">│</span>
                                     <span class="hljs-string">│</span> <span class="hljs-string">PVC</span> <span class="hljs-string">Bind</span>
                                     <span class="hljs-string">▼</span>
                         <span class="hljs-string">┌────────────────────────┐</span>
                         <span class="hljs-string">│</span> <span class="hljs-string">Persistent</span> <span class="hljs-string">Volume</span> <span class="hljs-string">(PV)</span> <span class="hljs-string">│</span>
                         <span class="hljs-string">│</span> <span class="hljs-string">Azure</span> <span class="hljs-string">Managed</span> <span class="hljs-string">Disk</span>     <span class="hljs-string">│</span>
                         <span class="hljs-string">└────────────────────────┘</span>
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Implementing Azure AI Vision with Python: OCR, Object Detection, Tagging & Captioning]]></title><description><![CDATA[Introduction
Computer Vision is a branch of Artificial Intelligence that allows machines to see, analyze, and understand images and videos. Azure AI Vision provides pre-built models through simple APIs, enabling developers to integrate features like ...]]></description><link>https://www.devopswithritesh.in/implementing-azure-ai-vision-with-python-ocr-object-detection-tagging-and-captioning</link><guid isPermaLink="true">https://www.devopswithritesh.in/implementing-azure-ai-vision-with-python-ocr-object-detection-tagging-and-captioning</guid><category><![CDATA[azure ai services]]></category><category><![CDATA[Azure]]></category><category><![CDATA[openai]]></category><category><![CDATA[LLM's ]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Tue, 26 Aug 2025 02:41:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756175988971/df3d8f82-a095-4a97-9db4-5922db308e7d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>Computer Vision is a branch of Artificial Intelligence that allows machines to <strong>see, analyze, and understand</strong> images and videos. Azure AI Vision provides pre-built models through simple APIs, enabling developers to integrate features like object detection, OCR, tagging, and caption generation into their applications.</p>
<p>In this blog, we’ll walk through how to implement Azure AI Vision using <strong>Python</strong>, showcasing its major features with practical code examples.</p>
<h1 id="heading-azure-ai-vision-amp-use-cases">Azure AI Vision &amp; Use Cases</h1>
<p>Instead of spending time training deep learning models from scratch, Azure AI Vision provides ready-to-use models that can:</p>
<ul>
<li><p>Detect objects in images</p>
</li>
<li><p>Extract text using <strong>OCR</strong></p>
</li>
<li><p>Automatically generate tags</p>
</li>
<li><p>Create human-like captions describing images</p>
</li>
</ul>
<p>This makes it extremely useful for building real-world AI solutions quickly and at scale.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756175452965/a7951067-716b-4783-8ba6-eae8e7331280.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-how-it-helps-llms">How It Helps LLMs</h2>
<p>When combined with <strong>Large Language Models (LLMs)</strong>, these visual capabilities make AI systems <strong>multimodal</strong>:</p>
<ul>
<li><p>LLMs can reason not just on text, but also on what’s inside an image.</p>
</li>
<li><p>Example: Azure Vision extracts text from an invoice (OCR) → LLM interprets and summarizes it → business system updates automatically.</p>
</li>
</ul>
<h2 id="heading-real-world-use-case">Real-World Use Case</h2>
<p>Imagine a <strong>retail company</strong> managing thousands of product images:</p>
<ul>
<li><p>Azure Vision <strong>tags and captions</strong> products for search and cataloging.</p>
</li>
<li><p>OCR extracts text from product labels.</p>
</li>
<li><p>LLMs use this extracted data to auto-generate product descriptions for e-commerce.</p>
</li>
</ul>
<p>This saves time, improves accuracy, and makes systems more intelligent.</p>
<h1 id="heading-python-implementation">Python Implementation</h1>
<h2 id="heading-1-analyzing-an-image-with-azure-ai-vision-in-python-image-tagging">1- Analyzing an Image with Azure AI Vision in Python (Image Tagging)</h2>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> azure.ai.vision.imageanalysis <span class="hljs-keyword">import</span> ImageAnalysisClient
<span class="hljs-keyword">from</span> azure.ai.vision.imageanalysis.models <span class="hljs-keyword">import</span> VisualFeatures
<span class="hljs-keyword">from</span> azure.core.credentials <span class="hljs-keyword">import</span> AzureKeyCredential
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">from</span> dotenv <span class="hljs-keyword">import</span> load_dotenv

load_dotenv(<span class="hljs-string">"env.env"</span>)

endpoint=<span class="hljs-string">"https://az-computer-vision1.cognitiveservices.azure.com/"</span>
key=os.getenv(<span class="hljs-string">"AI_VISION_KEY"</span>)

client = ImageAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
<span class="hljs-keyword">with</span> open(<span class="hljs-string">"ai_vision_test.jpg"</span>, <span class="hljs-string">"rb"</span>) <span class="hljs-keyword">as</span> image:
    image_details=image.read()

response=client.analyze(
    image_data=image_details,
    visual_features=[VisualFeatures.TAGS, VisualFeatures.CAPTION]

)

print(json.dumps(response.as_dict(), indent=<span class="hljs-number">4</span>))
</code></pre>
<h3 id="heading-code-walkthrough">Code Walkthrough</h3>
<ol>
<li><p><strong>Importing Required Libraries</strong></p>
<ul>
<li><p><code>azure.ai.vision.imageanalysis</code> provides the <code>ImageAnalysisClient</code> to interact with the Azure AI Vision service.</p>
</li>
<li><p><code>VisualFeatures</code> defines which features (Tags, Captions, OCR, etc.) we want to extract.</p>
</li>
<li><p><code>AzureKeyCredential</code> securely passes our API key to the client.</p>
</li>
<li><p><code>dotenv</code> is used to safely load credentials stored in an <code>.env</code> file.</p>
</li>
</ul>
</li>
<li><p><strong>Setting Up Authentication</strong></p>
<ul>
<li><p>The <strong>endpoint</strong> is the URL of your Azure AI Vision resource.</p>
</li>
<li><p>The <strong>key</strong> is stored in an environment file (<code>env.env</code>) for security, instead of hardcoding it into the script.</p>
</li>
</ul>
</li>
<li><p><strong>Creating the Client</strong></p>
<ul>
<li>The <code>ImageAnalysisClient</code> connects to Azure’s AI Vision service using the endpoint and API key.</li>
</ul>
</li>
<li><p><strong>Reading the Image</strong></p>
<ul>
<li>The image (<code>ai_vision_test.jpg</code>) is read in <strong>binary format</strong> because the API expects raw bytes of the image.</li>
</ul>
</li>
<li><p><strong>Analyzing the Image</strong></p>
<ul>
<li><p>We call the <code>analyze()</code> method and pass two visual features:</p>
<ul>
<li><p><code>VisualFeatures.TAGS</code> → Returns a list of keywords that describe the image.</p>
</li>
<li><p><code>VisualFeatures.CAPTION</code> → Generates a human-readable caption summarizing the image.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Printing Results</strong></p>
<ul>
<li><p>The response is converted into a dictionary and printed in a nicely formatted <strong>JSON output</strong> using <code>json.dumps()</code>.</p>
</li>
<li><p>This makes it easy to see what Azure Vision has detected in the image.</p>
</li>
</ul>
</li>
</ol>
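<p>To make the JSON output easier to use downstream, here is a sketch of post-processing a result shaped like the <code>response.as_dict()</code> dictionary. The sample values below are invented for illustration, not real API output:</p>

```python
# Illustrative result mirroring the shape of response.as_dict() for
# TAGS + CAPTION; the caption text, tag names, and scores are made up.
result = {
    "captionResult": {"text": "a bowl of fruit on a table", "confidence": 0.71},
    "tagsResult": {"values": [
        {"name": "fruit", "confidence": 0.99},
        {"name": "table", "confidence": 0.87},
        {"name": "indoor", "confidence": 0.55},
    ]},
}

# Keep only high-confidence tags, e.g. for search or cataloging.
confident_tags = [
    t["name"] for t in result["tagsResult"]["values"] if t["confidence"] >= 0.8
]
caption = result["captionResult"]["text"]

print(caption)         # a bowl of fruit on a table
print(confident_tags)  # ['fruit', 'table']
```

<p>Thresholding on the confidence score like this is a common way to trade recall for precision before the tags are stored or handed to an LLM.</p>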
<h2 id="heading-2-object-detection-with-azure-ai-vision-in-python">2- Object Detection with Azure AI Vision in Python</h2>
<p>Another powerful feature of Azure AI Vision is <strong>object detection</strong>. It not only identifies what objects are present in an image but also returns their <strong>locations using bounding boxes</strong>. This is especially useful in applications like surveillance, manufacturing defect detection, and retail product recognition.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> azure.ai.vision.imageanalysis <span class="hljs-keyword">import</span> ImageAnalysisClient
<span class="hljs-keyword">from</span> azure.ai.vision.imageanalysis.models <span class="hljs-keyword">import</span> VisualFeatures
<span class="hljs-keyword">from</span> azure.core.credentials <span class="hljs-keyword">import</span> AzureKeyCredential
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">from</span> dotenv <span class="hljs-keyword">import</span> load_dotenv

load_dotenv(<span class="hljs-string">"env.env"</span>)

endpoint=<span class="hljs-string">"https://az-computer-vision1.cognitiveservices.azure.com/"</span>
key=os.getenv(<span class="hljs-string">"AI_VISION_KEY"</span>)

client = ImageAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
<span class="hljs-keyword">with</span> open(<span class="hljs-string">"fruit_bucket.png"</span>, <span class="hljs-string">"rb"</span>) <span class="hljs-keyword">as</span> image:
    image_details=image.read()

response=client.analyze(
    image_data=image_details,
    visual_features=[VisualFeatures.OBJECTS]    <span class="hljs-comment"># detects objects in the image with bounding boxes</span>

)

print(json.dumps(response.as_dict(), indent=<span class="hljs-number">4</span>))
</code></pre>
<h3 id="heading-code-walkthrough-1">Code Walkthrough</h3>
<ol>
<li><p><strong>Importing Libraries</strong><br /> We bring in the same core classes (<code>ImageAnalysisClient</code>, <code>VisualFeatures</code>, and <code>AzureKeyCredential</code>) used earlier for tagging and captions.</p>
</li>
<li><p><strong>Authentication</strong></p>
<ul>
<li><p>The <code>endpoint</code> points to your Azure AI Vision resource.</p>
</li>
<li><p>The <code>key</code> is securely loaded from an <code>.env</code> file.</p>
</li>
</ul>
</li>
<li><p><strong>Image Input</strong></p>
<ul>
<li>We load the image <code>fruit_bucket.png</code> in binary format so it can be sent to the API.</li>
</ul>
</li>
<li><p><strong>Object Detection Request</strong></p>
<ul>
<li><p><code>VisualFeatures.OBJECTS</code> tells Azure Vision to detect all visible objects in the image.</p>
</li>
<li><p>The API responds with a list of objects, their confidence scores, and bounding box coordinates.</p>
</li>
</ul>
</li>
<li><p><strong>Readable Output</strong></p>
<ul>
<li><p>The <code>response.as_dict()</code> method returns the structured result as a dictionary.</p>
</li>
<li><p><code>json.dumps()</code> formats it neatly so we can see exactly what was detected.</p>
</li>
</ul>
</li>
</ol>
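<p>As with tagging, the raw response is easiest to consume once filtered. The sketch below works on a dictionary shaped like the <code>as_dict()</code> output for object detection; the detections themselves are invented for illustration:</p>

```python
# Illustrative result mirroring the shape of response.as_dict() for OBJECTS;
# the object names, boxes, and scores are made up for this example.
result = {
    "objectsResult": {"values": [
        {"boundingBox": {"x": 10, "y": 20, "w": 100, "h": 80},
         "tags": [{"name": "apple", "confidence": 0.91}]},
        {"boundingBox": {"x": 150, "y": 40, "w": 60, "h": 60},
         "tags": [{"name": "banana", "confidence": 0.42}]},
    ]},
}

# Keep confident detections and compute each bounding-box area in pixels,
# e.g. to decide which detections are large enough to crop or highlight.
detections = []
for obj in result["objectsResult"]["values"]:
    tag = obj["tags"][0]
    if tag["confidence"] < 0.5:
        continue  # drop low-confidence detections
    box = obj["boundingBox"]
    detections.append((tag["name"], box["w"] * box["h"]))

print(detections)  # [('apple', 8000)]
```

<p>The bounding-box coordinates are what you would pass to an image library to draw rectangles over the original picture.</p>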
<h2 id="heading-3-extracting-text-from-images-with-azure-ai-vision-ocr">3- Extracting Text from Images with Azure AI Vision (OCR)</h2>
<p>Azure AI Vision provides powerful <strong>OCR (Optical Character Recognition)</strong> capabilities. With this feature, we can detect both <strong>printed and handwritten text</strong> from images and documents. This is especially useful in scenarios like digitizing scanned documents, extracting text from receipts, or reading quotes from images.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> azure.ai.vision.imageanalysis <span class="hljs-keyword">import</span> ImageAnalysisClient
<span class="hljs-keyword">from</span> azure.ai.vision.imageanalysis.models <span class="hljs-keyword">import</span> VisualFeatures
<span class="hljs-keyword">from</span> azure.core.credentials <span class="hljs-keyword">import</span> AzureKeyCredential
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">from</span> dotenv <span class="hljs-keyword">import</span> load_dotenv

load_dotenv(<span class="hljs-string">"env.env"</span>)

endpoint=<span class="hljs-string">"https://az-computer-vision1.cognitiveservices.azure.com/"</span>
key=os.getenv(<span class="hljs-string">"AI_VISION_KEY"</span>)

client = ImageAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
<span class="hljs-keyword">with</span> open(<span class="hljs-string">"quote.jpg"</span>, <span class="hljs-string">"rb"</span>) <span class="hljs-keyword">as</span> image:
    image_details=image.read()

response=client.analyze(
    image_data=image_details,
    visual_features=[VisualFeatures.READ]    <span class="hljs-comment"># detects text in the image using Optical Character Recognition (OCR)</span>

)

<span class="hljs-keyword">for</span> line <span class="hljs-keyword">in</span> response.read.blocks[<span class="hljs-number">0</span>].lines:
    print(line.text)
</code></pre>
<h3 id="heading-code-walkthrough-2">Code Walkthrough</h3>
<ol>
<li><p><strong>Authentication</strong></p>
<ul>
<li><p>The endpoint and key are loaded from the <code>.env</code> file to keep secrets secure.</p>
</li>
<li><p><code>ImageAnalysisClient</code> is initialized to interact with Azure AI Vision.</p>
</li>
</ul>
</li>
<li><p><strong>Reading the Image</strong></p>
<ul>
<li>The file <code>quote.jpg</code> is loaded in binary mode before sending it to the API.</li>
</ul>
</li>
<li><p><strong>Performing OCR</strong></p>
<ul>
<li><p>The <code>VisualFeatures.READ</code> option tells Azure Vision to run OCR.</p>
</li>
<li><p>The response contains detected text blocks, lines, and even word-level details if needed.</p>
</li>
</ul>
</li>
<li><p><strong>Extracting Results</strong></p>
<ul>
<li>We loop through <code>response.read.blocks[0].lines</code> and print each line of detected text.</li>
</ul>
</li>
</ol>
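<p>To illustrate the word-level details mentioned in step 4, here is a small, hedged helper that walks the same blocks/lines/words shape. The input below is a hand-built mock dict mimicking <code>response.read</code>, not a real Azure response (the SDK returns attribute-style objects), so the snippet runs without any Azure resources.</p>

```python
# Illustrative helpers over a READ-result-like structure (mock data, not a
# captured Azure AI Vision response).

def extract_text(read_result: dict) -> list[str]:
    """Collect every detected line of text across all blocks."""
    lines = []
    for block in read_result.get("blocks", []):
        for line in block.get("lines", []):
            lines.append(line["text"])
    return lines

def extract_words(read_result: dict, min_confidence: float = 0.0) -> list[str]:
    """Collect word-level results, optionally filtering by confidence."""
    words = []
    for block in read_result.get("blocks", []):
        for line in block.get("lines", []):
            for word in line.get("words", []):
                if word.get("confidence", 1.0) >= min_confidence:
                    words.append(word["text"])
    return words

# Mock shaped like the response.read object used in the snippet above
mock_read = {
    "blocks": [
        {
            "lines": [
                {"text": "Stay hungry,", "words": [
                    {"text": "Stay", "confidence": 0.99},
                    {"text": "hungry,", "confidence": 0.97},
                ]},
                {"text": "stay foolish.", "words": [
                    {"text": "stay", "confidence": 0.98},
                    {"text": "foolish.", "confidence": 0.42},
                ]},
            ]
        }
    ]
}

print(extract_text(mock_read))        # ['Stay hungry,', 'stay foolish.']
print(extract_words(mock_read, 0.9))  # drops the low-confidence 'foolish.'
```

<p>The same confidence filter can be useful for discarding noisy OCR results from low-quality scans.</p>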
]]></content:encoded></item><item><title><![CDATA[Exploring the Role of Retrieval-Augmented Generation (RAG) in Modern AI]]></title><description><![CDATA[What is RAG?
Retrieval-Augmented Generation (RAG) pairs a Large Language Model (LLM) with external knowledge sources—such as databases, blob/object stores, file shares, and document repositories—to ground its responses in real data. Rather than relyi...]]></description><link>https://www.devopswithritesh.in/exploring-the-role-of-retrieval-augmented-generation-rag-in-modern-ai</link><guid isPermaLink="true">https://www.devopswithritesh.in/exploring-the-role-of-retrieval-augmented-generation-rag-in-modern-ai</guid><category><![CDATA[AI]]></category><category><![CDATA[Azure OpenAI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[Azure  AI 102]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Thu, 21 Aug 2025 14:48:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755787059235/90a9f99d-8afc-4b3b-b1d6-5a9ab3e2481e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-rag">What is RAG?</h1>
<p>Retrieval-Augmented Generation (RAG) pairs a Large Language Model (LLM) with external knowledge sources—such as databases, blob/object stores, file shares, and document repositories—to ground its responses in real data. Rather than relying solely on its pretraining, the LLM first retrieves relevant context from these connected sources and then generates an answer that blends that up-to-date evidence with its language capabilities. This approach improves factual accuracy, reduces hallucinations, and lets applications incorporate private or domain-specific knowledge without retraining the model.</p>
<h1 id="heading-sources-for-retrieval-augmented-generation-in-azure">Sources for Retrieval Augmented Generation in Azure</h1>
<p>In Azure, you can connect multiple enterprise data sources to power RAG with current, organization-specific knowledge. Common sources include:</p>
<ul>
<li><p>Azure Blob Storage: documents, PDFs, text files, and other unstructured content</p>
</li>
<li><p>SharePoint: internal company docs, policies, and knowledge-base articles</p>
</li>
<li><p>File shares: content from network drives and on-prem servers</p>
</li>
<li><p>Databases: SQL databases, Azure Cosmos DB, and more</p>
</li>
<li><p>Web and APIs: company intranet, public sites, and custom REST endpoints</p>
</li>
<li><p>OneDrive: Office documents such as Word, Excel, and PowerPoint</p>
</li>
</ul>
<p>These sources can be ingested and indexed in Azure AI Search, which then serves as the retrieval layer for Retrieval-Augmented Generation—grounding your LLM with authoritative, up-to-date enterprise data.</p>
<h1 id="heading-internal-flow-of-rag-with-azure">Internal flow of RAG with Azure</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755785870720/e79dc655-344a-469d-96d3-b34ae99ac9c9.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>User query: Example: “What is our company’s travel reimbursement policy?”</p>
</li>
<li><p>Route to search: The LLM forwards the query to Azure AI Search (formerly Cognitive Search) rather than guessing from pretraining alone.</p>
</li>
<li><p>Retrieve from enterprise index: Azure AI Search scans its index built from enterprise sources (Blob Storage, SharePoint, file shares, databases, websites/APIs, OneDrive).</p>
</li>
<li><p>Vector and semantic retrieval: It fetches the most relevant chunks using vector search and semantic ranking (beyond simple keyword matching).</p>
</li>
<li><p>Ground the model: The retrieved passages are passed back to the LLM as context (grounding data).</p>
</li>
<li><p>Generate the answer: The LLM composes a final, natural-language response that cites and synthesizes the retrieved context with its reasoning.</p>
</li>
</ol>
<p>Result: Answers are accurate, current, and aligned with company policy—while minimizing hallucinations and avoiding retraining.</p>
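<p>The six-step flow above can be sketched with a toy, self-contained retriever. A real deployment uses Azure AI Search with vector and semantic ranking; here retrieval is plain word-overlap scoring and the "generation" step is a stub, so the example runs without any Azure resources. All document text and function names are illustrative.</p>

```python
import re

# Toy corpus standing in for an Azure AI Search index (illustrative content).
DOCUMENTS = [
    "Travel reimbursement: submit expense reports within 30 days of the trip.",
    "Remote work policy: employees may work remotely three days per week.",
    "Security policy: rotate access keys every 90 days.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase and split into alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Steps 2-4: rank documents by token overlap with the query."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)[:top_k]

def answer(query: str) -> str:
    """Steps 5-6: 'ground' the response on the retrieved context (stubbed LLM)."""
    context = retrieve(query, DOCUMENTS)[0]
    return f"Based on company policy: {context}"

print(answer("What is the travel reimbursement policy?"))
```

<p>Swapping the overlap score for cosine similarity over embeddings is essentially what the vector search step in Azure AI Search does at scale.</p>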
<h1 id="heading-rag-implementation-in-python-with-azure-services">RAG implementation in Python with Azure services</h1>
<h2 id="heading-what-youll-build">What you’ll build</h2>
<p>A Retrieval-Augmented Generation (RAG) app where an LLM (GPT‑4) answers questions grounded in your enterprise content indexed by Azure AI Search and stored in Azure Blob Storage.</p>
<h2 id="heading-resources-required-on-azure">Resources required on Azure</h2>
<ol>
<li><p>LLM: GPT‑4 deployment in Azure AI Foundry (Azure OpenAI)</p>
</li>
<li><p>Retrieval: Azure AI Search with vector search enabled</p>
</li>
<li><p>Content store: Azure Blob Storage (one container with your documents: PDFs, DOCX, TXT, etc.)</p>
</li>
<li><p>App runtime: Python (requests/openai/azure-openai SDKs) and a .env file for secrets</p>
</li>
</ol>
<h2 id="heading-high-level-architecture">High-level architecture</h2>
<ol>
<li><p>Documents live in Azure Blob Storage.</p>
</li>
<li><p>Azure AI Search ingests and indexes them (including embeddings for vector search).</p>
</li>
<li><p>Your Python app sends a user query to GPT‑4 and passes RAG config that points to the AI Search index.</p>
</li>
<li><p>Azure AI Search returns the most relevant chunks (vector + semantic retrieval).</p>
</li>
<li><p>GPT‑4 generates an answer grounded in those chunks.</p>
</li>
</ol>
<h2 id="heading-setup-checklist">Setup checklist</h2>
<ul>
<li><p>Create a Blob Storage account and container; upload sample docs.</p>
</li>
<li><p>Provision Azure AI Search; enable vector search; create index (schema with content, metadata, vector fields).</p>
</li>
<li><p>Connect the blob container to AI Search (data source + indexer) so content is ingested and chunked.</p>
</li>
<li><p>In Azure AI Foundry, deploy a GPT‑4 model and note the endpoint, API version, and key.</p>
</li>
<li><p>Capture keys/URLs in env.env: AZURE_OPENAI_API_KEY, AZURE_SEARCH_API_KEY, Azure OpenAI endpoint, Azure AI Search endpoint, Index name</p>
</li>
<li><p>In Python, load env.env, initialize the Azure OpenAI client, and attach the AI Search data source via extra_body (as shown in the snippet).</p>
</li>
</ul>
<h1 id="heading-code-snippet">Code Snippet</h1>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">from</span> openai <span class="hljs-keyword">import</span> AzureOpenAI
<span class="hljs-keyword">from</span> dotenv <span class="hljs-keyword">import</span> load_dotenv <span class="hljs-comment"># Load environment variables from .env file</span>

load_dotenv(<span class="hljs-string">"env.env"</span>)  <span class="hljs-comment"># Load environment variables from the specified .env file</span>

subscription_key = os.getenv(<span class="hljs-string">"AZURE_OPENAI_API_KEY"</span>)
client = AzureOpenAI(
    api_version=<span class="hljs-string">"2024-12-01-preview"</span>,
    azure_endpoint=<span class="hljs-string">"https://ai-foundary-demo.cognitiveservices.azure.com/"</span>,
    api_key=subscription_key,
)

rag_parameters = {
    <span class="hljs-string">"data_sources"</span>: [
        {
            <span class="hljs-string">"type"</span>: <span class="hljs-string">"azure_search"</span>,
            <span class="hljs-string">"parameters"</span>: {
                <span class="hljs-string">"endpoint"</span>: <span class="hljs-string">"https://aisearch20251999.search.windows.net"</span>,
                <span class="hljs-string">"index_name"</span>: <span class="hljs-string">"index"</span>,
                <span class="hljs-string">"authentication"</span>: {
                    <span class="hljs-string">"type"</span>: <span class="hljs-string">"api_key"</span>,
                    <span class="hljs-string">"key"</span>: os.getenv(<span class="hljs-string">"AZURE_SEARCH_API_KEY"</span>),
                }

            }
        }
    ],

}

response = client.chat.completions.create(
    messages=[
        {
            <span class="hljs-string">"role"</span>: <span class="hljs-string">"system"</span>,
            <span class="hljs-string">"content"</span>: <span class="hljs-string">"You are a helpful assistant that helps student learn Python Basics."</span>
        },
        {<span class="hljs-string">"role"</span>: <span class="hljs-string">"user"</span>,
         <span class="hljs-string">"content"</span>: <span class="hljs-string">"What is Python?"</span>

         }
    ],
    temperature=<span class="hljs-number">0.7</span>,
    top_p=<span class="hljs-number">0.9</span>,
    model=<span class="hljs-string">"gpt-4.1"</span>,
    extra_body=rag_parameters
)

model_dump_response=response.model_dump()

print(response.choices[<span class="hljs-number">0</span>].message.content)  <span class="hljs-comment"># Print the response in a readable format</span>
</code></pre>
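<p>When Azure AI Search is attached via <code>extra_body</code>, the grounding details typically come back under the message's <code>context</code> field, which is why the snippet keeps <code>response.model_dump()</code> around. The exact shape can vary by API version, so the helper below works on the dumped dict, and the sample input is a hand-built mock rather than a captured response.</p>

```python
# Hedged sketch: pull citation titles out of a model_dump()-style dict from an
# Azure OpenAI "on your data" response. The mock below is illustrative only.

def extract_citations(dump: dict) -> list[str]:
    """Return the titles of any citations the service grounded the answer on."""
    message = dump["choices"][0]["message"]
    context = message.get("context") or {}
    return [c.get("title", "") for c in context.get("citations", [])]

mock_dump = {
    "choices": [
        {
            "message": {
                "content": "Python is a high-level programming language...",
                "context": {
                    "citations": [
                        {"title": "python-basics.pdf", "content": "Python is ..."},
                        {"title": "intro-to-python.docx", "content": "..."},
                    ]
                },
            }
        }
    ]
}

print(extract_citations(mock_dump))  # ['python-basics.pdf', 'intro-to-python.docx']
```

<p>Surfacing these titles in your app lets users verify which indexed documents an answer was grounded on.</p>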
]]></content:encoded></item><item><title><![CDATA[DevOps Interview Questions-2025]]></title><description><![CDATA[Git Questions

How to initialize Git locally?
 Ans: We can initialize git using the git init command. Once git is initialized, a .git directory will be created which is responsible for handling all git operations.

What is a Pre-Commit Hook?
 Ans...]]></description><link>https://www.devopswithritesh.in/devops-interview-questions-2025</link><guid isPermaLink="true">https://www.devopswithritesh.in/devops-interview-questions-2025</guid><category><![CDATA[Devops]]></category><category><![CDATA[interview questions]]></category><category><![CDATA[DevOps Interview Questions and Answers]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Mon, 05 May 2025 05:23:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745390599614/df1b15a3-44b1-48c7-b1b9-8b0ca2a147bd.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-git-questions"><em>Git Questions</em></h1>
<ol>
<li><p><strong>How to initialize Git locally?</strong></p>
<p> <strong>Ans</strong>: We can initialize git using the <code>git init</code> command. Once git is initialized, a <code>.git</code> directory will be created, which is responsible for handling all git operations.</p>
</li>
<li><p><strong>What is a Pre-Commit Hook?</strong></p>
<p> <strong>Ans</strong>: A pre-commit hook is a script that executes before a commit is made. These hooks are part of <em>git’s native hook framework</em> and are located in the <code>.git/hooks</code> directory.</p>
<p> Pre-commit hooks are extremely helpful in preventing sensitive information like passwords, secrets, access tokens, etc, from being committed to git. When a user runs git commit, the pre-commit is triggered before the commit is finalized, and if the hook fails, then the commit is aborted.</p>
</li>
<li><p><strong>Which command is used to compare the changes in your working directory with the last committed version in the repository?</strong></p>
<p> <strong>Ans</strong>: <code>git diff</code> command is used to compare changes in your working directory with the last committed version in the repository. It also allows you to see the difference between two commits.</p>
<p> <em>E.g.:</em> <code>git diff &lt;commit_id1&gt; &lt;commit_id2&gt;</code></p>
</li>
<li><p><strong>What is git fork?</strong></p>
<p> <strong>Ans</strong>: A fork is not a native git command but a hosting-platform feature (e.g., on GitHub or GitLab) that creates a copy of a repository under your own account, where you can modify it and create your own version of the repository.</p>
</li>
<li><p><strong>Can you explain the usage of git cherry-pick?</strong></p>
<p> <strong>Ans</strong>: git cherry-pick is the command used to pick a particular commit and apply it to another branch, such as main. It is useful when you want to bring over only a specific commit. You have to be on the target branch before cherry-picking.</p>
<p> E.g.: <code>git cherry-pick &lt;commitid&gt;</code></p>
</li>
<li><p><strong>What is the difference between</strong> <code>git merge</code> <strong>and</strong> <code>git rebase</code><strong>?</strong></p>
<p> <strong>Ans</strong>: Both git merge and git rebase serve the same purpose, that is, merging your code from a feature branch to the main branch, but in 2 different ways as follows:</p>
<p> When you use git rebase, your feature commits are replayed on top of the target branch, giving a linear commit history; git merge instead creates an extra merge commit at the top that ties the two histories together, so the history is no longer linear.</p>
</li>
<li><p><strong>What is the difference between</strong> <code>git pull</code> <strong>and</strong> <code>git fetch</code><strong>?</strong></p>
<p> <strong>Ans</strong>: <code>git fetch</code> command only informs you about the changes that are there on the remote but not available on your local working copy, but it does not bring the changes to your local code base, whereas git pull directly updates your local code base with the new changes from the remote repository.</p>
<p> <code>git pull == git fetch + git merge</code></p>
</li>
<li><p><strong>What are pre-commit hooks and post-commit hooks?</strong></p>
<p> <strong>Ans</strong>: A hook is something that you want to run before and after the occurrence of an action. So, pre-commit hooks are actions taken before you do the commit:</p>
<p> <strong><em>Example</em></strong>: when you have a public key, private key, or password that you do not wish to be pushed to git, you can configure a pre-commit hook to execute a script that scans the code for certain patterns before every commit you make.</p>
<p> Similarly, post-commit hooks are nothing but actions configured to be run post a commit is made:</p>
<p> <strong><em>Example</em></strong>: running static analysis tools, linters, or code formatters to ensure committed code adheres to defined quality standards.</p>
</li>
<li><p><strong>What is</strong> <code>git stash</code><strong>, and what are its use cases?</strong></p>
<p> <strong>Ans:</strong> git stash is the command that is used to temporarily save the changes made to your working copy.</p>
<p> <strong><em>Use-Case:</em></strong></p>
<ul>
<li><p><strong><em>Temporarily saving the work:</em></strong> <em>When you need to switch branches to work on something urgent but don’t want to commit the incomplete changes, then you can do</em> <code>git stash</code> <em>—&gt;</em> <code>git checkout another_branch</code> <em>—&gt; work on the other branch —&gt;</em> <code>git checkout original_branch</code> <em>—&gt;</em> <code>git stash pop</code><em>.</em></p>
</li>
<li><p><strong><em>Pulling Updates without losing the work:</em></strong> <em>if you need to pull the latest changes from the remote repository, but you have some uncommitted changes, then you can do</em> <code>git stash</code> <em>—&gt;</em> <code>git pull</code> <em>—&gt;</em> <code>git stash pop</code><em>.</em></p>
</li>
<li><p><strong><em>Resolving merge conflicts:</em></strong> <em>if merge or rebase fails due to uncommitted changes, then you can do</em> <code>git stash</code> <em>—&gt;</em> <code>git merge target_branch</code> <em>—&gt; resolve the conflicts—&gt;</em> <code>git stash pop</code></p>
</li>
</ul>
</li>
<li><p><strong>How to amend a commit in git?</strong></p>
<p><strong>Ans:</strong> When you have made a commit with some mistake and pushed it to the remote repository, and now you want to rectify that particular commit, then you can use <code>git commit --amend,</code> It allows you to modify the most recent commit</p>
</li>
<li><p><strong>How do you revert a commit that has already been pushed and made public?</strong></p>
<p><strong>Ans:</strong> You can undo the changes by making another commit using the command <code>git revert &lt;commit_id&gt;</code></p>
</li>
<li><p><strong>What is the difference between</strong> <code>git stash apply</code> <strong>and</strong> <code>git stash pop</code><strong>?</strong></p>
<p><strong>Ans:</strong> The key difference between git stash pop and git stash apply lies in how they handle the stash after applying the changes in your working directory:</p>
<ul>
<li><p><code>git stash pop</code> Applies the stash and removes it from the stash stack permanently. After running git stash pop, the stashed changes are restored to your working directory and stash id is deleted from the stash stack. It should be used when you are sure that you do not need that stash after applying it because once your working copy is restored, the stash will be removed.</p>
</li>
<li><p><code>git stash apply</code> Applies the stash but keeps it in the stash stack. After running the git stash apply, the stashed changes are restored to your working directory but the stash remains in the stack for future use. It is useful when you want to keep the stash for future use.</p>
</li>
</ul>
</li>
<li><p><strong>How to find a list of files that are changed during a commit?</strong></p>
<p><strong>Ans:</strong> <code>git diff-tree -r &lt;commit_id&gt;</code></p>
</li>
<li><p><strong>How to check whether a branch has already been merged to the master or not?</strong></p>
<p><strong>Ans:</strong> <code>git branch --merged</code> lists branches already merged into the current branch, and <code>git branch --no-merged</code> lists branches that have not been merged yet.</p>
</li>
<li><p><strong>How to remove a file from your git without removing it from your local filesystem?</strong></p>
<p><strong>Ans:</strong> using <code>git rm --cached &lt;filename&gt;</code>, which removes the file from git’s index (so it is no longer tracked) while leaving it on your local filesystem. By contrast, <code>git reset &lt;filename&gt;</code> only unstages changes and is the opposite of <code>git add</code>.</p>
</li>
<li><p><strong>What is the difference between git revert and git reset?</strong></p>
<p><strong>Ans:</strong> git reset is used to return the entire working tree to the last committed state; it also resets the file from the staging area, whereas the git revert command adds a new history to the project and it does not modify the existing history.</p>
</li>
<li><p><strong>What is</strong> <code>git bisect</code>, <strong>and how do you use it to determine the source of a bug?</strong></p>
<p><strong>Ans:</strong> git bisect is used to find a commit that introduced a bug by using a binary search algorithm. You can use it by giving it a bad commit ID, which you think could be a potential reason for the bug being introduced, and a good commit up to which everything was working fine. Then this command picks the commits between these 2 endpoints and asks you whether the selected commit is good or bad. It keeps narrowing down the commits until it finds the buggy commit.</p>
</li>
<li><p><strong>What is</strong> <code>git reflog</code><strong>?</strong></p>
<p><strong>Ans:</strong> It keeps track of every change made in a repository's reference. It shows deleted, renamed, and all other actions when executed.</p>
</li>
</ol>
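<p>The pre-commit hooks described above (questions 2 and 8) can be sketched as a small Python script saved to <code>.git/hooks/pre-commit</code> and made executable. This is a minimal illustration, assuming a hypothetical hook: the secret patterns are examples only, and real setups typically rely on dedicated tools such as gitleaks or detect-secrets.</p>

```python
#!/usr/bin/env python3
# Illustrative pre-commit hook: save as .git/hooks/pre-commit and chmod +x.
# A non-zero exit code from this script aborts the commit.
import re
import subprocess

# Example patterns only; real hooks use far more thorough rule sets.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"""(?i)(password|secret|api[_-]?key)\s*[:=]\s*['"][^'"]+['"]"""),
]

def find_secrets(text: str) -> list[str]:
    """Return the secret-looking snippets found in text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

def scan_staged_files() -> int:
    """Scan files staged for this commit; return 1 (abort) if any secret is found."""
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True,
    ).stdout.split()
    for path in staged:
        try:
            content = open(path, errors="ignore").read()
        except OSError:
            continue
        if find_secrets(content):
            print(f"Possible secret in {path}; aborting commit.")
            return 1
    return 0

# Quick demo of the scanner itself (the hook would call sys.exit(scan_staged_files()))
print(find_secrets('db_password = "hunter2"'))  # the password assignment matches
```

<p>Once the hook exits non-zero, <code>git commit</code> stops before the commit is finalized, exactly as described in question 2.</p>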
<h1 id="heading-docker-questions"><em>Docker Questions</em></h1>
<ol>
<li><p><strong>What is Docker?</strong></p>
<p> <strong>Ans:</strong> Docker is an open-source containerization platform. It enables developers to package applications into containers. We have used Docker to build Docker images out of Docker files for lightweight application packaging.</p>
</li>
<li><p><strong>How are containers different from Virtual Machines?</strong></p>
<p> <strong>Ans:</strong> Containers are very lightweight because they do not bundle a complete OS with all its utilities; they ship only a minimal userland plus the application’s dependencies, sharing the host kernel.</p>
<p> Docker provides <em><mark>process-level isolation,</mark></em> whereas Virtual Machines provide stronger isolation.</p>
</li>
<li><p><strong>What is Docker Lifecycle?</strong></p>
<p> <strong>Ans:</strong> The general flow of the Docker lifecycle is as follows:</p>
<ul>
<li><p>Step 1: Create a Dockerfile with a set of instructions</p>
</li>
<li><p>Step 2: Building the Docker image from the Docker file</p>
</li>
<li><p>Step 3: Push the image to an image registry such as Docker Hub, ACR, or ECR</p>
</li>
<li><p>Step 4: Create the container out of the Docker image.</p>
</li>
</ul>
</li>
</ol>
<p>    A Docker image acts as a set of instructions to build a container, and it can be compared to a Snapshot in a VM.</p>
<ol start="4">
<li><p><strong>What are the different Docker components?</strong></p>
<p> <strong>Ans:</strong> Docker consists of several key components, such as:</p>
<ul>
<li><p><strong>Docker Engine</strong> which includes the Docker daemon for managing resources and the Docker CLI for user interaction.</p>
</li>
<li><p><strong>Docker Images</strong> serve as a blueprint for containers.</p>
</li>
<li><p><strong>Docker Containers</strong> are lightweight and portable instances of images.</p>
</li>
<li><p><strong>Docker Registry</strong> for storing images and sharing them.</p>
</li>
<li><p><strong>Docker Volume</strong> enables data persistence.</p>
</li>
<li><p><strong>Docker Network</strong> provides connectivity and isolation for containers.</p>
</li>
<li><p><strong>Docker Compose</strong> simplifies the management of multiple containers.</p>
</li>
</ul>
</li>
<li><p><strong>What is the difference between Docker</strong> <code>COPY</code> <strong>and</strong> <code>ADD</code><strong>?</strong></p>
<p> <strong>Ans:</strong> <code>ADD</code> can fetch files from a URL and automatically extract local tar archives, whereas <code>COPY</code> can only copy files from the build context into the container.</p>
</li>
<li><p><strong>What is the difference between</strong> <code>CMD</code> <strong>and</strong> <code>Entrypoint</code> in Docker?</p>
</li>
</ol>
<p><strong>Ans:</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>CMD</strong></td><td><strong>ENTRYPOINT</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Specifies the <mark>default</mark> command to execute when a container starts.</td><td>Specifies the main command that <mark>always executes </mark> when the container starts.</td></tr>
<tr>
<td>It can be <mark>overridden </mark> at runtime by providing an additional argument during docker run.</td><td>It is less likely to be overridden as it is treated as the container’s <mark>primary process</mark>.</td></tr>
<tr>
<td>CMD is best for providing default arguments that can be changed at runtime.</td><td>ENTRYPOINT is ideal for primary non-negotiable processes like a service or a script.</td></tr>
</tbody>
</table>
</div><p>We mostly use ENTRYPOINT and CMD together for better flexibility.</p>
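<p>A minimal sketch of combining the two (the image, script, and flag names here are illustrative):</p>

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY process.py .
# ENTRYPOINT fixes the primary process; CMD supplies default, overridable arguments
ENTRYPOINT ["python", "process.py"]
CMD ["--input", "default.csv"]
```

<p>With this setup, <code>docker run myimage</code> executes <code>python process.py --input default.csv</code>, while <code>docker run myimage --input other.csv</code> keeps the entrypoint and replaces only the CMD arguments.</p>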
<ol start="7">
<li><p><strong>What is the default networking in Docker, and explain networking types?</strong></p>
<p> <strong>Ans:</strong> Docker supports several networking types; however, the <strong><mark>bridge network</mark></strong> <mark>is the default</mark> networking in Docker. The following is a list of networking types that Docker supports:</p>
<ul>
<li><p><strong><em>Bridge Network (Default):</em></strong></p>
<p>  A private network where containers can communicate with each other through their IP addresses. It is ideal for standalone containers needing limited communication with the host or external network. Here, containers get a private IP address, and you can use <code>-p</code> or <code>--publish</code> to expose their services.</p>
<p>  <code>docker network ls</code> Can show you the available networks</p>
</li>
<li><p><strong><em>Host Network:</em></strong></p>
<p>  A host network <mark>removes the network isolation between the container and the host.</mark> The container directly uses the host’s network stack. It is ideal for performance-sensitive applications where networking overhead needs to be minimized. No additional port mapping is required; anyone with access to the host IP can reach the application at <code>hostip:port</code>.</p>
<p>  <code>docker run --network host my-container</code> The command can be used to run a container with the host network.</p>
</li>
<li><p><strong><em>Overlay Network:</em></strong></p>
<p>  It allows communication between containers across multiple Docker hosts in a swarm or K8S cluster. It is ideal for <mark>distributed systems or multi-host environments</mark>.</p>
<p>  <code>docker network create -d overlay my-overlay-network</code> Command is used to create an overlay network.</p>
</li>
<li><p><strong><em>MacVLAN Network:</em></strong></p>
<p>  It assigns containers their own <mark>MAC address </mark> and allows them to <mark>appear as physical devices</mark> on the network. It is ideal for a legacy application requiring direct access to the physical network.</p>
<p>  <code>docker network create -d macvlan --subnet=192.168.1.0/24 my-macvlan-network</code> This command can be used to create a MacVLAN network.</p>
</li>
<li><p><strong><em>Custom-Bridge Network:</em></strong></p>
<p>  Similar to the default bridge network, but allows user-defined configuration for better control. Containers on the same custom bridge network can resolve each other by container names.</p>
<p>  <code>docker network create my-bridgenetwork</code> This command can be used to create a custom network.</p>
</li>
<li><p><strong><em>None-Network:</em></strong></p>
<p>  It disables networking for the container completely and is useful for security purposes or for a container that doesn't require networking.</p>
<p>  <code>docker run --network none mycontainer</code></p>
</li>
</ul>
</li>
<li><p><strong>Can you explain how to isolate networking between containers?</strong></p>
<p> <strong>Ans:</strong> By default, all containers are attached to the default bridge network (the <code>docker0</code> interface on the host). To isolate and secure a container, you can create a custom bridge network and attach only the targeted containers to it.</p>
</li>
<li><p><strong>What is a multistage build in Docker?</strong></p>
<p> <strong>Ans:</strong> Multistage build in Docker allows you to build your Docker container in multiple stages, allowing you to copy only necessary artifacts from one stage to another. The major advantage is that it helps build lightweight containers.</p>
<p> A multistage build contains multiple <code>FROM</code> instructions in a single Dockerfile. For example, you can compile the application in one stage (the build stage) and copy only the compiled binary to a lightweight runtime image in the final stage. This reduces image size, improves security, and simplifies the build process.</p>
</li>
<li><p><strong>What are distro-less images in Docker?</strong></p>
<p><strong>Ans:</strong> Distro-less images are minimalist Docker images that include only the application and its required runtime, excluding OS package managers, shells, and other unnecessary utilities.</p>
<p>They offer smaller image sizes, enhance security by reducing attack surface, and ensure consistency in the production environment.</p>
<p><strong>NOTE:</strong> Since distro-less images lack system native tools and utilities, the debugging should be done at the application or build pipeline level.</p>
<pre><code class="lang-dockerfile"># Stage 1: build the Java application
FROM maven:3.8.7-openjdk-17 AS builder
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn package
# Stage 2: use a distro-less image for the runtime
FROM gcr.io/distroless/java:17
WORKDIR /app
COPY --from=builder /app/target/myapp.jar .
CMD ["myapp.jar"]
</code></pre>
</li>
<li><p><strong>What are some real-time challenges with Docker?</strong></p>
<p><strong>Ans:</strong> Some challenges and disadvantages of Docker are stated as follows:</p>
<ul>
<li><p>Docker is a single daemon process, which can cause a single point of failure. In case the Docker daemon goes down, then all the applications will face downtime.</p>
</li>
<li><p>Docker daemon runs as the root user, which is a security threat. Any process running with root privileges can have an adverse effect when it is compromised, as it can affect other applications and containers on the host.</p>
</li>
<li><p>If you are running too many containers on a single host, then you may experience issues with resource constraints. This can result in slow performance or crashes.</p>
</li>
</ul>
</li>
<li><p><strong>What step would you take to secure containers?</strong></p>
<p><strong>Ans:</strong> We can take the following safety measures to secure containers:</p>
<ul>
<li><p>Use distro-less images so there are fewer chances of CVE or security issues</p>
</li>
<li><p>Ensure that the networking is configured properly. This is one of the common reasons for security issues. If required, configure a custom bridge network and assign it to isolate containers.</p>
</li>
<li><p>Use utilities like Docker Scout to scan your container images.</p>
</li>
</ul>
</li>
<li><p><strong>You have noticed many stopped containers and unused networks taking up space. Describe how</strong> you would <strong>clean up these resources effectively.</strong></p>
<p><strong>Ans:</strong> Docker’s <code>prune</code> commands can be used to remove unused resources:</p>
<ul>
<li><p><code>docker image prune</code>: to remove unused images</p>
</li>
<li><p><code>docker container prune</code>: to remove stopped containers</p>
</li>
<li><p><code>docker volume prune</code>: to remove volumes</p>
</li>
<li><p><code>docker network prune</code>: to remove networks</p>
</li>
</ul>
</li>
</ol>
<p>    Running <code>docker system prune</code> combines most of the above and cleans up all resources that are <mark>not associated with any running containers</mark> (add <code>--volumes</code> to also remove unused volumes).</p>
<ol start="14">
<li><p><strong>You are working on a project that requires Docker containers to persistently store data. How would you handle persistent storage in Docker?</strong></p>
<p><strong>Ans:</strong> Storing data persistently means saving, accessing, and reusing the data even after the containers are destroyed. This can be achieved in the following two ways:</p>
<p><strong><em>Docker Volume:</em></strong></p>
<p>Docker volume can be used for persistent storage. They are managed by Docker and can be attached to one or more containers. Docker volumes are more reliable and efficient for persisting data <mark>across the container lifecycle</mark>. These are easy to back up and can be maintained independently, even if a container is stopped or removed.</p>
<p>Example: in a production environment, you might create a named volume to persist database data like this</p>
<p>—&gt; Create Docker Volume: <code>docker volume create db_data</code></p>
<p>—&gt; Attach the volume to the container whose data needs to be persisted: <code>docker run -d --name mysql -v db_data:/var/lib/mysql mysql:latest</code></p>
<p>This ensures the database data remains intact even if the container is stopped.</p>
<p><strong><em>Bind Mounts:</em></strong></p>
<p>In some cases, we can use bind mounts where a directory of the host machine is mapped directly to a directory in the container. It is useful in a development environment where you need immediate synchronization between host changes and containers.</p>
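<p>For example, a development setup might bind-mount a local source directory (the paths and image here are illustrative):</p>
<pre><code class="lang-bash"># Changes made in ./src on the host appear instantly inside the container:
docker run -d --name dev-web -v "$(pwd)/src:/usr/share/nginx/html:ro" nginx:alpine
</code></pre>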
</li>
<li><p><strong>A company wants to run thousands of containers. Is there any limit on how many containers you can run in Docker?</strong></p>
<p><strong>Ans:</strong> Docker itself imposes no hard limit on the number of containers. The practical limit is the host's capacity: if the machine runs out of CPU, memory, or kernel resources, it will not be able to launch further containers.</p>
</li>
<li><p><strong>You are managing a Docker environment and need to ensure that each container operates within defined CPU and memory limits. How do you limit the CPU and memory of a Docker container?</strong></p>
<p><strong>Ans:</strong> Docker allows you to limit the CPU and memory usage of a container using resource constraints. You can set the limits using the <code>--cpus</code> and <code>--memory</code> options of the <code>docker run</code> command.</p>
<p><strong>Example</strong>: <code>docker run --cpus=2 --memory=1g mycontainer</code></p>
</li>
<li><p><strong>Can you define the resource limit in the Docker file? Or are there any other ways you can limit resource usage?</strong></p>
<p><strong>Ans:</strong> Resource constraints, such as memory and CPU limits, <mark>cannot be defined</mark> directly in a Dockerfile. A Dockerfile defines the build instructions for an image, not its runtime behavior. Instead, the constraints are applied at runtime with <code>docker run</code>, or in the configuration files of orchestrators like Docker Compose and Kubernetes.</p>
<p><code>docker run --cpus="1.5" --memory="512m" myimage</code></p>
<p>Alternatively, in a Kubernetes pod definition, you can set limits like below:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">resources:</span>
    <span class="hljs-attr">limits:</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">"512Mi"</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">"1500m"</span>
</code></pre>
</li>
<li><p><strong>What is the difference between a Docker container and a Kubernetes pod?</strong></p>
<p><strong>Ans:</strong> A Docker container is a lightweight, <mark>single instance</mark> of an application. It is managed by Docker and <mark>provides process-level isolation</mark>.</p>
<p>On the other hand, a Kubernetes pod is a high-level abstraction that can contain one or more Docker containers(or other container runtimes).</p>
</li>
<li><p><strong>How would you debug issues in a Docker container?</strong></p>
<p><strong>Ans:</strong> There are several techniques to debug issues in a Docker container:</p>
<ul>
<li><p><strong>Logging:</strong> Docker captures a container’s standard output and error streams, making them easy to inspect with the <code>docker logs</code> command. So initially, we check the container logs.</p>
</li>
<li><p><strong>Shell Access:</strong> We can access a running container’s shell using the <code>docker exec</code> command, which allows you to investigate and troubleshoot issues interactively.</p>
</li>
<li><p><strong>Image Inspection:</strong> You can inspect the Docker image content and configuration using <code>docker image inspect</code> to check if there is any misconfiguration.</p>
</li>
<li><p><strong>Health Check:</strong> Docker supports defining health checks for containers, allowing you to monitor health status and automatically restart or take actions using pre-defined conditions.</p>
</li>
<li><p><strong>Tools:</strong> Additionally, if containers are configured with ELK or Prometheus, then we can make use of these tools to troubleshoot better.</p>
</li>
</ul>
</li>
<li><p><strong>Can you describe a situation where you optimized a Dockerfile for faster build times or smaller image size?</strong></p>
<p><strong>Ans:</strong> Optimizing Dockerfile could involve various stages like using smaller base images (Alpine images or distro-less images), reducing the number of layers by combining commands, or using multistage builds to exclude unnecessary files from the final image.</p>
</li>
<li><p><strong>How do you create a multi-stage build in Docker?</strong></p>
<p><strong>Ans:</strong> Multi-stage Docker builds allow you to create optimized images by leveraging multiple build stages. You define multiple <code>FROM</code> instructions in the Dockerfile, each starting a different build stage (each stage can use a different base image), and a later stage copies only the required artifacts from an earlier one using the <code>COPY --from</code> instruction. This technique reduces image size by excluding build tools and dependencies from the final image.</p>
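<p>A minimal sketch of such a Dockerfile (the Go application and paths are hypothetical):</p>
<pre><code class="lang-dockerfile"># Stage 1: build stage with the full toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Stage 2: minimal runtime image; only the compiled binary is copied over
FROM gcr.io/distroless/static-debian12
COPY --from=builder /bin/app /app
ENTRYPOINT ["/app"]
</code></pre>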
</li>
<li><p><strong>How do you create a custom Docker network?</strong></p>
<p><strong>Ans:</strong> To create a custom Docker network, you can use the <code>docker network create</code> command.</p>
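<p>For instance (the network and image names are illustrative):</p>
<pre><code class="lang-bash"># Create a user-defined bridge network; containers attached to it
# can reach each other by container name:
docker network create --driver bridge app_net
docker run -d --name api --network app_net myapi:latest
</code></pre>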
</li>
<li><p><strong>An update needs to be applied without any data loss. How would you update a Docker container without losing data?</strong></p>
<p><strong>Ans:</strong> Steps to update Docker container without losing data are as follows:</p>
<ul>
<li><p>Check where the application stores the data (i.e.,/var/lib/mysql)</p>
</li>
<li><p>Use <code>docker inspect &lt;container_name&gt;</code> to identify mounted volumes or bind mounts. Look for the <code>Mounts</code> section in the output to locate the data directory.</p>
</li>
<li><p>If it is a bind-mount method, then use the docker cp command to copy data from the container to the host machine <code>docker cp &lt;container name&gt;:&lt;path in container&gt; &lt;path on host&gt;</code> .</p>
</li>
<li><p>If it is the Docker volume method, then you can create a tarball for the volume contents by running a <mark>temporary container to access the volume and then tar it</mark>.</p>
</li>
<li><p>For databases, we can use application-specific backup tools such as <code>mysqldump</code> for MySQL or <code>pg_dump</code> for PostgreSQL.</p>
</li>
<li><p>Then, stop the container using <code>docker stop</code>.</p>
</li>
<li><p>Pull the latest version of the image using <code>docker pull</code>.</p>
</li>
<li><p>Start a new container with the latest image, making sure to map any volumes or bind mounts and make sure the container is working fine.</p>
</li>
</ul>
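<p>The volume-backup step above can be sketched as follows (the volume and file names are illustrative):</p>
<pre><code class="lang-bash"># Run a throwaway container that mounts the volume and tars its contents
# into the current host directory:
docker run --rm -v db_data:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/db_data.tar.gz -C /data .
</code></pre>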
</li>
<li><p><strong>How do you secure Docker containers?</strong></p>
<p><strong>Ans:</strong> A few ways to secure containers are as follows:</p>
<ul>
<li><p>Use distro-less images having fewer packages and dependencies.</p>
</li>
<li><p>Use custom networking instead of the default bridge network.</p>
</li>
<li><p>Use image scanning tools like Docker Scout.</p>
</li>
</ul>
</li>
<li><p><strong>Have you ever used Docker in combination with CI/CD?</strong></p>
<p><strong>Ans:</strong> Yes, in all our projects we use Docker with Azure DevOps CI/CD. For example, in the current project we have integrated Docker into the build stage: a build job uses the <code>Docker@2</code> task in Azure DevOps to build the image and push it to ACR.</p>
<p>The image is then pulled into AKS for deployment. We have also configured Azure App Service to run the container.</p>
</li>
<li><p><strong>How do you monitor Docker containers?</strong></p>
<p><strong>Ans:</strong> There are various ways to monitor containers, such as:</p>
<ul>
<li><p>Using Docker’s built-in commands such as <code>docker stats</code> (also available as <code>docker container stats</code>)</p>
</li>
<li><p>In our project setup, Docker container monitoring is primarily handled by a dedicated monitoring team using the ELK stack.</p>
</li>
<li><p>For instance, I ensured all containers had proper log routing to the monitoring team’s ELK stack by configuring log drivers and standardizing log formats.</p>
</li>
<li><p>The monitoring team created Kibana dashboards to visualize container performance, and I ensured each container was configured properly for logging by setting up log drivers or editing <code>/etc/docker/daemon.json</code> to apply the configuration globally. This configuration was performed via Ansible during host VM configuration.</p>
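<p>A minimal sketch of such a global logging configuration in <code>/etc/docker/daemon.json</code> (the values are illustrative, and the Docker daemon must be restarted for it to take effect):</p>
<pre><code class="lang-json">{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
</code></pre>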
</li>
<li><p>I also work closely with the monitoring team to define relevant metrics such as resource usage, errors, and access logs.</p>
</li>
</ul>
</li>
</ol>
<h1 id="heading-terraform-questions">Terraform Questions</h1>
<ol>
<li><p><strong>What is Terraform, and how does it work?</strong></p>
<p> <strong>Ans:</strong> Terraform is an IaC (Infrastructure as Code) tool that lets you write code to define and manage your cloud and on-prem infrastructure. With Terraform, you describe the desired state of your infrastructure in Terraform manifests, and Terraform figures out how to achieve that state by interacting with cloud provider APIs. As part of this, Terraform also maintains a state file recording what it manages.</p>
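<p> A minimal sketch of a Terraform configuration (the resource names are illustrative):</p>
<pre><code class="lang-hcl">provider "azurerm" {
  features {}
}

# Desired state: one resource group. "terraform apply" creates or updates
# it so that reality matches this declaration, and records it in the state file.
resource "azurerm_resource_group" "example" {
  name     = "rg-demo"
  location = "East US"
}
</code></pre>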
</li>
<li><p><strong>A DevOps Engineer manually created infrastructure on Azure, and there is a requirement to manage that resource using Terraform. How would you import it into Terraform code?</strong></p>
<p> <strong>Ans:</strong> Yes, we can bring manually created resources under Terraform management (otherwise the code and the real infrastructure drift apart) by following the steps below:</p>
<ul>
<li><p>Write the Terraform configuration code for each resource you want to import.</p>
</li>
<li><p>Run the terraform import command for each resource, specifying the resource type and its unique identifier.</p>
<p>  <code>terraform import azurerm_resource_group.example /subscriptions/&lt;SUBSCRIPTION_ID&gt;/resourceGroups/rg-dev-apps</code> .</p>
</li>
<li><p>Verify the import using the <code>terraform show</code> command, which will display the latest state.</p>
</li>
<li><p>Run <code>terraform plan</code> to see if there are any discrepancies between the imported state and your configuration.</p>
</li>
</ul>
</li>
<li><p><strong>You have multiple environments, such as dev, qa, staging, and prod, for your application, and you want to use the same code for all these environments. How would you do that?</strong></p>
<p> <strong>Ans:</strong> There are multiple ways we can achieve this, and a few of them will be discussed here:</p>
<ul>
<li><p><strong><em>Using different</em></strong> <code>.tfvars</code> <strong><em>files:</em></strong> You can define a separate <code>.tfvars</code> file for each environment containing environment-specific variables, and when running <code>terraform apply</code> you can refer to the respective variable file with the following command:</p>
<p>  <code>terraform apply -var-file="dev.tfvars"</code></p>
<p>  <code>terraform apply -var-file="qa.tfvars"</code></p>
</li>
<li><p><strong><em>Using Workspaces:</em></strong> Terraform workspaces allow you to manage multiple state files within the same configuration; each workspace represents a separate environment with its own state file. We can create a workspace using the <code>terraform workspace new &lt;workspacename&gt;</code> command and switch between them using <code>terraform workspace select &lt;workspacename&gt;</code>. The current workspace is exposed as the <code>terraform.workspace</code> value, which can be referenced in the Terraform manifest to distinguish environments.</p>
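<p>  For example (a sketch with illustrative names), the workspace name can be interpolated into resource names:</p>
<pre><code class="lang-hcl">resource "azurerm_resource_group" "rg" {
  # Yields rg-app-dev, rg-app-qa, etc., depending on the selected workspace
  name     = "rg-app-${terraform.workspace}"
  location = "East US"
}
</code></pre>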
</li>
<li><p><strong><em>Using a dedicated directory structure per environment:</em></strong> We can create a separate directory for each environment, such as dev, qa, and uat, each with its own manifest files. We mostly avoid this approach, as it duplicates code and requires more maintenance.</p>
</li>
</ul>
</li>
<li><p><strong>What is a Terraform state file, and why is it important?</strong></p>
<p> <strong>Ans:</strong> The Terraform state file (<code>terraform.tfstate</code>) is a JSON file that stores the current state of the managed infrastructure. It is the heart of Terraform. <strong>It is like a blueprint that stores information about the infrastructure you manage.</strong></p>
<p> It is crucial because it helps Terraform to understand what is already set up and what changes need to be made to the existing infrastructure by comparing the current state and desired state.</p>
</li>
<li><p><strong>A DevOps Engineer accidentally deleted the state file. What step should be taken to resolve this?</strong></p>
<p> <strong>Ans: The</strong> following steps can be taken to resolve this issue:</p>
<ul>
<li><p><strong><em>Recover Backup:</em></strong> If available, restore the state file from a recent backup. When the Terraform state is managed locally, a <code>terraform.tfstate.backup</code> file is created every time someone runs <code>terraform apply</code>, so you can simply rename <code>terraform.tfstate.backup</code> to <code>terraform.tfstate</code> to recover the most recent state.</p>
</li>
<li><p><strong><em>Recreate the State file:</em></strong> If no backup is available, we have to manually reconstruct the state file by inspecting the existing infrastructure and running <code>terraform import</code> for each missing resource. This is not the ideal way.</p>
</li>
</ul>
</li>
<li><p><strong>What are the best practices for managing the Terraform state file?</strong></p>
<p> <strong>Ans:</strong> The following are some best practices that we can follow while managing the Terraform state file:</p>
<ul>
<li><p><strong>Remote State Store:</strong> Store the state file remotely, for example in Azure Blob Storage or AWS S3 (with DynamoDB for locking).</p>
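<p>A minimal sketch of an Azure remote backend (the resource names are hypothetical):</p>
<pre><code class="lang-hcl">terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstatedemo"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
    # Azure Blob Storage provides state locking via blob leases automatically
  }
}
</code></pre>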
</li>
<li><p><strong>State Locking:</strong> Enable state locking so that the state file will remain intact when multiple people try to make changes simultaneously.</p>
</li>
<li><p><strong>Backup:</strong> Enable automated backup for state file safety in case of accidental deletion.</p>
</li>
<li><p><strong>Access Control:</strong> Limit access to the state file to authorized users and services.</p>
</li>
<li><p><strong>Distinguished Environments:</strong> Finally, create a separate state file for each environment for better management and isolation.</p>
</li>
</ul>
</li>
</ol>
<p><strong>Your team is adopting a multi-cloud strategy, and you need to manage resources on both AWS and Azure using Terraform. How do you structure Terraform code to handle this?</strong></p>
<p><strong>Ans:</strong> Terraform is cloud-agnostic, which means it supports multi-cloud setups.</p>
<p><strong><em>1. Use Separate Provider Blocks:</em></strong></p>
<p>Define providers for <strong>both clouds</strong> in your root <a target="_blank" href="http://main.tf"><code>main.tf</code></a> or relevant modules:</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Azure provider</span>
<span class="hljs-string">provider</span> <span class="hljs-string">"azurerm"</span> {
  <span class="hljs-string">alias</span>   <span class="hljs-string">=</span> <span class="hljs-string">"azure"</span>
  <span class="hljs-string">features</span> {}
}

<span class="hljs-comment"># AWS provider</span>
<span class="hljs-string">provider</span> <span class="hljs-string">"aws"</span> {
  <span class="hljs-string">alias</span>  <span class="hljs-string">=</span> <span class="hljs-string">"aws"</span>
  <span class="hljs-string">region</span> <span class="hljs-string">=</span> <span class="hljs-string">"us-east-1"</span>
}
</code></pre>
<p><strong><em>2. Keep separate</em></strong> <code>backend</code> <strong><em>configurations (e.g., S3 for AWS, Azure Blob for Azure):</em></strong></p>
<pre><code class="lang-yaml"><span class="hljs-string">terraform</span> {
  <span class="hljs-string">backend</span> <span class="hljs-string">"azurerm"</span> {
    <span class="hljs-comment"># for Azure</span>
  }
}
</code></pre>
<p>For AWS setup:</p>
<pre><code class="lang-yaml"><span class="hljs-string">terraform</span> {
  <span class="hljs-string">backend</span> <span class="hljs-string">"s3"</span> {
    <span class="hljs-comment"># for AWS</span>
  }
}
</code></pre>
<p>You can also use <strong>workspaces</strong> or <strong>separate pipelines</strong> to manage them.</p>
<p><strong><em>3. Use Workspaces or CI/CD to Manage Environments:</em></strong></p>
<p>For <code>dev</code>, <code>qa</code>, <code>prod</code> across clouds, use:</p>
<ul>
<li><p>Named workspaces</p>
</li>
<li><p>Separate <code>*.tfvars</code> per environment</p>
</li>
<li><p>Environment-specific modules</p>
</li>
</ul>
<p><strong><em>4. Separate Azure and AWS infrastructure into modules:</em></strong></p>
<pre><code class="lang-yaml"><span class="hljs-string">terraform/</span>
<span class="hljs-string">├──</span> <span class="hljs-string">main.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">providers.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">aws/</span>
<span class="hljs-string">│</span>   <span class="hljs-string">└──</span> <span class="hljs-string">ec2-instance/</span>
<span class="hljs-string">│</span>       <span class="hljs-string">└──</span> <span class="hljs-string">main.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">azure/</span>
<span class="hljs-string">│</span>   <span class="hljs-string">└──</span> <span class="hljs-string">storage-account/</span>
<span class="hljs-string">│</span>       <span class="hljs-string">└──</span> <span class="hljs-string">main.tf</span>
<span class="hljs-string">├──</span> <span class="hljs-string">variables.tf</span>
<span class="hljs-string">└──</span> <span class="hljs-string">terraform.tfvars</span>
</code></pre>
<p>Then call them like this in the root module:</p>
<pre><code class="lang-yaml"><span class="hljs-string">module</span> <span class="hljs-string">"azure_storage"</span> {
  <span class="hljs-string">source</span>   <span class="hljs-string">=</span> <span class="hljs-string">"./azure/storage-account"</span>
  <span class="hljs-string">providers</span> <span class="hljs-string">=</span> {
    <span class="hljs-string">azurerm</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm.azure</span>
  }
  <span class="hljs-comment"># pass variables</span>
}

<span class="hljs-string">module</span> <span class="hljs-string">"aws_ec2"</span> {
  <span class="hljs-string">source</span>   <span class="hljs-string">=</span> <span class="hljs-string">"./aws/ec2-instance"</span>
  <span class="hljs-string">providers</span> <span class="hljs-string">=</span> {
    <span class="hljs-string">aws</span> <span class="hljs-string">=</span> <span class="hljs-string">aws.aws</span>
  }
  <span class="hljs-comment"># pass variables</span>
}
</code></pre>
<ol start="7">
<li><p><strong>You want to run some shell scripts after creating your resources with Terraform, so how would you achieve this?</strong></p>
<p> <strong>Ans:</strong> You can achieve this using provisioners. There are three types of provisioners in Terraform:</p>
<ul>
<li><p><strong><em>Local Exec Provisioner:</em></strong> Executes commands and scripts locally, on the machine running Terraform.</p>
</li>
<li><p><strong><em>Remote Exec Provisioner:</em></strong> Executes commands and scripts on the provisioned remote machine.</p>
</li>
<li><p><strong><em>File Provisioner:</em></strong> Copies files to the provisioned resources.</p>
</li>
</ul>
</li>
</ol>
<p>    In this scenario, we can combine the File and Remote Exec provisioners: the file provisioner copies the script to the remote server, and remote-exec then runs it.</p>
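<p>A sketch of that combination (paths, script name, and connection details are illustrative):</p>
<pre><code class="lang-hcl">resource "azurerm_linux_virtual_machine" "vm" {
  # ... VM configuration ...

  connection {
    type = "ssh"
    user = "azureuser"
    host = self.public_ip_address
  }

  # Copy the script to the new VM...
  provisioner "file" {
    source      = "scripts/setup.sh"
    destination = "/tmp/setup.sh"
  }

  # ...then execute it remotely
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/setup.sh",
      "sudo /tmp/setup.sh",
    ]
  }
}
</code></pre>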
<ol start="8">
<li><p><strong>Your company is looking to enable High Availability. How can you perform blue-green deployment using Terraform?</strong></p>
<p> <strong>Ans:</strong> To implement a blue-green deployment using Terraform, you can provision two identical environments—commonly referred to as <strong>blue</strong> and <strong>green</strong>—using infrastructure components like <strong>Virtual Machine Scale Sets (VMSS)</strong> in Azure or <strong>Auto Scaling Groups (ASG)</strong> in AWS.</p>
<p> Once the new (green) environment is deployed and thoroughly tested, you can switch traffic from the existing (blue) environment to the green one by updating the <strong>Load Balancer backend pool</strong> or modifying <strong>DNS records</strong>. This approach minimizes downtime and allows easy rollback if issues are detected.</p>
</li>
<li><p><strong>Your company wants to automate Terraform through CI/CD pipeline. How can you integrate Terraform with CI/CD pipelines?</strong></p>
</li>
</ol>
<p><strong>Ans:</strong> Integrating Terraform with a CI/CD pipeline involves setting up an automated workflow that handles infrastructure provisioning and management consistently. Here's a step-by-step guide to achieve this:</p>
<p><strong><em>Step 1: Store Terraform Code in a Version Control System (VCS):</em></strong></p>
<ul>
<li><p>Push your Terraform configuration files (<code>.tf</code> files) to a source code repository such as <strong>GitHub</strong>, <strong>Azure Repos</strong>, or <strong>GitLab</strong>.</p>
</li>
<li><p>Follow a branching strategy (e.g., feature, staging, production branches) to manage infrastructure changes.</p>
<p>  <strong><em>Step 2: Set Up Remote Backend for State Management:</em></strong></p>
</li>
<li><p>Use a remote backend like <strong>Azure Storage Account</strong>, <strong>AWS S3 with DynamoDB</strong>, or <strong>Terraform Cloud</strong> to manage the Terraform state file securely and support collaboration.</p>
<p>  <strong><em>Step 3: Define CI/CD Pipeline Configuration:</em></strong></p>
</li>
<li><p>Create a pipeline file (e.g., <code>azure-pipelines.yml</code>, <code>.gitlab-ci.yml</code>, or GitHub Actions workflow) with Terraform stages like:</p>
<ul>
<li><p><strong>Terraform Init</strong></p>
</li>
<li><p><strong>Terraform Validate</strong></p>
</li>
<li><p><strong>Terraform Plan</strong></p>
</li>
<li><p><strong>Terraform Apply</strong> (manual approval for production)</p>
</li>
</ul>
</li>
</ul>
<p>    <strong><em>Step 4: Configure Pipeline Agent and Environment:</em></strong></p>
<ul>
<li><p>Use a <strong>self-hosted agent</strong> or <strong>cloud-hosted agent</strong> with Terraform installed.</p>
</li>
<li><p>Set up environment variables or secrets to securely pass cloud provider credentials (e.g., Azure service principal, AWS access keys).</p>
</li>
</ul>
<p>    <strong><em>Step 5: Implement Pipeline Steps:</em></strong></p>
<p>    Typical pipeline stages:</p>
<ol>
<li><p><strong>Terraform Init</strong> – Initialize the working directory.</p>
</li>
<li><p><strong>Terraform Validate</strong> – Check for syntax or configuration errors.</p>
</li>
<li><p><strong>Terraform Plan</strong> – Show the execution plan of proposed changes.</p>
</li>
<li><p><strong>Terraform Apply</strong> – Apply the changes (with manual approval if needed).</p>
</li>
<li><p>(Optional) <strong>Terraform Destroy</strong> – Clean up resources after testing.</p>
</li>
</ol>
<p><strong><em>Step 6: Add Approval Gates and Environment Controls:</em></strong></p>
<ul>
<li><p>Use <strong>manual approvals</strong> or <strong>release gates</strong> to control changes to production.</p>
</li>
<li><p>Configure role-based access to ensure only authorized personnel can approve or apply changes.</p>
</li>
</ul>
<p><strong><em>Step 7: Monitor and Audit:</em></strong></p>
<ul>
<li><p>Enable logging and notification integrations (e.g., Slack, Microsoft Teams).</p>
</li>
<li><p>Track infrastructure changes via version control and pipeline run history.</p>
</li>
</ul>
<ol start="10">
<li><p><strong>Describe how you can use Terraform with configuration management tools like Ansible.</strong></p>
<p><strong>Ans:</strong> Terraform and Ansible complement each other in infrastructure automation workflows. While <strong>Terraform</strong> is used for <strong>provisioning infrastructure</strong>, <strong>Ansible</strong> is used for <strong>configuring the provisioned resources</strong>.</p>
<p>In our project, we use <strong>Terraform to provision virtual machines (VMs)</strong> and other infrastructure components. Once the VMs are up and running, <strong>Ansible takes over to configure these servers</strong>—installing and setting up various applications and agents such as <strong>ELK Stack, Rapid7, Arctic Wolf</strong>, and others.</p>
<p>This integration allows us to:</p>
<ul>
<li><p>Automate end-to-end provisioning and configuration.</p>
</li>
<li><p>Maintain separation of concerns—Terraform manages infrastructure, Ansible manages software.</p>
</li>
<li><p>Achieve faster and more consistent deployments across environments.</p>
</li>
</ul>
</li>
<li><p><strong>Your infrastructure contains database passwords and other sensitive information. How can you manage secrets and sensitive data in Terraform?</strong></p>
</li>
</ol>
<p><strong>Ans:</strong> Managing sensitive data securely is crucial when using Terraform. Below are best practices and methods we follow to handle secrets:</p>
<p><strong><em>Use Azure Key Vault (or other Secret Managers):</em></strong></p>
<ul>
<li><p>We primarily use <strong>Azure Key Vault</strong> to store and retrieve secrets like database passwords, API keys, and certificates securely.</p>
</li>
<li><p>Terraform can be configured to fetch secrets from Key Vault dynamically during execution.</p>
</li>
</ul>
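<p>A sketch of reading a secret from Key Vault at execution time (the secret and resource names are hypothetical):</p>
<pre><code class="lang-hcl">data "azurerm_key_vault_secret" "db_password" {
  name         = "db-password"
  key_vault_id = azurerm_key_vault.kv.id
}

# Reference it where needed, e.g.:
#   administrator_password = data.azurerm_key_vault_secret.db_password.value
</code></pre>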
<p><strong><em>Use Terraform Input Variables with</em></strong> <code>sensitive = true</code> :</p>
<ul>
<li><p>Marking variables as <code>sensitive</code> prevents Terraform from displaying them in logs or plan outputs.</p>
</li>
<li><p>Avoid hardcoding values directly into the <code>.tf</code> files.</p>
</li>
</ul>
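<p>A minimal sketch of a sensitive variable:</p>
<pre><code class="lang-hcl">variable "db_password" {
  type      = string
  sensitive = true  # value is redacted in plan/apply output
}
</code></pre>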
<p><strong><em>Use Environment Variables for Sensitive Inputs:</em></strong></p>
<ul>
<li>Environment variables (e.g., <code>TF_VAR_db_password</code>) can be used to pass sensitive data securely to Terraform at runtime.</li>
</ul>
<p><strong><em>Never Hardcode Secrets in Terraform Files:</em></strong></p>
<ul>
<li>Always use variables or external secret sources instead of embedding secrets directly in <code>.tf</code> manifests.</li>
</ul>
<p><strong><em>Protect Sensitive Files with</em></strong> <code>.gitignore</code> <strong><em>:</em></strong></p>
<ul>
<li>Add files containing secrets (like <code>*.pem</code>, <code>.env</code>, SSH keys) to your <code>.gitignore</code> to ensure they are not committed to version control.</li>
</ul>
<p><strong><em>Implement Linters and Pre-Commit Hooks:</em></strong></p>
<ul>
<li>Use tools like <code>tflint</code>, <code>tfsec</code>, or <code>pre-commit</code> hooks to detect and prevent accidental inclusion of sensitive data in code.</li>
</ul>
<ol start="12">
<li><p><strong>How can you specify dependencies between resources in Terraform?</strong></p>
<p><strong>Ans:</strong> In Terraform, dependencies between resources are typically handled automatically through <strong>implicit dependencies</strong> based on references. However, when more control is needed, you can define <strong>explicit dependencies</strong>. Here’s how both methods work:</p>
<ul>
<li><p><strong><em><mark>Implicit Dependencies </mark> (Recommended):</em></strong></p>
<p>  Terraform automatically understands the dependency graph when one resource references another.</p>
<p>  <strong>Example:</strong> A <strong>Virtual Machine</strong> that depends on a <strong>Network Interface</strong>, which in turn depends on a <strong>Subnet</strong> and <strong>Virtual Network</strong>.</p>
<pre><code class="lang-yaml">  <span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_virtual_network"</span> <span class="hljs-string">"vnet"</span> {
    <span class="hljs-string">name</span>                <span class="hljs-string">=</span> <span class="hljs-string">"my-vnet"</span>
    <span class="hljs-string">address_space</span>       <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.0.0/16"</span>]
    <span class="hljs-string">location</span>            <span class="hljs-string">=</span> <span class="hljs-string">"East US"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
  }

  <span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet"</span> <span class="hljs-string">"subnet"</span> {
    <span class="hljs-string">name</span>                 <span class="hljs-string">=</span> <span class="hljs-string">"my-subnet"</span>
    <span class="hljs-string">resource_group_name</span>  <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">virtual_network_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_virtual_network.vnet.name</span>
    <span class="hljs-string">address_prefixes</span>     <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.1.0/24"</span>]
  }

  <span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_interface"</span> <span class="hljs-string">"nic"</span> {
    <span class="hljs-string">name</span>                <span class="hljs-string">=</span> <span class="hljs-string">"my-nic"</span>
    <span class="hljs-string">location</span>            <span class="hljs-string">=</span> <span class="hljs-string">"East US"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

    <span class="hljs-string">ip_configuration</span> {
      <span class="hljs-string">name</span>                          <span class="hljs-string">=</span> <span class="hljs-string">"internal"</span>
      <span class="hljs-string">subnet_id</span>                     <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.subnet.id</span>
      <span class="hljs-string">private_ip_address_allocation</span> <span class="hljs-string">=</span> <span class="hljs-string">"Dynamic"</span>
    }
  }

  <span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_linux_virtual_machine"</span> <span class="hljs-string">"vm"</span> {
    <span class="hljs-string">name</span>                  <span class="hljs-string">=</span> <span class="hljs-string">"my-vm"</span>
    <span class="hljs-string">resource_group_name</span>   <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span>              <span class="hljs-string">=</span> <span class="hljs-string">"East US"</span>
    <span class="hljs-string">size</span>                  <span class="hljs-string">=</span> <span class="hljs-string">"Standard_B1s"</span>
    <span class="hljs-string">admin_username</span>        <span class="hljs-string">=</span> <span class="hljs-string">"azureuser"</span>
    <span class="hljs-string">network_interface_ids</span> <span class="hljs-string">=</span> [<span class="hljs-string">azurerm_network_interface.nic.id</span>]
    <span class="hljs-comment"># other configuration...</span>
  }
</code></pre>
<p>  Terraform knows the VM depends on the NIC, which depends on the subnet and VNet; all of this is handled <strong>implicitly</strong>.</p>
</li>
<li><p><strong><em><mark>Explicit Dependencies </mark> (Using</em></strong> <code>depends_on</code><strong><em>):</em></strong></p>
<p>  If there’s no direct reference but a dependency still exists, use the <code>depends_on</code> argument.</p>
<p>  <strong>Example:</strong> You want a Public IP to be created <strong>only after</strong> a Network Security Group (NSG), even though there’s no direct reference:</p>
<pre><code class="lang-yaml">  <span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_group"</span> <span class="hljs-string">"nsg"</span> {
    <span class="hljs-string">name</span>                <span class="hljs-string">=</span> <span class="hljs-string">"my-nsg"</span>
    <span class="hljs-string">location</span>            <span class="hljs-string">=</span> <span class="hljs-string">"East US"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-comment"># rules...</span>
  }

  <span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_public_ip"</span> <span class="hljs-string">"public_ip"</span> {
    <span class="hljs-string">name</span>                <span class="hljs-string">=</span> <span class="hljs-string">"my-public-ip"</span>
    <span class="hljs-string">location</span>            <span class="hljs-string">=</span> <span class="hljs-string">"East US"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">allocation_method</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Static"</span>

    <span class="hljs-string">depends_on</span> <span class="hljs-string">=</span> [<span class="hljs-string">azurerm_network_security_group.nsg</span>]
  }
</code></pre>
</li>
<li><p><strong><em>Module-Level Dependencies:</em></strong></p>
<p>  If using modules (e.g., separate modules for networking, compute, storage), use <code>depends_on</code> at the module level to control the order.</p>
<pre><code class="lang-yaml">  <span class="hljs-string">module</span> <span class="hljs-string">"network"</span> {
    <span class="hljs-string">source</span> <span class="hljs-string">=</span> <span class="hljs-string">"./modules/network"</span>
    <span class="hljs-comment"># outputs a subnet ID</span>
  }

  <span class="hljs-string">module</span> <span class="hljs-string">"vm"</span> {
    <span class="hljs-string">source</span>     <span class="hljs-string">=</span> <span class="hljs-string">"./modules/compute"</span>
    <span class="hljs-string">subnet_id</span>  <span class="hljs-string">=</span> <span class="hljs-string">module.network.subnet_id</span>
    <span class="hljs-string">depends_on</span> <span class="hljs-string">=</span> [<span class="hljs-string">module.network</span>]
  }
</code></pre>
<ul>
<li><p>Use <strong>references</strong> (like resource IDs or names) for implicit dependencies.</p>
</li>
<li><p>Use <code>depends_on</code> when there’s no direct reference but a logical dependency exists.</p>
</li>
<li><p>Manage module dependencies carefully using <code>depends_on</code> and output values.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>You have 20 resources created through Terraform, but you want to delete only one of them. Is it possible to destroy a single resource out of multiple resources using Terraform?</strong></p>
<p><strong>Ans:</strong> Yes, Terraform allows you to <strong>destroy a specific resource</strong> without affecting the rest of your infrastructure. You can use the following command to destroy a single resource:</p>
<p><code>terraform destroy -target=&lt;resource_type.resource_name&gt;</code></p>
<p><strong><em>Example:</em></strong> If you want to destroy a specific Azure Virtual Machine: <code>terraform destroy -target=azurerm_linux_virtual_machine.my_vm</code></p>
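<p>Because a targeted destroy is irreversible, it is safer to preview the change first with a targeted plan. A minimal sketch (the resource address <code>azurerm_linux_virtual_machine.my_vm</code> is illustrative):</p>

```bash
# Preview exactly what would be destroyed for this one resource
terraform plan -destroy -target=azurerm_linux_virtual_machine.my_vm

# Destroy only that resource, leaving the other 19 untouched
terraform destroy -target=azurerm_linux_virtual_machine.my_vm
```

<p>Note that HashiCorp recommends <code>-target</code> only for exceptional cases; for routine removal, deleting the resource block from the configuration and running <code>terraform apply</code> keeps code and state in sync.</p>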
</li>
<li><p><strong>How can you create a particular type of resource multiple times without duplicating the code?</strong></p>
<p><strong>Ans:</strong> You can use <code>count</code> or <code>for_each</code> in Terraform to create multiple instances of the same resource dynamically, without duplicating code.</p>
<ul>
<li><p><strong><em>Using</em></strong> <code>count</code> <strong><em>(Index-based iteration):</em></strong></p>
<p>  Use <code>count</code> when you want to create <strong>a fixed number</strong> of identical or similar resources.</p>
<p>  <strong>Example: Create 3 Azure Storage Accounts</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_storage_account"</span> <span class="hljs-string">"storage"</span> {
    <span class="hljs-string">count</span>                    <span class="hljs-string">=</span> <span class="hljs-number">3</span>
    <span class="hljs-string">name</span>                     <span class="hljs-string">=</span> <span class="hljs-string">"mystorage${count.index}"</span>
    <span class="hljs-string">resource_group_name</span>      <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span>                 <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">account_tier</span>             <span class="hljs-string">=</span> <span class="hljs-string">"Standard"</span>
    <span class="hljs-string">account_replication_type</span> <span class="hljs-string">=</span> <span class="hljs-string">"LRS"</span>
  }
</code></pre>
</li>
<li><p><strong><em>Using</em></strong> <code>for_each</code> <strong><em>(Key-based iteration):</em></strong></p>
<p>  Use <code>for_each</code> when creating <strong>named resources</strong> or when working with maps and sets.</p>
<p>  <strong>Example: Create Storage Accounts using a map</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-string">variable</span> <span class="hljs-string">"storage_accounts"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> {
      <span class="hljs-string">sa1</span> <span class="hljs-string">=</span> <span class="hljs-string">"eastus"</span>
      <span class="hljs-string">sa2</span> <span class="hljs-string">=</span> <span class="hljs-string">"westus"</span>
    }
  }

  <span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_storage_account"</span> <span class="hljs-string">"storage"</span> {
    <span class="hljs-string">for_each</span>                 <span class="hljs-string">=</span> <span class="hljs-string">var.storage_accounts</span>
    <span class="hljs-string">name</span>                     <span class="hljs-string">=</span> <span class="hljs-string">each.key</span>
    <span class="hljs-string">location</span>                 <span class="hljs-string">=</span> <span class="hljs-string">each.value</span>
    <span class="hljs-string">resource_group_name</span>      <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">account_tier</span>             <span class="hljs-string">=</span> <span class="hljs-string">"Standard"</span>
    <span class="hljs-string">account_replication_type</span> <span class="hljs-string">=</span> <span class="hljs-string">"LRS"</span>
  }
</code></pre>
</li>
</ul>
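<p>One practical difference worth noting: <code>count</code> instances are addressed by numeric index in state, while <code>for_each</code> instances are addressed by map key. A sketch of how each would be referenced (the output names are illustrative):</p>

```hcl
# count: address instances by index — reordering the inputs
# can change indices and force resource re-creation
output "first_storage_id" {
  value = azurerm_storage_account.storage[0].id
}

# for_each: address instances by key — stable even if other
# entries in the map are added or removed
output "sa1_location" {
  value = azurerm_storage_account.storage["sa1"].location
}
```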
</li>
<li><p><strong>What is the Terraform module registry?</strong></p>
<p><strong>Ans:</strong> The <strong>Terraform Module Registry</strong> is a <strong>centralized repository of reusable Terraform modules</strong> maintained by HashiCorp and the Terraform community. It provides a collection of pre-built, versioned, and well-documented modules that help you <strong>automate infrastructure provisioning</strong> without writing everything from scratch.</p>
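<p>Consuming a registry module only requires a <code>source</code> address and, ideally, a pinned <code>version</code>. A hedged sketch (the module name and its input variables are illustrative and should be checked against the module’s registry documentation):</p>

```hcl
module "vnet" {
  source  = "Azure/vnet/azurerm"   # registry address: <NAMESPACE>/<NAME>/<PROVIDER>
  version = "~> 4.0"               # pin a version range for reproducible runs

  resource_group_name = azurerm_resource_group.rg.name
  vnet_location       = azurerm_resource_group.rg.location
}
```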
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Streamline Microservices Deployment on AKS Using Azure DevOps CI/CD and GitOps]]></title><description><![CDATA[In this project, we’ll streamline the deployment of a sample voting application using Azure DevOps. Our target deployment environment will be AKS. This application is publicly available at the Docker Samples repository, and we aim to demonstrate its ...]]></description><link>https://www.devopswithritesh.in/streamline-microservices-deployment-on-aks-using-azure-devops-cicd-and-gitops</link><guid isPermaLink="true">https://www.devopswithritesh.in/streamline-microservices-deployment-on-aks-using-azure-devops-cicd-and-gitops</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[gitops]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[aks]]></category><category><![CDATA[AKS,Azure kubernetes services]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Wed, 29 Jan 2025 07:28:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1742291041536/71f9efa5-a692-4a44-b10b-928161a8d51d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this project, we’ll streamline the deployment of a sample voting application using Azure DevOps. Our target deployment environment will be AKS. This application is publicly available at <a target="_blank" href="https://github.com/dockersamples/example-voting-app">the Docker Samples repository,</a> and we aim to demonstrate its CI/CD using Azure DevOps.</p>
<p>Here we’ll use all the Azure managed services such as:</p>
<ul>
<li><p>Azure Repos for version control</p>
</li>
<li><p>ACR (Azure Container Registry) for storing Docker images</p>
</li>
<li><p>AKS (Azure Kubernetes Services) for deployment of the application to the K8S cluster</p>
</li>
</ul>
<h1 id="heading-setting-up-azure-repo">Setting up Azure Repo</h1>
<p>As a first step, we’ll import the publicly available code into Azure Repos inside our project. The steps are as follows:</p>
<h3 id="heading-import-code-from-the-public-github-repository-to-the-private-azure-repo">Import Code from the public GitHub repository to the Private Azure Repo</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737869713223/2d7d70f8-c874-4748-91a7-2a87f6014c67.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-verify-the-build-branch-is-configured-to-the-main-branch">Verify the build branch is configured to the <strong>Main branch</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737870100603/24429810-2018-4cc2-b0a5-ed337f13d355.png" alt class="image--center mx-auto" /></p>
<p>The main branch holds the latest code changes and will serve as our target build branch, from which the code is checked out</p>
<h1 id="heading-create-acr-container-registry">Create ACR (Container Registry)</h1>
<p>Here we’ll create the resources using Azure CLI for faster deployment of resources.</p>
<ul>
<li><p><strong>Login to Azure CLI using</strong></p>
<p>  <code>az login</code></p>
</li>
<li><p><strong>Create Resource Group</strong></p>
<p>  <code>az group create --name votingapp-deploy --location uksouth</code></p>
</li>
<li><p><strong>Create the container registry</strong></p>
<p>  <code>az acr create --resource-group votingapp-deploy --name votingappacr001 --sku standard --location uksouth</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737872245615/b34a7dc3-37ab-4311-aba2-791d60b91e12.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-build-pipeline-creation-continuous-integration">Build Pipeline Creation (Continuous Integration)</h1>
<p>In this project we have <strong>3 microservices</strong>, along with a database and a cache; below is the architecture:</p>
<ul>
<li><p>Vote: this component is created using <strong>Python</strong></p>
</li>
<li><p>Worker: this component is created using <strong>.Net Core</strong></p>
</li>
<li><p>Result: this is created using <strong>NodeJS</strong></p>
</li>
<li><p>DB: <strong>Postgres</strong> db is used for storing data</p>
</li>
<li><p>Caching: <strong>Redis</strong> is used as an in-memory cache</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737872513032/ea46472b-796e-4d0c-a1cd-de415b57d7b5.png" alt class="image--center mx-auto" /></p>
<p>Using this architecture, we will develop <strong>three separate build pipelines</strong> for each microservice. This approach will allow for the independent construction of applications, which aligns with the primary goal of microservice architecture.</p>
<ol>
<li><h2 id="heading-pipeline-for-result-microservice">Pipeline for Result Microservice</h2>
</li>
</ol>
<p>In the pipeline section, we’ll first connect the pipeline to <strong>Azure Repos Git</strong>, then select the repository (voting-application). In the configure section, we can pick a starter template for our application. Since our application is containerized and needs to be integrated with the container registry, we’ll select the template below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737882727421/c081b7ef-8fe9-4ecd-a464-b2e76d83123b.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Docker</span>
<span class="hljs-comment"># Build and push an image to Azure Container Registry</span>
<span class="hljs-comment"># https://docs.microsoft.com/azure/devops/pipelines/languages/docker</span>

<span class="hljs-attr">trigger:</span>
  <span class="hljs-attr">paths:</span>
    <span class="hljs-attr">include:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">result/*</span>

<span class="hljs-attr">resources:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">repo:</span> <span class="hljs-string">self</span>

<span class="hljs-attr">variables:</span>
  <span class="hljs-comment"># Container registry service connection established during pipeline creation</span>
  <span class="hljs-attr">dockerRegistryServiceConnection:</span> <span class="hljs-string">'XXXXXX-XXXXX-XXXXXXX'</span>
  <span class="hljs-attr">imageRepository:</span> <span class="hljs-string">'resultapp'</span>                <span class="hljs-comment"># only for result app</span>
  <span class="hljs-attr">containerRegistry:</span> <span class="hljs-string">'votingappacr001.azurecr.io'</span>
  <span class="hljs-attr">dockerfilePath:</span> <span class="hljs-string">'$(Build.SourcesDirectory)/result/Dockerfile'</span>   <span class="hljs-comment"># Docker file path for result app</span>
  <span class="hljs-attr">tag:</span> <span class="hljs-string">'$(Build.BuildId)'</span>

  <span class="hljs-comment"># Agent VM image name</span>
<span class="hljs-attr">pool:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">azureagent</span>

<span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">ImageBuild</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Result</span> <span class="hljs-string">Image</span>
  <span class="hljs-attr">jobs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Build</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span>
    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Docker@2</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Image</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">containerRegistry:</span> <span class="hljs-string">'$(dockerRegistryServiceConnection)'</span>
        <span class="hljs-attr">repository:</span> <span class="hljs-string">'$(imageRepository)'</span>
        <span class="hljs-attr">command:</span> <span class="hljs-string">'build'</span>
        <span class="hljs-attr">Dockerfile:</span> <span class="hljs-string">'result/Dockerfile'</span>
        <span class="hljs-attr">tags:</span> <span class="hljs-string">'$(tag)'</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">ImagePush</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">Push</span> <span class="hljs-string">Result</span> <span class="hljs-string">Image</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Push</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">Push</span> <span class="hljs-string">Image</span>
      <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Docker@2</span>
        <span class="hljs-attr">displayName:</span> <span class="hljs-string">Push</span> <span class="hljs-string">Image</span>
        <span class="hljs-attr">inputs:</span>
          <span class="hljs-attr">containerRegistry:</span> <span class="hljs-string">'$(dockerRegistryServiceConnection)'</span>
          <span class="hljs-attr">repository:</span> <span class="hljs-string">'$(imageRepository)'</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">'push'</span>
          <span class="hljs-attr">Dockerfile:</span> <span class="hljs-string">'result/Dockerfile'</span>
          <span class="hljs-attr">tags:</span> <span class="hljs-string">'$(tag)'</span>
</code></pre>
<ol start="2">
<li><h2 id="heading-pipeline-for-vote-microservice">Pipeline for Vote Microservice</h2>
<p> With this pipeline, we’ll follow a similar approach for building and pushing the vote microservice Docker image to ACR. We have implemented a <strong>path</strong> filter for triggering the pipeline, meaning the pipeline triggers whenever changes are made to the <strong>vote</strong> microservice directory. The same approach was implemented for the result microservice above.</p>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-comment"># Docker</span>
<span class="hljs-comment"># Build and push an image to Azure Container Registry</span>
<span class="hljs-comment"># https://docs.microsoft.com/azure/devops/pipelines/languages/docker</span>

<span class="hljs-attr">trigger:</span>
  <span class="hljs-attr">paths:</span>
    <span class="hljs-attr">include:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">vote/*</span>

<span class="hljs-attr">resources:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">repo:</span> <span class="hljs-string">self</span>

<span class="hljs-attr">variables:</span>
  <span class="hljs-comment"># Container registry service connection established during pipeline creation</span>
  <span class="hljs-attr">dockerRegistryServiceConnection:</span> <span class="hljs-string">'XXXXXX-XXXXXXXX-XXXXXXXXXX'</span>
  <span class="hljs-attr">imageRepository:</span> <span class="hljs-string">'votingapplication'</span>
  <span class="hljs-attr">containerRegistry:</span> <span class="hljs-string">'votingappacr002.azurecr.io'</span>
  <span class="hljs-attr">dockerfilePath:</span> <span class="hljs-string">'$(Build.SourcesDirectory)/vote/Dockerfile'</span>
  <span class="hljs-attr">tag:</span> <span class="hljs-string">'$(Build.BuildId)'</span>

  <span class="hljs-comment"># Agent VM image name</span>
<span class="hljs-attr">pool:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">azureagent</span>  <span class="hljs-comment"># Self-hosted agent</span>

<span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">ImageBuild</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Vote</span> <span class="hljs-string">Image</span>
  <span class="hljs-attr">jobs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Build</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span>
    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Docker@2</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> 
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">containerRegistry:</span> <span class="hljs-string">'$(dockerRegistryServiceConnection)'</span>
        <span class="hljs-attr">repository:</span> <span class="hljs-string">'$(imageRepository)'</span>
        <span class="hljs-attr">command:</span> <span class="hljs-string">'build'</span>
        <span class="hljs-attr">Dockerfile:</span> <span class="hljs-string">'vote/Dockerfile'</span>
        <span class="hljs-attr">tags:</span> <span class="hljs-string">'$(tag)'</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">ImagePush</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">Push</span> <span class="hljs-string">Vote</span> <span class="hljs-string">Image</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Push</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">Push</span> <span class="hljs-string">Image</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Docker@2</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">containerRegistry:</span> <span class="hljs-string">'$(dockerRegistryServiceConnection)'</span>
            <span class="hljs-attr">repository:</span> <span class="hljs-string">'$(imageRepository)'</span>
            <span class="hljs-attr">command:</span> <span class="hljs-string">'push'</span>
            <span class="hljs-attr">Dockerfile:</span> <span class="hljs-string">'vote/Dockerfile'</span>
            <span class="hljs-attr">tags:</span> <span class="hljs-string">'$(tag)'</span>
</code></pre>
<ol start="3">
<li><h2 id="heading-pipeline-for-worker-microservice">Pipeline for Worker Microservice</h2>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-comment"># Docker</span>
<span class="hljs-comment"># Build and push an image to Azure Container Registry</span>
<span class="hljs-comment"># https://docs.microsoft.com/azure/devops/pipelines/languages/docker</span>

<span class="hljs-attr">trigger:</span>
  <span class="hljs-attr">paths:</span>
    <span class="hljs-attr">include:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">worker/*</span>

<span class="hljs-attr">resources:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">repo:</span> <span class="hljs-string">self</span>

<span class="hljs-attr">variables:</span>
  <span class="hljs-comment"># Container registry service connection established during pipeline creation</span>
  <span class="hljs-attr">dockerRegistryServiceConnection:</span> <span class="hljs-string">'XXXXXXX-XXXXXXXXXXXXXXX-XXXXXXXX'</span>
  <span class="hljs-attr">imageRepository:</span> <span class="hljs-string">'workerapplication'</span>
  <span class="hljs-attr">containerRegistry:</span> <span class="hljs-string">'votingappacr003.azurecr.io'</span>
  <span class="hljs-attr">dockerfilePath:</span> <span class="hljs-string">'$(Build.SourcesDirectory)/worker/Dockerfile'</span>
  <span class="hljs-attr">tag:</span> <span class="hljs-string">'$(Build.BuildId)'</span>

  <span class="hljs-comment"># Self-hosted agent</span>
<span class="hljs-attr">pool:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">azureagent</span>

<span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Build</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Worker</span> <span class="hljs-string">Image</span>
  <span class="hljs-attr">jobs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Build</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Image</span>
    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Docker@2</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Image</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">containerRegistry:</span> <span class="hljs-string">'$(dockerRegistryServiceConnection)'</span>
        <span class="hljs-attr">repository:</span> <span class="hljs-string">'$(imageRepository)'</span>
        <span class="hljs-attr">command:</span> <span class="hljs-string">'build'</span>
        <span class="hljs-attr">Dockerfile:</span> <span class="hljs-string">'worker/Dockerfile'</span>
        <span class="hljs-attr">tags:</span> <span class="hljs-string">'$(tag)'</span>

<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Push</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">Push</span> <span class="hljs-string">Worker</span> <span class="hljs-string">Image</span>
  <span class="hljs-attr">jobs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Push</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">Push</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Image</span>
    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Docker@2</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">Push</span> <span class="hljs-string">Image</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">containerRegistry:</span> <span class="hljs-string">'$(dockerRegistryServiceConnection)'</span>
        <span class="hljs-attr">repository:</span> <span class="hljs-string">'$(imageRepository)'</span>
        <span class="hljs-attr">command:</span> <span class="hljs-string">'push'</span>
        <span class="hljs-attr">tags:</span> <span class="hljs-string">'$(tag)'</span>
</code></pre>
<h1 id="heading-docker-files-for-microservices">Docker files for Microservices</h1>
<p><strong>NOTE:</strong> These Dockerfiles are provided by the application’s developers; we have used them here only to build the application and demonstrate CI/CD.</p>
<h2 id="heading-docker-file-for-result-service">Docker file for Result Service</h2>
<p>The result application is developed in NodeJS, so the following Dockerfile is based on the node:18-slim image</p>
<pre><code class="lang-yaml"><span class="hljs-string">FROM</span> <span class="hljs-string">node:18-slim</span>

<span class="hljs-comment"># add curl for healthcheck</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">apt-get</span> <span class="hljs-string">update</span> <span class="hljs-string">&amp;&amp;</span> <span class="hljs-string">\</span>
    <span class="hljs-string">apt-get</span> <span class="hljs-string">install</span> <span class="hljs-string">-y</span> <span class="hljs-string">--no-install-recommends</span> <span class="hljs-string">curl</span> <span class="hljs-string">tini</span> <span class="hljs-string">&amp;&amp;</span> <span class="hljs-string">\</span>
    <span class="hljs-string">rm</span> <span class="hljs-string">-rf</span> <span class="hljs-string">/var/lib/apt/lists/*</span>

<span class="hljs-string">WORKDIR</span> <span class="hljs-string">/usr/local/app</span>

<span class="hljs-comment"># have nodemon available for local dev use (file watching)</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">npm</span> <span class="hljs-string">install</span> <span class="hljs-string">-g</span> <span class="hljs-string">nodemon</span>

<span class="hljs-string">COPY</span> <span class="hljs-string">package*.json</span> <span class="hljs-string">./</span>

<span class="hljs-string">RUN</span> <span class="hljs-string">npm</span> <span class="hljs-string">ci</span> <span class="hljs-string">&amp;&amp;</span> <span class="hljs-string">\</span>
 <span class="hljs-string">npm</span> <span class="hljs-string">cache</span> <span class="hljs-string">clean</span> <span class="hljs-string">--force</span> <span class="hljs-string">&amp;&amp;</span> <span class="hljs-string">\</span>
 <span class="hljs-string">mv</span> <span class="hljs-string">/usr/local/app/node_modules</span> <span class="hljs-string">/node_modules</span>

<span class="hljs-string">COPY</span> <span class="hljs-string">.</span> <span class="hljs-string">.</span>

<span class="hljs-string">ENV</span> <span class="hljs-string">PORT=80</span>
<span class="hljs-string">EXPOSE</span> <span class="hljs-number">80</span>

<span class="hljs-string">ENTRYPOINT</span> [<span class="hljs-string">"/usr/bin/tini"</span>, <span class="hljs-string">"--"</span>]
<span class="hljs-string">CMD</span> [<span class="hljs-string">"node"</span>, <span class="hljs-string">"server.js"</span>]
</code></pre>
<h2 id="heading-docker-file-for-vote-service">Docker file for Vote Service</h2>
<p>The voting service is written in Python, so the base image is python:3.11-slim</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># base defines a base stage that uses the official python runtime base image</span>
<span class="hljs-string">FROM</span> <span class="hljs-string">python:3.11-slim</span> <span class="hljs-string">AS</span> <span class="hljs-string">base</span>

<span class="hljs-comment"># Add curl for healthcheck</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">apt-get</span> <span class="hljs-string">update</span> <span class="hljs-string">&amp;&amp;</span> <span class="hljs-string">\</span>
    <span class="hljs-string">apt-get</span> <span class="hljs-string">install</span> <span class="hljs-string">-y</span> <span class="hljs-string">--no-install-recommends</span> <span class="hljs-string">curl</span> <span class="hljs-string">&amp;&amp;</span> <span class="hljs-string">\</span>
    <span class="hljs-string">rm</span> <span class="hljs-string">-rf</span> <span class="hljs-string">/var/lib/apt/lists/*</span>

<span class="hljs-comment"># Set the application directory</span>
<span class="hljs-string">WORKDIR</span> <span class="hljs-string">/usr/local/app</span>

<span class="hljs-comment"># Install our requirements.txt</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">requirements.txt</span> <span class="hljs-string">./requirements.txt</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">pip</span> <span class="hljs-string">install</span> <span class="hljs-string">--no-cache-dir</span> <span class="hljs-string">-r</span> <span class="hljs-string">requirements.txt</span>

<span class="hljs-comment"># dev defines a stage for development, where it'll watch for filesystem changes</span>
<span class="hljs-string">FROM</span> <span class="hljs-string">base</span> <span class="hljs-string">AS</span> <span class="hljs-string">dev</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">pip</span> <span class="hljs-string">install</span> <span class="hljs-string">watchdog</span>
<span class="hljs-string">ENV</span> <span class="hljs-string">FLASK_ENV=development</span>
<span class="hljs-string">CMD</span> [<span class="hljs-string">"python"</span>, <span class="hljs-string">"app.py"</span>]

<span class="hljs-comment"># final defines the stage that will bundle the application for production</span>
<span class="hljs-string">FROM</span> <span class="hljs-string">base</span> <span class="hljs-string">AS</span> <span class="hljs-string">final</span>

<span class="hljs-comment"># Copy our code from the current folder to the working directory inside the container</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">.</span> <span class="hljs-string">.</span>

<span class="hljs-comment"># Make port 80 available for links and/or publish</span>
<span class="hljs-string">EXPOSE</span> <span class="hljs-number">80</span>

<span class="hljs-comment"># Define our command to be run when launching the container</span>
<span class="hljs-string">CMD</span> [<span class="hljs-string">"gunicorn"</span>, <span class="hljs-string">"app:app"</span>, <span class="hljs-string">"-b"</span>, <span class="hljs-string">"0.0.0.0:80"</span>, <span class="hljs-string">"--log-file"</span>, <span class="hljs-string">"-"</span>, <span class="hljs-string">"--access-logfile"</span>, <span class="hljs-string">"-"</span>, <span class="hljs-string">"--workers"</span>, <span class="hljs-string">"4"</span>, <span class="hljs-string">"--keep-alive"</span>, <span class="hljs-string">"0"</span>]
</code></pre>
<h2 id="heading-docker-file-for-worker-service">Dockerfile for Worker Service</h2>
<p>The worker service is written in .NET, so Microsoft’s .NET SDK image is used as the build-stage base image, with the smaller runtime image used for the final stage.</p>
<pre><code class="lang-dockerfile"><span class="hljs-comment"># because of dotnet, we always build on amd64, and target platforms in cli</span>
<span class="hljs-comment"># dotnet doesn't support QEMU for building or running. </span>
<span class="hljs-comment"># (errors common in arm/v7 32bit) https://github.com/dotnet/dotnet-docker/issues/1537</span>
<span class="hljs-comment"># https://hub.docker.com/_/microsoft-dotnet</span>
<span class="hljs-comment"># hadolint ignore=DL3029</span>
<span class="hljs-comment"># to build for a different platform than your host, use --platform=&lt;platform&gt;</span>
<span class="hljs-comment"># for example, if you were on Intel (amd64) and wanted to build for ARM, you would use:</span>
<span class="hljs-comment"># docker buildx build --platform "linux/arm64/v8" .</span>

<span class="hljs-comment"># build compiles the program for the builder's local platform</span>
<span class="hljs-string">FROM</span> <span class="hljs-string">--platform=linux</span> <span class="hljs-string">mcr.microsoft.com/dotnet/sdk:7.0</span> <span class="hljs-string">AS</span> <span class="hljs-string">build</span>
<span class="hljs-string">ARG</span> <span class="hljs-string">TARGETPLATFORM</span>
<span class="hljs-string">ARG</span> <span class="hljs-string">TARGETARCH</span>
<span class="hljs-string">ARG</span> <span class="hljs-string">BUILDPLATFORM</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">echo</span> <span class="hljs-string">"I am running on $BUILDPLATFORM, building for $TARGETPLATFORM"</span>

<span class="hljs-string">WORKDIR</span> <span class="hljs-string">/source</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">*.csproj</span> <span class="hljs-string">.</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">dotnet</span> <span class="hljs-string">restore</span>

<span class="hljs-string">COPY</span> <span class="hljs-string">.</span> <span class="hljs-string">.</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">dotnet</span> <span class="hljs-string">publish</span> <span class="hljs-string">-c</span> <span class="hljs-string">release</span> <span class="hljs-string">-o</span> <span class="hljs-string">/app</span> <span class="hljs-string">--self-contained</span> <span class="hljs-literal">false</span> <span class="hljs-string">--no-restore</span>

<span class="hljs-comment"># app image</span>
<span class="hljs-string">FROM</span> <span class="hljs-string">mcr.microsoft.com/dotnet/runtime:7.0</span>
<span class="hljs-string">WORKDIR</span> <span class="hljs-string">/app</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">--from=build</span> <span class="hljs-string">/app</span> <span class="hljs-string">.</span>
<span class="hljs-string">ENTRYPOINT</span> [<span class="hljs-string">"dotnet"</span>, <span class="hljs-string">"Worker.dll"</span>]
</code></pre>
<p>With this, the build process is concluded and the image is stored in ACR as shown below</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737961509517/49611cbb-05f5-4782-992f-81c67bb9441c.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-continuous-delivery-gitops">Continuous Delivery (GitOps)</h1>
<p>The continuous delivery part is handled by ArgoCD, which continuously watches the <strong>Kubernetes manifests</strong> for changes. Once any change is found, it deploys the new build to AKS.</p>
<p>NOTE: The project contains deployment files, service files, DB deployment files, and DB service files, which will be updated with the latest Docker image using a shell script. This shell script is part of our CI pipeline in Azure DevOps and is executed after the push stage. It fetches the image details and writes them into the Kubernetes manifest files.</p>
<h3 id="heading-why-gitops">Why GitOps</h3>
<p>GitOps provides continuous reconciliation: it keeps tracking the differences between the K8S cluster and the K8S manifest files. We can put all K8S-related files in Git and let ArgoCD monitor them, giving us reliable cluster management. It also makes sure manifest drift is automatically detected and fixed.</p>
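<p>For reference, drift correction in ArgoCD is enabled per application through its sync policy. A minimal Application manifest might look like the sketch below; the repo URL is a placeholder and the path is assumed from this project’s layout:</p>
<pre><code class="lang-yaml">apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: voting-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://dev.azure.com/&lt;org&gt;/&lt;project&gt;/_git/&lt;repo&gt;
    targetRevision: HEAD
    path: k8s-specifications
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true      # remove cluster resources deleted from Git
      selfHeal: true   # revert manual changes (drift) in the cluster
</code></pre>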
<p>To achieve this entire delivery process we have to follow the following points:</p>
<ul>
<li><p>Create the AKS cluster and log into it</p>
</li>
<li><p>Install ArgoCD inside the AKS cluster</p>
</li>
<li><p>Configure ArgoCD within the K8S cluster</p>
</li>
<li><p>Prepare the shell script to update the repository with the latest image pushed to ACR</p>
</li>
</ul>
<ol>
<li><h2 id="heading-aks-cluster-creation-and-login">AKS Cluster Creation and login</h2>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738044848064/fc8ca688-7bc8-4cbd-86a4-77703426e087.png" alt class="image--center mx-auto" /></p>
<p>The cluster has been created:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738048113049/8f0935a4-6363-419d-a693-71b65da2a21d.png" alt class="image--center mx-auto" /></p>
<ol start="2">
<li><h3 id="heading-log-into-the-cluster">Log into the cluster</h3>
<pre><code class="lang-yaml"> <span class="hljs-string">az</span> <span class="hljs-string">aks</span> <span class="hljs-string">get-credentials</span> <span class="hljs-string">--resource-group</span> <span class="hljs-string">&lt;resource-group-name&gt;</span> <span class="hljs-string">--name</span> <span class="hljs-string">&lt;aks-cluster-name&gt;</span>
</code></pre>
<p> We can use the above command to log into the cluster and get access via the CLI. Once done, the AKS cluster’s credentials are merged into our local kubeconfig, and we can manage the cluster from the local machine itself.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738048514514/c02db2cf-ba23-4da5-84e0-f0fde603460f.png" alt class="image--center mx-auto" /></p>
</li>
<li><h3 id="heading-argocd-installation-in-aks-cluster">ArgoCD Installation in AKS Cluster</h3>
<p> Installing ArgoCD is pretty straightforward: just go to the <a target="_blank" href="https://argo-cd.readthedocs.io/en/stable/getting_started/">official documentation</a> or run the commands below.</p>
<p> <code>kubectl create namespace argocd</code></p>
<p> <code>kubectl apply -n argocd -f</code> <a target="_blank" href="https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml"><code>https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml</code></a></p>
</li>
</ol>
<h1 id="heading-configure-argocd">Configure ArgoCD</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738118631978/eee5e493-f4f9-4712-94a0-93e71648a67e.png" alt class="image--center mx-auto" /></p>
<p>Post-installation, you can see that the ArgoCD pods are now running.</p>
<p>Now, to configure ArgoCD, we have to take the following steps:</p>
<ul>
<li><h3 id="heading-login-to-argoc">Login to ArgoCD:</h3>
<p>  To log in to ArgoCD, first get the secret values with the following commands:</p>
<p>  <code>kubectl get secrets -n argocd</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738118950145/b403b9f0-4574-4f67-8ef0-5775157e262e.png" alt class="image--center mx-auto" /></p>
<p>  <code>kubectl edit secret argocd-initial-admin-secret -n argocd</code></p>
<p>  When the <code>kubectl edit</code> command is executed, it opens the secret as shown below, with the values in base64-encoded format.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738119199195/65213fa6-870d-4f20-a885-fab806df8772.png" alt class="image--center mx-auto" /></p>
<p>  And to decode this you can use the following command</p>
<p>  <code>echo &lt;secret values&gt; | base64 -d</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738119406983/18098f45-9e59-4ceb-b778-dea0d7b7295b.png" alt class="image--center mx-auto" /></p>
<p>  Now we have the admin secret value which will allow us to access the ArgoCD UI</p>
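<p>  As a side note, the fetch-and-decode can be done in one go without opening an editor. Below is a sketch: the kubectl one-liner uses the standard initial-admin secret name, and the decode step is illustrated with a stand-in value, not the real password:</p>
<pre><code class="lang-bash"># one-liner: read the password field from the secret and decode it
# kubectl -n argocd get secret argocd-initial-admin-secret \
#     -o jsonpath='{.data.password}' | base64 -d

# the decode step itself, illustrated with a stand-in value:
ENCODED=$(printf 's3cretPass' | base64)   # what you would copy out of kubectl edit
printf '%s' "$ENCODED" | base64 -d        # prints: s3cretPass
</code></pre>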
</li>
<li><h3 id="heading-access-argocd-on-browser">Access ArgoCD on Browser</h3>
<p>  Now, to access ArgoCD on the browser, we need to expose the ArgoCD in LoadBalancer mode.</p>
<p>  Get the service details:</p>
<p>  <code>kubectl get svc -n argocd</code>. Below is the argocd-server service that we need to expose with a LoadBalancer; it is currently of type <strong>ClusterIP</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738119709630/ff7cdcfb-31cd-46c5-a3b9-555d42cad354.png" alt class="image--center mx-auto" /></p>
<p>  Now, to expose it externally, we have to edit the service and change the service type from <strong>ClusterIP to LoadBalancer</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738119912976/8de3ff72-072c-4462-abd2-b44daffc59eb.png" alt class="image--center mx-auto" /></p>
<p>  The above type is now modified to LoadBalancer</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738121255810/83314464-c0d7-4fd5-b5d9-1a3e2c828a43.png" alt class="image--center mx-auto" /></p>
<p>  Now you can see it has changed to <strong>LoadBalancer</strong>, and we can use the external IP to directly access ArgoCD.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738121378203/ba44bf75-1adf-43e5-8d34-d0d74ec19a60.png" alt class="image--center mx-auto" /></p>
<p>  Now we are able to access ArgoCD.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738121352107/568943f2-fabe-48bb-9059-3fe0640fbf69.png" alt class="image--center mx-auto" /></p>
<p>  We can log in with the username as <strong>admin</strong> and password as the decoded secret.</p>
</li>
</ul>
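<p>For reference, the relevant portion of the argocd-server Service spec after the edit looks like the fragment below (ports shown are the ArgoCD defaults):</p>
<pre><code class="lang-yaml">spec:
  type: LoadBalancer   # changed from ClusterIP
  ports:
    - name: https
      port: 443
      targetPort: 8080
</code></pre>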
<h2 id="heading-connecting-argocd-to-azure-repo">Connecting ArgoCD to Azure Repo</h2>
<p>ArgoCD needs access to the Azure Repos where the K8S manifest files are present; only then will it be able to read the manifests. To establish the connection, we need an access token with only <strong>Read permission</strong>, as ArgoCD only needs to read the manifests. I already have an access token ready that I can reuse.</p>
<p>Once the token is ready, move to <strong>Settings —&gt; Repository</strong> in ArgoCD and click on Connect Repo</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738122156073/c939b288-d384-4f64-adbb-fe1e95e9e23f.png" alt class="image--center mx-auto" /></p>
<p>Fill in the necessary repository details to clone the repository. Here I have replaced <strong>Azure DevOps Organization Name</strong> with the <strong>Access Token</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738122444797/15954eb9-e950-437d-97dd-928dd8707811.png" alt class="image--center mx-auto" /></p>
<p>And now, ArgoCD is connected to my Azure Repo</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738122552953/b429c583-6740-46ec-8ba2-2790ae1a7328.png" alt class="image--center mx-auto" /></p>
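<p>Alternatively, the same repository connection can be declared as a Kubernetes Secret that ArgoCD picks up automatically. Below is a sketch; the secret name is arbitrary and the URL and token values are placeholders:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Secret
metadata:
  name: azure-repo-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://dev.azure.com/&lt;org&gt;/&lt;project&gt;/_git/&lt;repo&gt;
  username: git                      # any non-empty value works with a PAT
  password: &lt;personal-access-token&gt;
</code></pre>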
<h3 id="heading-adding-manifests-to-argocd">Adding Manifests to ArgoCD</h3>
<p>Now, to let ArgoCD know where to pick up the deployment and service manifests, click on <strong>Applications —&gt; Create Application</strong> and follow the pages.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738123141967/9d3f6b9d-92fc-462f-b5db-61828fb7a159.png" alt class="image--center mx-auto" /></p>
<p>Below is the important section, where we tell ArgoCD which directory inside the repository to watch for changes; in our case it is the K8S Specification directory, as shown below. Fill it in and click on Create.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738123260611/5d20a608-259a-4543-b9f6-2cc18659aace.png" alt class="image--center mx-auto" /></p>
<p>Now, finally, ArgoCD is configured successfully and has synced the deployment files.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738123599971/d985fe08-9f1e-4de5-9c71-5a9b51e8c025.png" alt class="image--center mx-auto" /></p>
<p>From now on, if any changes are made to the K8S Specification directory, ArgoCD will detect them and update the K8S cluster.</p>
<h2 id="heading-establish-connectivity-between-deployment-manifests-and-acr">Establish Connectivity between Deployment manifests and ACR</h2>
<p>This step is necessary because the K8S deployment should be able to fetch the image from our <strong>Private Registry.</strong> Below is a kubectl command that will create a secret in Kubernetes, which we can then refer to in our deployment manifests.</p>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">create</span> <span class="hljs-string">secret</span> <span class="hljs-string">docker-registry</span> <span class="hljs-string">&lt;secret</span> <span class="hljs-string">name&gt;</span> <span class="hljs-string">\</span>
    <span class="hljs-string">--namespace</span> <span class="hljs-string">default</span> <span class="hljs-string">\</span>
    <span class="hljs-string">--docker-server=&lt;RegistryName&gt;.azurecr.io</span> <span class="hljs-string">\</span>
    <span class="hljs-string">--docker-username=&lt;User</span> <span class="hljs-string">name&gt;</span> <span class="hljs-string">\</span>
    <span class="hljs-string">--docker-password=&lt;Registry</span> <span class="hljs-string">Password&gt;</span>
</code></pre>
<p>Below is how the secret is referred to in deployment manifests</p>
<pre><code class="lang-yaml">    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">votingappacr001.azurecr.io/votingapplication:89</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">vote</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">vote</span>
      <span class="hljs-attr">imagePullSecrets:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">acrsecret</span> <span class="hljs-comment"># this is the secret name which is created above</span>
</code></pre>
<h2 id="heading-integrating-azure-devops-for-auto-updating-k8s-manifests">Integrating Azure DevOps for Auto-updating K8S Manifests</h2>
<p>To achieve this, we’ll add another stage to the Azure CI pipeline, which will execute a shell script to update the deployment files. This stage will be added to all 3 pipelines for the respective services.</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Update</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">Update</span> <span class="hljs-string">vote</span> <span class="hljs-string">app</span> <span class="hljs-string">Deployment</span> <span class="hljs-string">Manifests</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Update</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">Update</span> <span class="hljs-string">vote</span> <span class="hljs-string">app</span> <span class="hljs-string">Deployment</span> <span class="hljs-string">Manifests</span>
      <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">ShellScript@2</span>
        <span class="hljs-attr">inputs:</span>
          <span class="hljs-attr">scriptPath:</span> <span class="hljs-string">'scripts/updatek8manifests.sh'</span>
          <span class="hljs-attr">args:</span> <span class="hljs-string">'vote $(imageRepository) $(tag)'</span>
        <span class="hljs-attr">displayName:</span> <span class="hljs-string">Update</span> <span class="hljs-string">vote</span> <span class="hljs-string">app</span> <span class="hljs-string">Deployment</span> <span class="hljs-string">Manifests</span>
</code></pre>
<h2 id="heading-k8s-deployment-manifest-update-script">K8S Deployment Manifest update Script</h2>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>

<span class="hljs-string">set</span> <span class="hljs-string">-x</span>

<span class="hljs-comment"># Set the repository URL</span>
<span class="hljs-string">REPO_URL="https://azuredevopsprep@dev.azure.com/az-devops-prep/voting-application/_git/voting-application"</span>

<span class="hljs-comment"># Clone the git repository into the /tmp directory</span>
<span class="hljs-string">git</span> <span class="hljs-string">clone</span> <span class="hljs-string">"$REPO_URL"</span> <span class="hljs-string">/tmp/temp_repo</span>

<span class="hljs-comment"># Navigate into the cloned repository directory</span>
<span class="hljs-string">cd</span> <span class="hljs-string">/tmp/temp_repo</span>

<span class="hljs-comment"># Make changes to the Kubernetes manifest file(s)</span>
<span class="hljs-comment"># For example, let's say you want to change the image tag in a deployment.yaml file</span>
<span class="hljs-string">sed</span> <span class="hljs-string">-i</span> <span class="hljs-string">"s|image:.*|image: votingappacr001.azurecr.io/$2:$3|g"</span> <span class="hljs-string">k8s-specifications/$1-deployment.yaml</span>

<span class="hljs-comment"># Add the modified files</span>
<span class="hljs-string">git</span> <span class="hljs-string">add</span> <span class="hljs-string">.</span>

<span class="hljs-comment"># Commit the changes</span>
<span class="hljs-string">git</span> <span class="hljs-string">commit</span> <span class="hljs-string">-m</span> <span class="hljs-string">"Update Kubernetes manifest"</span>

<span class="hljs-comment"># Push the changes back to the repository</span>
<span class="hljs-string">git</span> <span class="hljs-string">push</span>

<span class="hljs-comment"># Cleanup: remove the temporary directory</span>
<span class="hljs-string">rm</span> <span class="hljs-string">-rf</span> <span class="hljs-string">/tmp/temp_repo</span>
</code></pre>
<p>Here <strong>$1, $2, and $3</strong> represent the arguments passed in the Update stage in the pipeline i.e. <code>args: 'vote $(imageRepository) $(tag)'</code> respectively.</p>
<p>This script will update the deployment manifests with the latest image tag after the image is pushed to ACR; when ArgoCD detects the change, it will redeploy the pods with the latest image.</p>
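<p>To see what the <code>sed</code> line does, here is a local dry run against a throwaway copy of a single manifest line (the file path and tag values are made up for illustration):</p>
<pre><code class="lang-bash"># stand-in for the image line in k8s-specifications/vote-deployment.yaml
printf '      - image: votingappacr001.azurecr.io/votingapplication:88\n' | tee /tmp/vote-deployment.yaml

# with $1=vote, $2=votingapplication, $3=89 the script's sed line expands to:
sed -i 's|image:.*|image: votingappacr001.azurecr.io/votingapplication:89|g' /tmp/vote-deployment.yaml

cat /tmp/vote-deployment.yaml   # the line now carries tag :89
</code></pre>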
<h1 id="heading-conclusion">Conclusion</h1>
<p>With this, we have implemented <strong>Continuous Integration and Continuous Delivery</strong> for all 3 microservices, along with GitOps via ArgoCD.</p>
]]></content:encoded></item><item><title><![CDATA[Improving Infrastructure as Code (IAC) Using DevOps and CI/CD | Multi-Environment Deployment]]></title><description><![CDATA[In this project, we’ll leverage the Azure DevOps pipeline for automating the infrastructure deployment. We are also going to create multiple environment such as Dev, QA, Staging, and Prod having identical resources in each environment via Terraform.
...]]></description><link>https://www.devopswithritesh.in/improving-infrastructure-as-code-iac-using-devops-and-cicd-multi-environment-deployment</link><guid isPermaLink="true">https://www.devopswithritesh.in/improving-infrastructure-as-code-iac-using-devops-and-cicd-multi-environment-deployment</guid><category><![CDATA[Azure]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[TerraformwithAzure]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[Azure Pipelines]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Infrastructure management]]></category><category><![CDATA[#Iac #terraform #devops #aws]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[IaC (Infrastructure as Code)]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Fri, 27 Dec 2024 10:02:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735293593167/ce7ba03a-6145-40f3-a54b-d457614fdd8c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this project, we’ll leverage the Azure DevOps pipeline for automating the infrastructure deployment. We are also going to create multiple environment such as Dev, QA, Staging, and Prod having identical resources in each environment via Terraform.</p>
<h1 id="heading-creating-service-connection">Creating Service Connection</h1>
<p>This build and release pipeline will plan and apply Terraform manifests, as part of which it will generate the state files and create resources that are on Azure. To enable this communication between the pipeline and Azure Cloud, we need to establish a Service Connection which can be done using the following steps.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734623792178/75a54998-c4bc-43ec-8a07-218265074018.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-ci-build-pipeline">CI-Build Pipeline</h1>
<h2 id="heading-task-1">Task 1</h2>
<p>In this step, the Terraform manifests, which are essential for infrastructure provisioning, are copied from the system's default directory to the build artifact directory. This ensures that the configuration files are organized and accessible for downstream pipeline stages.</p>
<p>The Continuous Integration (CI) pipeline is a critical part of the DevOps lifecycle, enabling seamless integration and testing of code changes. In this pipeline, we handle the preparation of Terraform manifests for deployment and ensure they are readily available for subsequent release pipelines.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Starter pipeline</span>
<span class="hljs-comment"># Start with a minimal pipeline that you can customize to build and deploy your code.</span>
<span class="hljs-comment"># Add steps that build, run tests, deploy, and more:</span>
<span class="hljs-comment"># https://aka.ms/yaml</span>

<span class="hljs-attr">trigger:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">main</span>

<span class="hljs-attr">pool:</span> <span class="hljs-string">Default</span>
  <span class="hljs-comment">#vmImage: ubuntu-latest</span>
<span class="hljs-attr">stages:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">GetTerraformManifests</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Stage</span>
    <span class="hljs-attr">jobs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">FetchTerraformManifests</span>
        <span class="hljs-attr">steps:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">bash:</span> <span class="hljs-string">echo</span> <span class="hljs-string">"contents in working directory"</span><span class="hljs-string">;</span> <span class="hljs-string">ls</span> <span class="hljs-string">-lrth</span> <span class="hljs-string">$(System.DefaultWorkingDirectory)</span>

          <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">CopyFiles@2</span>
            <span class="hljs-attr">inputs:</span>          <span class="hljs-comment">#/home/ubuntu/myagent/_work/1/s/16-Azure-IAC-DevOps/terraform-manifests</span>
              <span class="hljs-attr">SourceFolder:</span> <span class="hljs-string">'$(System.DefaultWorkingDirectory)/16-Azure-IAC-DevOps/'</span>
              <span class="hljs-attr">Contents:</span> <span class="hljs-string">'**'</span>
              <span class="hljs-attr">TargetFolder:</span> <span class="hljs-string">'$(System.DefaultWorkingDirectory)'</span>

          <span class="hljs-bullet">-</span> <span class="hljs-attr">bash:</span> <span class="hljs-string">echo</span> <span class="hljs-string">"contents in working directory"</span><span class="hljs-string">;</span> <span class="hljs-string">ls</span> <span class="hljs-string">-lrth</span> <span class="hljs-string">$(System.DefaultWorkingDirectory)</span>
            <span class="hljs-attr">displayName:</span> <span class="hljs-string">List</span> <span class="hljs-string">Contents</span> <span class="hljs-string">post</span> <span class="hljs-string">copying</span>

          <span class="hljs-comment"># Copy Terraform files to the Artifact Staging Directory</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">CopyFiles@2</span>
            <span class="hljs-attr">displayName:</span> <span class="hljs-string">Copy</span> <span class="hljs-string">Terraform</span> <span class="hljs-string">Manifests</span> <span class="hljs-string">to</span> <span class="hljs-string">Staging</span> <span class="hljs-string">Directory</span>
            <span class="hljs-attr">inputs:</span>
              <span class="hljs-attr">SourceFolder:</span> <span class="hljs-string">'$(System.DefaultWorkingDirectory)/terraform-manifests'</span>
              <span class="hljs-attr">Contents:</span> <span class="hljs-string">'**/*'</span>  <span class="hljs-comment"># Copy all files and subdirectories</span>
              <span class="hljs-attr">TargetFolder:</span> <span class="hljs-string">'$(Build.ArtifactStagingDirectory)'</span>

          <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">PublishBuildArtifacts@1</span>
            <span class="hljs-attr">displayName:</span> <span class="hljs-string">Publish</span> <span class="hljs-string">Manifests</span> <span class="hljs-string">to</span> <span class="hljs-string">Release</span> <span class="hljs-string">pipeline</span>
            <span class="hljs-attr">inputs:</span>
              <span class="hljs-attr">PathtoPublish:</span> <span class="hljs-string">'$(Build.ArtifactStagingDirectory)'</span>
              <span class="hljs-attr">ArtifactName:</span> <span class="hljs-string">'terraform-manifests'</span>
              <span class="hljs-attr">publishLocation:</span> <span class="hljs-string">'Container'</span>
</code></pre>
<p>This pipeline is a foundational example that demonstrates how to fetch, process, and publish Terraform manifests for use in subsequent stages like deployment. Below is a detailed breakdown of each section of the YAML pipeline.</p>
<hr />
<h4 id="heading-trigger-section"><strong>Trigger Section</strong></h4>
<pre><code class="lang-yaml"><span class="hljs-attr">trigger:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
</code></pre>
<ul>
<li><strong>Purpose:</strong><br />  The pipeline is configured to trigger automatically whenever there is a commit to the <strong>main</strong> branch.</li>
</ul>
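<p>If the Terraform code lives alongside other content in the repository, the branch trigger can be combined with a path filter so only Terraform changes start a run. A sketch, assuming the folder name used in this project:</p>
<pre><code class="lang-yaml">trigger:
  branches:
    include:
      - main
  paths:
    include:
      - 16-Azure-IAC-DevOps/*   # only changes under the Terraform folder trigger the pipeline
</code></pre>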
<hr />
<h4 id="heading-pool-section"><strong>Pool Section</strong></h4>
<pre><code class="lang-yaml"><span class="hljs-attr">pool:</span> <span class="hljs-string">Default</span>
</code></pre>
<ul>
<li><strong>Purpose:</strong><br />  The pipeline runs on the default agent pool. I have added my self-hosted agent to the default pool.</li>
</ul>
<hr />
<h4 id="heading-stages"><strong>Stages</strong></h4>
<p>The pipeline contains a single stage, <code>GetTerraformManifests</code>, designed to build and prepare Terraform manifests.</p>
<hr />
<h5 id="heading-stage-getterraformmanifests"><strong>Stage: GetTerraformManifests</strong></h5>
<pre><code class="lang-yaml"><span class="hljs-attr">stages:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">GetTerraformManifests</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> <span class="hljs-string">Stage</span>
</code></pre>
<ul>
<li><strong>Purpose:</strong><br />  The primary stage to gather and publish Terraform manifests required for deployment.</li>
</ul>
<hr />
<h5 id="heading-job-fetchterraformmanifests"><strong>Job: FetchTerraformManifests</strong></h5>
<pre><code class="lang-yaml"><span class="hljs-attr">jobs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">FetchTerraformManifests</span>
</code></pre>
<ul>
<li><strong>Purpose:</strong><br />  A single job under the stage that defines the steps to fetch and publish the Terraform manifests.</li>
</ul>
<hr />
<h4 id="heading-steps-breakdown"><strong>Steps Breakdown</strong></h4>
<ol>
<li><p><strong>List Initial Directory Contents</strong></p>
<pre><code class="lang-yaml"> <span class="hljs-bullet">-</span> <span class="hljs-attr">bash:</span> <span class="hljs-string">echo</span> <span class="hljs-string">"contents in working directory"</span><span class="hljs-string">;</span> <span class="hljs-string">ls</span> <span class="hljs-string">-lrth</span> <span class="hljs-string">$(System.DefaultWorkingDirectory)</span>
</code></pre>
<ul>
<li><strong>Explanation:</strong><br />  Outputs the contents of the working directory before any processing. This helps to verify the initial state and debug potential issues.</li>
</ul>
</li>
</ol>
<hr />
<ol start="2">
<li><p><strong>Copy Terraform Manifests from Source to Working Directory</strong></p>
<pre><code class="lang-yaml"> <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">CopyFiles@2</span>
   <span class="hljs-attr">inputs:</span>
     <span class="hljs-attr">SourceFolder:</span> <span class="hljs-string">'$(System.DefaultWorkingDirectory)/16-Azure-IAC-DevOps/'</span>
     <span class="hljs-attr">Contents:</span> <span class="hljs-string">'**'</span>
     <span class="hljs-attr">TargetFolder:</span> <span class="hljs-string">'$(System.DefaultWorkingDirectory)'</span>
</code></pre>
<ul>
<li><p><strong>Purpose:</strong></p>
<ul>
<li><p>Copies all files from the specified source folder (<code>16-Azure-IAC-DevOps</code>) to the working directory.</p>
</li>
<li><p>Ensures Terraform files are available for further processing.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<hr />
<ol start="3">
<li><p><strong>List Directory Contents Post-Copying</strong></p>
<pre><code class="lang-yaml"> <span class="hljs-bullet">-</span> <span class="hljs-attr">bash:</span> <span class="hljs-string">echo</span> <span class="hljs-string">"contents in working directory"</span><span class="hljs-string">;</span> <span class="hljs-string">ls</span> <span class="hljs-string">-lrth</span> <span class="hljs-string">$(System.DefaultWorkingDirectory)</span>
   <span class="hljs-attr">displayName:</span> <span class="hljs-string">List</span> <span class="hljs-string">Contents</span> <span class="hljs-string">post</span> <span class="hljs-string">copying</span>
</code></pre>
<ul>
<li><strong>Purpose:</strong><br />  Outputs the contents of the directory after copying to verify the successful transfer of files.</li>
</ul>
</li>
</ol>
<hr />
<ol start="4">
<li><p><strong>Copy Terraform Files to Staging Directory</strong></p>
<pre><code class="lang-yaml"> <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">CopyFiles@2</span>
   <span class="hljs-attr">displayName:</span> <span class="hljs-string">Copy</span> <span class="hljs-string">Terraform</span> <span class="hljs-string">Manifests</span> <span class="hljs-string">to</span> <span class="hljs-string">Staging</span> <span class="hljs-string">Directory</span>
   <span class="hljs-attr">inputs:</span>
     <span class="hljs-attr">SourceFolder:</span> <span class="hljs-string">'$(System.DefaultWorkingDirectory)/terraform-manifests'</span>
     <span class="hljs-attr">Contents:</span> <span class="hljs-string">'**/*'</span>  <span class="hljs-comment"># Copy all files and subdirectories</span>
     <span class="hljs-attr">TargetFolder:</span> <span class="hljs-string">'$(Build.ArtifactStagingDirectory)'</span>
</code></pre>
<ul>
<li><p><strong>Purpose:</strong></p>
<ul>
<li><p>Copies all Terraform manifests to the <code>Artifact Staging Directory</code>.</p>
</li>
<li><p>Prepares the files for publishing as build artifacts.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<hr />
<ol start="5">
<li><p><strong>Publish Terraform Manifests as Build Artifacts</strong></p>
<pre><code class="lang-yaml"> <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">PublishBuildArtifacts@1</span>
   <span class="hljs-attr">displayName:</span> <span class="hljs-string">Publish</span> <span class="hljs-string">Manifests</span> <span class="hljs-string">to</span> <span class="hljs-string">Release</span> <span class="hljs-string">pipeline</span>
   <span class="hljs-attr">inputs:</span>
     <span class="hljs-attr">PathtoPublish:</span> <span class="hljs-string">'$(Build.ArtifactStagingDirectory)'</span>
     <span class="hljs-attr">ArtifactName:</span> <span class="hljs-string">'terraform-manifests'</span>
     <span class="hljs-attr">publishLocation:</span> <span class="hljs-string">'Container'</span>
</code></pre>
<ul>
<li><p><strong>Purpose:</strong></p>
<ul>
<li><p>Publishes the Terraform manifests from the staging directory as build artifacts.</p>
</li>
<li><p>The artifact is named <code>terraform-manifests</code> and is made available in the container.</p>
</li>
<li><p>These artifacts can be used in subsequent <strong>Release Pipelines</strong> for deploying infrastructure.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<hr />
<h3 id="heading-key-benefits-of-this-pipeline"><strong>Key Benefits of This Pipeline</strong></h3>
<ol>
<li><p><strong>Streamlined Artifact Management:</strong><br /> Automates the process of gathering and preparing Terraform manifests for deployment.</p>
</li>
<li><p><strong>Modularity:</strong><br /> Artifacts published here can be reused across multiple release pipelines, enabling consistent and efficient deployment workflows.</p>
</li>
<li><p><strong>Traceability:</strong><br /> Each step is logged and auditable, ensuring that the process is transparent and easy to troubleshoot.</p>
</li>
<li><p><strong>Scalability:</strong><br /> Provides a base pipeline that can be extended to include additional stages, such as testing or multi-environment deployments.</p>
</li>
</ol>
<p>This pipeline is a crucial first step in integrating Terraform workflows into your CI/CD processes, ensuring that the infrastructure-as-code artifacts are always ready for deployment.</p>
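<p>Putting the five steps together, the whole build job can be sketched as a single YAML snippet. This is an illustrative assembly of the tasks shown above, not a verbatim export of the pipeline; adjust the folder names to your repository layout.</p>
<pre><code class="lang-yaml">steps:
  # 1. Verify the initial state of the working directory
  - bash: echo "contents in working directory"; ls -lrth $(System.DefaultWorkingDirectory)
    displayName: List Contents pre copying

  # 2. Copy Terraform manifests from the source folder
  - task: CopyFiles@2
    inputs:
      SourceFolder: '$(System.DefaultWorkingDirectory)/16-Azure-IAC-DevOps/'
      Contents: '**'
      TargetFolder: '$(System.DefaultWorkingDirectory)'

  # 3. Verify the copy succeeded
  - bash: echo "contents in working directory"; ls -lrth $(System.DefaultWorkingDirectory)
    displayName: List Contents post copying

  # 4. Stage the manifests for publishing
  - task: CopyFiles@2
    displayName: Copy Terraform Manifests to Staging Directory
    inputs:
      SourceFolder: '$(System.DefaultWorkingDirectory)/terraform-manifests'
      Contents: '**/*'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'

  # 5. Publish the staged manifests as a build artifact
  - task: PublishBuildArtifacts@1
    displayName: Publish Manifests to Release pipeline
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'terraform-manifests'
      publishLocation: 'Container'
</code></pre>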
<h1 id="heading-release-pipeline">Release Pipeline</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734490288202/0721b925-4847-4864-b13f-c0bcbd73f665.png" alt class="image--center mx-auto" /></p>
<p>The release pipeline leverages the artifacts created during the CI process to deploy identical resources in multiple environments—<strong>Dev</strong>, <strong>QA</strong>, <strong>Staging</strong>, and <strong>Production</strong>. Each environment is isolated and configured with unique settings to reflect the appropriate stage of the application lifecycle.</p>
<p><strong>NOTE:</strong> By default, the creation of a <strong>Release Pipeline</strong> is disabled because it is considered a legacy (classic) feature. To create a release pipeline:</p>
<ol>
<li><p>Go to your <strong>Project Settings</strong>.</p>
</li>
<li><p>Under the <strong>Pipelines</strong> section, select <strong>Settings</strong>.</p>
</li>
<li><p>Scroll down to the <strong>Disable creation of classic release pipelines</strong> setting and turn it off.</p>
</li>
</ol>
<h2 id="heading-configure-artifact-source">Configure Artifact Source</h2>
<p>In the release pipeline, we first have to configure the source of artifacts, i.e., the output of our build pipeline.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734762720806/92757741-b899-4eb9-b547-3e4b17288f5e.png" alt class="image--center mx-auto" /></p>
<p>Then enable the continuous deployment trigger as shown below</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734763014385/2d6929a3-f502-4ab7-81be-2f9356f7f6ec.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-dev-environment">Dev Environment</h2>
<h3 id="heading-release-pipeline-for-dev">Release Pipeline for Dev</h3>
<ol>
<li>Configure the agent job where you define the pool and other configurations</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734763675999/6b0ed081-2406-41e1-a80b-188a12640f9f.png" alt class="image--center mx-auto" /></p>
<ol start="2">
<li><p>Terraform Installation Task, where you specify the Terraform version to be installed on the agent before the subsequent tasks execute</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734765311940/51f6d757-fb51-40e7-aae3-1659774d233d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Terraform Init task</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734765431178/2cf6ebcb-3de7-4627-b9b5-18fee91b4b06.png" alt class="image--center mx-auto" /></p>
<p> Here we configure the backend, which eliminates the need to explicitly define backend details in the Terraform configuration (<code>versions.tf</code>) file.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734765493889/a2bba903-a0d6-4ece-904e-7b2ce1949b39.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Terraform Validate Task</strong></p>
<p> This task validates the Terraform configuration located at the path given in the configuration directory field.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735181168111/c66edbe6-1842-41a2-b91b-dbc6b2280231.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Terraform Plan Task</strong></p>
<p> This task runs the <code>terraform plan</code> command and displays the resource creation plan for the <strong>dev</strong> environment. Along with <code>terraform plan</code>, we provide an additional argument, <code>-var-file=dev.tfvars</code>, which makes Terraform use the variables dedicated to the dev environment for resource creation.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735181568458/62c5e902-87e5-4b61-a846-156a5036f52f.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p><strong>Terraform Apply Task</strong></p>
<p>This task runs the <code>terraform apply</code> command and performs the resource creation in the <strong>dev</strong> environment. Along with <code>terraform apply</code>, we provide two additional arguments, <code>-var-file=dev.tfvars</code> &amp; <code>-auto-approve</code>, which use the variables dedicated to the dev environment and create the resources without asking for manual confirmation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735182053301/db1d5d48-4bdf-4afb-abac-7456efe49892.png" alt class="image--center mx-auto" /></p>
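<p>Behind the screenshots, the Dev stage tasks are equivalent to running the following Terraform commands. This is an illustrative sketch: the <code>-backend-config</code> values are placeholders for whatever backend resource group, storage account, container, and state key you configured in the init task.</p>
<pre><code class="lang-yaml"># Sketch of the commands the Dev stage tasks execute (placeholder backend values)
terraform init \
  -backend-config="resource_group_name=&lt;backend-rg&gt;" \
  -backend-config="storage_account_name=&lt;storage-account&gt;" \
  -backend-config="container_name=&lt;container&gt;" \
  -backend-config="key=&lt;dev-state-key&gt;"

terraform validate

terraform plan -var-file=dev.tfvars

terraform apply -var-file=dev.tfvars -auto-approve
</code></pre>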
<h3 id="heading-dev-environment-completion">Dev Environment Completion</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735286402151/5ae7489d-37d6-4abb-a8da-66682c6b60b8.png" alt class="image--center mx-auto" /></p>
<p>With this, the Dev deployment is complete.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735286685018/7d1d6d76-d976-412f-bc0b-bb7a9f72d8ca.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-terraform-manifest-modification-for-dev-environment">Terraform manifest Modification for Dev Environment</h3>
<p>The Dev environment serves as the first stage for deploying and testing infrastructure. The configuration (<code>tfvars</code>) for the Dev environment includes:</p>
<ul>
<li><p><strong>Virtual Network (VNet):</strong> <code>10.1.0.0/16</code></p>
</li>
<li><p><strong>Subnets:</strong></p>
<ul>
<li><p>Web Subnet: <code>10.1.1.0/24</code></p>
</li>
<li><p>App Subnet: <code>10.1.11.0/24</code></p>
</li>
<li><p>DB Subnet: <code>10.1.21.0/24</code></p>
</li>
<li><p>Bastion Subnet: <code>10.1.100.0/24</code></p>
</li>
</ul>
</li>
</ul>
<p>This environment is automatically deployed without manual intervention.</p>
<h3 id="heading-tfvars-for-dev-environment">tfvars for Dev environment</h3>
<pre><code class="lang-yaml"><span class="hljs-string">environment</span> <span class="hljs-string">=</span> <span class="hljs-string">"dev"</span>
<span class="hljs-string">vnet_address_space</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.1.0.0/16"</span>]

<span class="hljs-string">web_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"websubnet"</span>
<span class="hljs-string">web_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.1.1.0/24"</span>]

<span class="hljs-string">app_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"appsubnet"</span>
<span class="hljs-string">app_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.1.11.0/24"</span>]

<span class="hljs-string">db_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"dbsubnet"</span>
<span class="hljs-string">db_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.1.21.0/24"</span>]

<span class="hljs-string">bastion_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"bastionsubnet"</span>
<span class="hljs-string">bastion_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.1.100.0/24"</span>]
</code></pre>
<h2 id="heading-qa-environment">QA Environment</h2>
<h3 id="heading-release-pipeline-for-qa-environment-with-pre-deployment-approval">Release Pipeline for QA Environment with Pre-Deployment Approval</h3>
<p>The QA environment introduces <strong>Pre-Deployment Approvals</strong>, ensuring changes are reviewed before deployment. This step adds an extra layer of validation to maintain infrastructure consistency.</p>
<p>Most of the pipeline configuration remains the same, with only minor modifications; from here on, we’ll highlight only the configurations that change for the QA environment. Since most of the configuration is unchanged, we can directly clone the Dev stage by <strong>clicking on Clone</strong>, as highlighted below, and then make the necessary modifications.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735287349441/26f05549-21af-4a59-82d1-e3d168ff5fcd.png" alt class="image--center mx-auto" /></p>
<p>Once cloned we can make the necessary modifications as shown</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735287559184/8d16baef-5c59-424e-9198-ccf87e3f5bd3.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>Configure Pre-Deployment Approver</strong></p>
<p> In the QA deployment, we want the release to be verified before it proceeds, so setting up a deployment approval is necessary; it can be configured as shown below.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735288004379/e66850c6-6536-464d-a8a8-41f97025274e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Terraform Init Task - QA</strong></p>
<p> In the QA environment, the <code>terraform init</code> command must be configured with a backend block pointing to the QA-specific container key.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735288459174/aa0d39ee-e9d2-444b-a304-a0453a589994.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Terraform Plan Task - QA</strong></p>
<p> There is no change to the validate task. In the plan task, the <code>terraform plan</code> command has to use the <code>qa.tfvars</code> file, which requires the following change; the command then runs as <code>terraform plan -var-file=qa.tfvars</code>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735288777397/9189f26a-3d48-4c07-a50d-cf83f6bf7eed.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Terraform Apply Task - QA</strong></p>
<p> Similarly, the <code>terraform apply</code> task has to pick up the <code>qa.tfvars</code> file, as shown below.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735289056809/a2508610-5fc6-4bb1-85c8-a9c7610690da.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
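<p>The net effect of these QA modifications is that <code>terraform init</code> points at a QA-specific state key while plan and apply pick up <code>qa.tfvars</code>. Sketched as commands (the state key name is illustrative):</p>
<pre><code class="lang-yaml">terraform init -backend-config="key=&lt;qa-state-key&gt;"  # QA-specific state file
terraform plan -var-file=qa.tfvars
terraform apply -var-file=qa.tfvars -auto-approve
</code></pre>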
<h3 id="heading-qa-environment-completion">QA Environment Completion</h3>
<p>As per the configuration, the QA deployment is now pending approval; once approved, it will deploy the identical environment with QA-specific settings.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735290777846/8343c142-73b8-4eea-b18a-c562278926df.png" alt class="image--center mx-auto" /></p>
<p>Once approved, the deployment starts.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735290939973/418367cc-400d-41c2-92c8-875668d25e57.png" alt class="image--center mx-auto" /></p>
<p>The QA deployment is now complete, with all of its resources provisioned.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735291158797/45fea3a1-5a6f-4864-9191-33e893cd4bae.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735292781480/c41a7762-c66e-43c6-bf8f-23de6b59e195.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-terraform-manifest-modification-for-qa-environment">Terraform Manifest Modification for QA Environment</h3>
<p>Configuration for QA includes:</p>
<ul>
<li><p><strong>Virtual Network (VNet):</strong> <code>10.2.0.0/16</code></p>
</li>
<li><p><strong>Subnets:</strong></p>
<ul>
<li><p>Web Subnet: <code>10.2.1.0/24</code></p>
</li>
<li><p>App Subnet: <code>10.2.11.0/24</code></p>
</li>
<li><p>DB Subnet: <code>10.2.21.0/24</code></p>
</li>
<li><p>Bastion Subnet: <code>10.2.100.0/24</code></p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-tfvars-for-qa-environment">tfvars for QA environment</h3>
<pre><code class="lang-yaml"><span class="hljs-string">environment</span> <span class="hljs-string">=</span> <span class="hljs-string">"qa"</span>

<span class="hljs-string">vnet_address_space</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.2.0.0/16"</span>]

<span class="hljs-string">web_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"websubnet"</span>
<span class="hljs-string">web_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.2.1.0/24"</span>]

<span class="hljs-string">app_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"appsubnet"</span>
<span class="hljs-string">app_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.2.11.0/24"</span>]

<span class="hljs-string">db_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"dbsubnet"</span>
<span class="hljs-string">db_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.2.21.0/24"</span>]

<span class="hljs-string">bastion_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"bastionsubnet"</span>
<span class="hljs-string">bastion_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.2.100.0/24"</span>]
</code></pre>
<h2 id="heading-staging-environment-with-pre-amp-post-deployment-approval">Staging Environment with Pre &amp; Post Deployment Approval</h2>
<p>Just as in the QA environment, similar modifications have to be made to each task.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735290382737/ddaf2efd-0fea-4d13-804c-2c9046089985.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-staging-environment-completion">Staging Environment Completion</h3>
<p>For Staging as well, the deployment completes after approval.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735293062279/4820bc67-5c21-4e3b-a341-412b7159a768.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735293101102/717627fb-c609-4b50-aa45-9d470b7d6fac.png" alt class="image--center mx-auto" /></p>
<p>The Staging environment represents a near-production setup and includes both <strong>Pre-Deployment</strong> and <strong>Post-Deployment Approvals</strong>. Pre-deployment approval ensures changes are authorized before execution, while post-deployment approval validates successful deployment and functionality.</p>
<p>Configuration for Staging includes:</p>
<ul>
<li><p><strong>Virtual Network (VNet):</strong> <code>10.3.0.0/16</code></p>
</li>
<li><p><strong>Subnets:</strong></p>
<ul>
<li><p>Web Subnet: <code>10.3.1.0/24</code></p>
</li>
<li><p>App Subnet: <code>10.3.11.0/24</code></p>
</li>
<li><p>DB Subnet: <code>10.3.21.0/24</code></p>
</li>
<li><p>Bastion Subnet: <code>10.3.100.0/24</code></p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-tfvars-for-staging-environment">tfvars for Staging environment</h3>
<pre><code class="lang-yaml"><span class="hljs-string">environment</span> <span class="hljs-string">=</span> <span class="hljs-string">"staging"</span>

<span class="hljs-string">vnet_address_space</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.3.0.0/16"</span>]

<span class="hljs-string">web_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"websubnet"</span>
<span class="hljs-string">web_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.3.1.0/24"</span>]

<span class="hljs-string">app_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"appsubnet"</span>
<span class="hljs-string">app_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.3.11.0/24"</span>]

<span class="hljs-string">db_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"dbsubnet"</span>
<span class="hljs-string">db_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.3.21.0/24"</span>]

<span class="hljs-string">bastion_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"bastionsubnet"</span>
<span class="hljs-string">bastion_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.3.100.0/24"</span>]
</code></pre>
<h2 id="heading-prod-environment-with-pre-deployment-approval">Prod Environment with Pre-Deployment Approval</h2>
<p>Like the Staging environment, similar changes must also be made for Prod.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735290482230/bcb7ae09-ad70-4a72-b15d-01879c4a96df.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-prod-deployment-completion">Prod Deployment Completion</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735293234465/f863cb0d-5de7-4c96-8ed4-b49b45474e12.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735293265964/a7f2fd18-07b3-42f9-b2e4-5f32baf46f50.png" alt class="image--center mx-auto" /></p>
<p>The Production (Prod) environment is the final deployment stage, ensuring the infrastructure is ready for live traffic. This environment requires <strong>Pre-Deployment Approvals</strong> to prevent unintended changes and maintain high reliability.</p>
<p>Configuration for Prod includes:</p>
<ul>
<li><p><strong>Virtual Network (VNet):</strong> <code>10.4.0.0/16</code></p>
</li>
<li><p><strong>Subnets:</strong></p>
<ul>
<li><p>Web Subnet: <code>10.4.1.0/24</code></p>
</li>
<li><p>App Subnet: <code>10.4.11.0/24</code></p>
</li>
<li><p>DB Subnet: <code>10.4.21.0/24</code></p>
</li>
<li><p>Bastion Subnet: <code>10.4.100.0/24</code></p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-tfvars-for-prod-environment">tfvars for Prod Environment</h3>
<pre><code class="lang-yaml"><span class="hljs-string">environment</span> <span class="hljs-string">=</span> <span class="hljs-string">"prod"</span>

<span class="hljs-string">vnet_address_space</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.4.0.0/16"</span>]

<span class="hljs-string">web_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"websubnet"</span>
<span class="hljs-string">web_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.4.1.0/24"</span>]

<span class="hljs-string">app_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"appsubnet"</span>
<span class="hljs-string">app_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.4.11.0/24"</span>]

<span class="hljs-string">db_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"dbsubnet"</span>
<span class="hljs-string">db_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.4.21.0/24"</span>]

<span class="hljs-string">bastion_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"bastionsubnet"</span>
<span class="hljs-string">bastion_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.4.100.0/24"</span>]
</code></pre>
<h1 id="heading-remote-backend">Remote Backend</h1>
<p>To ensure environment-specific isolation and consistency in Terraform state management, we configure a separate state storage container for each environment. This approach allows for safe, concurrent deployments while avoiding conflicts in state files. Here's how the Terraform settings are structured:</p>
<pre><code class="lang-yaml"><span class="hljs-string">terraform</span> {
  <span class="hljs-string">required_version</span> <span class="hljs-string">=</span> <span class="hljs-string">"~&gt;1.5.6"</span> <span class="hljs-comment"># Minor version upgrades are allowed</span>
  <span class="hljs-string">required_providers</span> {
    <span class="hljs-string">azurerm</span> <span class="hljs-string">=</span> {
      <span class="hljs-string">source</span>  <span class="hljs-string">=</span> <span class="hljs-string">"hashicorp/azurerm"</span>
      <span class="hljs-string">version</span> <span class="hljs-string">=</span> <span class="hljs-string">"~&gt;4.3.0"</span>
    }

    <span class="hljs-string">random</span> <span class="hljs-string">=</span> {
      <span class="hljs-string">source</span>  <span class="hljs-string">=</span> <span class="hljs-string">"hashicorp/random"</span>
      <span class="hljs-string">version</span> <span class="hljs-string">=</span> <span class="hljs-string">"&gt;=3.6.0"</span>
    }
  }
  <span class="hljs-comment"># Nothing should be configured here in backend block as the detils will be passed via Azure DevOps pipeline</span>
  <span class="hljs-string">backend</span> <span class="hljs-string">"azurerm"</span> {

  }

}

<span class="hljs-string">provider</span> <span class="hljs-string">"azurerm"</span> {
  <span class="hljs-string">features</span> {}
  <span class="hljs-string">subscription_id</span> <span class="hljs-string">=</span> <span class="hljs-string">"XXXXXX-9342-XXXXXXX-XXXXXXXX"</span>
}
</code></pre>
<h4 id="heading-terraform-block"><strong>Terraform Block</strong></h4>
<p>The <code>terraform</code> block defines the required Terraform version, providers, and the backend configuration:</p>
<ul>
<li><p><strong>Required Version:</strong> Locked to <code>~&gt;1.5.6</code>, allowing patch-level upgrades within the 1.5.x series.</p>
</li>
<li><p><strong>Providers:</strong></p>
<ul>
<li><p><code>azurerm</code>: Azure Resource Manager provider, pinned to <code>~&gt;4.3.0</code> for stability.</p>
</li>
<li><p><code>random</code>: Used for generating random values, with a flexible version constraint of <code>&gt;=3.6.0</code>.</p>
</li>
</ul>
</li>
<li><p><strong>Backend Block:</strong><br />  The backend block is intentionally left blank as the configuration will be dynamically provided through the Azure DevOps pipeline, ensuring secure and consistent handling of state files across environments.</p>
</li>
</ul>
<h4 id="heading-provider-configuration"><strong>Provider Configuration</strong></h4>
<p>The <code>provider</code> block configures the Azure Resource Manager (AzureRM) provider. It includes:</p>
<ul>
<li><p><strong>Features Block:</strong> Required by the AzureRM provider (even when empty); used to opt into additional provider behaviors.</p>
</li>
<li><p><strong>Subscription ID:</strong> Specifies the Azure subscription for provisioning resources.</p>
</li>
</ul>
<p>This setup ensures that the Terraform backend remains decoupled from the static code and is dynamically configured during pipeline execution. It allows for:</p>
<ol>
<li><p>Environment-specific state management.</p>
</li>
<li><p>Enhanced security by avoiding hardcoding backend credentials.</p>
</li>
<li><p>Flexibility in scaling across multiple environments.</p>
</li>
</ol>
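<p>One way the pipeline can fill in the empty backend block is with a per-environment backend configuration passed at init time. The file name and values below are illustrative, not this project’s actual settings:</p>
<pre><code class="lang-yaml"># dev.backend.tfvars — illustrative backend settings injected by the pipeline
resource_group_name  = "terraform-state-rg"
storage_account_name = "tfstatestore"
container_name       = "tfstate"
key                  = "dev/terraform.tfstate"

# Supplied at init time:
#   terraform init -backend-config=dev.backend.tfvars
</code></pre>
<p>Using a distinct <code>key</code> (or container) per environment is what keeps the Dev, QA, Staging, and Prod state files isolated from one another.</p>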
<h1 id="heading-dependency-lock-file">Dependency Lock File</h1>
<p>The <code>.terraform.lock.hcl</code> file ensures consistency and reproducibility of Terraform runs by locking provider dependencies to specific versions. When multiple team members or CI/CD pipelines work on the same Terraform configuration, this file ensures that all environments use the exact same provider versions. Provider updates may introduce breaking changes or unexpected behaviors. Locking versions prevents Terraform from automatically upgrading to potentially incompatible versions.</p>
<p>You can use the command below to generate a cross-platform lock file that records provider hashes for Windows, macOS, and Linux.</p>
<pre><code class="lang-yaml"><span class="hljs-string">terraform</span> <span class="hljs-string">providers</span> <span class="hljs-string">lock</span> <span class="hljs-string">-platform=windows_amd64</span> <span class="hljs-string">-platform=darwin_amd64</span> <span class="hljs-string">-platform=linux_amd64</span>
</code></pre>
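<p>After running the command, <code>.terraform.lock.hcl</code> contains one entry per provider, recording the selected version, the constraint, and hashes for every requested platform. The entry below shows only the illustrative shape, with the hash values elided:</p>
<pre><code class="lang-yaml">provider "registry.terraform.io/hashicorp/azurerm" {
  version     = "4.3.0"
  constraints = "~&gt; 4.3.0"
  hashes = [
    # "h1:" hashes cover the requested platforms (windows_amd64, darwin_amd64, linux_amd64)
    # "zh:" hashes cover the provider's signed release archives
  ]
}
</code></pre>
<p>Commit this file to version control so every teammate and pipeline run resolves the exact same provider builds.</p>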
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733541132346/a3b2097e-b36a-4337-bbdd-1a947f9b70ea.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Azure Application Gateway Implementation via Terraform]]></title><description><![CDATA[Azure Application Gateway is a web traffic load balancer that helps you manage traffic to your web applications. Unlike traditional load balancers, it operates at the application layer (Layer 7) of the OSI model.
Use-Case

Hosting Multi-Tier Applicat...]]></description><link>https://www.devopswithritesh.in/azure-application-gateway-implementation-via-terraform</link><guid isPermaLink="true">https://www.devopswithritesh.in/azure-application-gateway-implementation-via-terraform</guid><category><![CDATA[Azure]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[Azure Application Gateway]]></category><category><![CDATA[TerraformwithAzure]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[Devops articles]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Sat, 30 Nov 2024 02:13:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1732932599255/3aa49eca-40a3-4af8-81e3-b4ab995d1939.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Azure Application Gateway</strong> is a web traffic load balancer that helps you manage traffic to your web applications. Unlike traditional load balancers, it operates at the <strong>application layer (Layer 7)</strong> of the OSI model.</p>
<h2 id="heading-use-case">Use-Case</h2>
<ol>
<li><p><strong>Hosting Multi-Tier Applications</strong><br /> Route requests to different services within your application based on the URL path.<br /> Example: <code>/auth</code> routes to the authentication service, while <code>/orders</code> routes to the order management system.</p>
</li>
<li><p><strong>Protecting Applications with WAF</strong><br /> Use the <strong>WAF</strong> feature to secure your applications against web vulnerabilities, especially when hosting sensitive data or user-facing applications.</p>
</li>
<li><p><strong>Serving Multiple Applications on a Single Gateway</strong><br /> Host multiple websites or applications using different domain names or subdomains with a single Application Gateway instance.</p>
</li>
<li><p><strong>Modernizing Legacy Applications</strong><br /> Implement SSL offloading and URL-based routing to improve the performance of legacy systems that lack such capabilities.</p>
</li>
<li><p><strong>Scaling Secure Web Applications</strong><br /> Use autoscaling and load balancing to ensure availability and performance for applications experiencing variable or high traffic loads.</p>
</li>
<li><p><strong>Global Applications with Centralized Security</strong><br /> Centralize security policies for globally distributed applications using a single WAF-enabled Application Gateway.</p>
</li>
</ol>
<h1 id="heading-project-overview">Project Overview</h1>
<p>In this project, we are extending our network architecture by introducing a dedicated <strong>Application Gateway Subnet</strong>, similar to the previously configured <strong>App Subnet</strong> and <strong>DB Subnet</strong>.</p>
<h3 id="heading-key-configurations-for-azure-application-gateway"><strong>Key Configurations for Azure Application Gateway:</strong></h3>
<p>1️⃣ <strong>Application Gateway Backend Pool:</strong></p>
<ul>
<li>Associate the pool with the Web <strong>VMSS (Virtual Machine Scale Set)</strong> for handling application traffic.</li>
</ul>
<p>2️⃣ <strong>Frontend IP Configuration:</strong></p>
<ul>
<li>Link the <strong>Frontend IP object</strong> with a public IP to enable external access.</li>
</ul>
<p>3️⃣ <strong>Listeners Configuration:</strong></p>
<ul>
<li>Configure <strong>listeners</strong> to monitor incoming requests on <strong>port 80</strong>.</li>
</ul>
<p>4️⃣ <strong>Backend HTTP Settings:</strong></p>
<ul>
<li>Define settings to connect the <strong>backend pool</strong> (Web VMSS) with appropriate HTTP configurations.</li>
</ul>
<p>5️⃣ <strong>Routing Rules:</strong></p>
<ul>
<li>Establish routing <strong>rules</strong> to associate listeners with backend pools and HTTP settings.</li>
</ul>
<p>6️⃣ <strong>Health Probe Configuration:</strong></p>
<ul>
<li>Enable <strong>health probes</strong> to monitor and ensure the availability of backend instances.</li>
</ul>
<p>By the end of this project, the <strong>Azure Application Gateway</strong> will efficiently route and manage application traffic, ensuring optimal performance and availability.</p>
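<p>In Terraform, the six configurations above map onto named blocks inside a single <code>azurerm_application_gateway</code> resource. The skeleton below is a hedged outline of that mapping; the block names come from the AzureRM provider schema, but all values and referenced resources are placeholders rather than this project’s actual code:</p>
<pre><code class="lang-yaml">resource "azurerm_application_gateway" "web_ag" {
  # name, resource_group_name, location, sku, gateway_ip_configuration,
  # and frontend_port (port 80) omitted for brevity

  frontend_ip_configuration {            # (2) public-facing IP
    name                 = "ag-frontend-ip"
    public_ip_address_id = azurerm_public_ip.ag_pip.id
  }

  http_listener {                        # (3) listen for requests on port 80
    name                           = "http-listener"
    frontend_ip_configuration_name = "ag-frontend-ip"
    frontend_port_name             = "port-80"
    protocol                       = "Http"
  }

  backend_address_pool {                 # (1) associated with the Web VMSS
    name = "web-vmss-pool"
  }

  backend_http_settings {                # (4) how the gateway talks to the pool
    name                  = "http-settings"
    port                  = 80
    protocol              = "Http"
    cookie_based_affinity = "Disabled"
    probe_name            = "health-probe"
  }

  request_routing_rule {                 # (5) listener -&gt; pool + HTTP settings
    name                       = "routing-rule"
    rule_type                  = "Basic"
    http_listener_name         = "http-listener"
    backend_address_pool_name  = "web-vmss-pool"
    backend_http_settings_name = "http-settings"
    priority                   = 100
  }

  probe {                                # (6) backend health monitoring
    name                = "health-probe"
    protocol            = "Http"
    path                = "/"
    interval            = 30
    timeout             = 30
    unhealthy_threshold = 3
    host                = "127.0.0.1"
  }
}
</code></pre>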
<h1 id="heading-provision-application-gateway-subnet">Provision Application Gateway Subnet</h1>
<p>This Terraform code configures the <strong>Application Gateway Subnet (AG Subnet)</strong>, associates it with a <strong>Network Security Group (NSG)</strong>, and creates inbound rules to allow traffic essential for the Application Gateway.</p>
<p>This setup ensures that the Application Gateway operates securely within its dedicated subnet while allowing only essential traffic. The NSG rules allow HTTP (80), HTTPS (443), and required ephemeral ports (65200-65535), ensuring proper communication and functionality.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Create AG Subnet</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet"</span> <span class="hljs-string">"ag_subnet"</span> {
  <span class="hljs-string">name</span>                 <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-${var.ag_subnet_name}"</span>
  <span class="hljs-string">virtual_network_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_virtual_network.vnet.name</span>
  <span class="hljs-string">address_prefixes</span>     <span class="hljs-string">=</span> <span class="hljs-string">var.ag_subnet_address</span>
  <span class="hljs-string">resource_group_name</span>  <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}

<span class="hljs-comment"># Create NSG for Subnet</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_group"</span> <span class="hljs-string">"ag_snet_nsg"</span> {

  <span class="hljs-string">name</span>                <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_subnet.ag_subnet.name}-nsg"</span>
  <span class="hljs-string">location</span>            <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
  <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}

<span class="hljs-comment"># Associate ag_subnet with ag_snet_nsg</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet_network_security_group_association"</span> <span class="hljs-string">"associate_agnet_agnsg"</span> {
  <span class="hljs-string">depends_on</span>                <span class="hljs-string">=</span> [<span class="hljs-string">azurerm_network_security_rule.application_gtw_nsg_rules_inbound</span>]
  <span class="hljs-string">subnet_id</span>                 <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.ag_subnet.id</span>
  <span class="hljs-string">network_security_group_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.ag_snet_nsg.id</span>

}

<span class="hljs-comment"># Locals block for security rules</span>
<span class="hljs-string">locals</span> {
  <span class="hljs-string">ag_inbound_port_map</span> <span class="hljs-string">=</span> {
    <span class="hljs-comment"># priority:port</span>
    <span class="hljs-string">"100"</span> <span class="hljs-string">:</span> <span class="hljs-string">"80"</span>
    <span class="hljs-string">"110"</span> <span class="hljs-string">:</span> <span class="hljs-string">"443"</span>
    <span class="hljs-string">"130"</span> <span class="hljs-string">:</span> <span class="hljs-string">"65200-65535"</span>
  }
}

<span class="hljs-comment"># Create NSG Rules using azurerm_network_security_rule resource</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_rule"</span> <span class="hljs-string">"application_gtw_nsg_rules_inbound"</span> {
  <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> <span class="hljs-string">local.ag_inbound_port_map</span>

  <span class="hljs-string">name</span>                        <span class="hljs-string">=</span> <span class="hljs-string">"Rule_Port_${each.value}"</span>
  <span class="hljs-string">access</span>                      <span class="hljs-string">=</span> <span class="hljs-string">"Allow"</span>
  <span class="hljs-string">direction</span>                   <span class="hljs-string">=</span> <span class="hljs-string">"Inbound"</span>
  <span class="hljs-string">network_security_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.ag_snet_nsg.name</span>
  <span class="hljs-string">priority</span>                    <span class="hljs-string">=</span> <span class="hljs-string">each.key</span>
  <span class="hljs-string">protocol</span>                    <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span>
  <span class="hljs-string">source_port_range</span>           <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
  <span class="hljs-string">destination_port_range</span>      <span class="hljs-string">=</span> <span class="hljs-string">each.value</span>
  <span class="hljs-string">source_address_prefix</span>       <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
  <span class="hljs-string">destination_address_prefix</span>  <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
  <span class="hljs-string">resource_group_name</span>         <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}
</code></pre>
<h4 id="heading-1-create-an-application-gateway-subnet"><strong>1. Create an Application Gateway Subnet</strong></h4>
<p>The <code>azurerm_subnet</code> resource creates a subnet dedicated to the Application Gateway.</p>
<ul>
<li><p><strong>Key Attributes:</strong></p>
<ul>
<li><p><code>address_prefixes</code>: Specifies the address range for the subnet.</p>
</li>
<li><p><code>virtual_network_name</code>: Associates the subnet with the existing Virtual Network.</p>
</li>
<li><p><code>resource_group_name</code>: Links the subnet to the resource group.</p>
</li>
</ul>
</li>
</ul>
<h4 id="heading-2-create-network-security-group-nsg"><strong>2. Create Network Security Group (NSG)</strong></h4>
<p>The <code>azurerm_network_security_group</code> resource sets up an NSG for securing the Application Gateway Subnet.</p>
<ul>
<li><p><strong>Key Attributes:</strong></p>
<ul>
<li><p><code>location</code>: Specifies the region of the NSG.</p>
</li>
<li><p><code>resource_group_name</code>: Associates the NSG with the same resource group as the subnet.</p>
</li>
</ul>
</li>
</ul>
<h4 id="heading-3-associate-nsg-with-the-subnet"><strong>3. Associate NSG with the Subnet</strong></h4>
<p>The <code>azurerm_subnet_network_security_group_association</code> resource links the Application Gateway Subnet with its NSG.</p>
<ul>
<li><p><strong>Key Attributes:</strong></p>
<ul>
<li><p><code>subnet_id</code>: Identifies the Application Gateway Subnet.</p>
</li>
<li><p><code>network_security_group_id</code>: Associates the NSG with the subnet.</p>
</li>
</ul>
</li>
<li><p><strong>Dependency:</strong> This association is dependent on the creation of NSG rules to ensure proper configuration.</p>
</li>
</ul>
<h4 id="heading-4-define-local-map-for-port-rules"><strong>4. Define Local Map for Port Rules</strong></h4>
<p>The <code>locals</code> block contains a map of inbound ports with their respective priorities for the Application Gateway.</p>
<ul>
<li><p><strong>Example:</strong></p>
<ul>
<li><p>Port 80 for HTTP (priority 100).</p>
</li>
<li><p>Port 443 for HTTPS (priority 110).</p>
</li>
<li><p>Ports 65200–65535 for internal communication (priority 130).</p>
</li>
</ul>
</li>
</ul>
<h4 id="heading-5-create-nsg-rules"><strong>5. Create NSG Rules</strong></h4>
<p>The <code>azurerm_network_security_rule</code> resource dynamically generates inbound security rules using the <code>for_each</code> loop based on the port map defined in the <code>locals</code> block.</p>
<ul>
<li><p><strong>Key Attributes:</strong></p>
<ul>
<li><p><code>priority</code>: Ensures the order of rule evaluation.</p>
</li>
<li><p><code>destination_port_range</code>: Specifies the target port(s) allowed.</p>
</li>
<li><p><code>protocol</code>: Restricts to TCP traffic.</p>
</li>
<li><p><code>direction</code>: Allows only inbound traffic.</p>
</li>
<li><p><code>access</code>: Permits traffic matching the rule.</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-ephemeral-ports">Ephemeral Ports</h3>
<p>Ephemeral ports, also known as <strong>dynamic ports</strong>, are short-lived ports automatically allocated by the operating system to facilitate communication between a client and a server. These ports are used as the <strong>source ports</strong> for outbound traffic in a TCP/IP connection and are released once the communication is complete.</p>
<h3 id="heading-why-are-ephemeral-ports-used-in-application-gateway"><strong>Why Are Ephemeral Ports Used in Application Gateway?</strong></h3>
<p>For Azure Application Gateway, ephemeral ports are crucial for managing backend connections. Here's why:</p>
<ol>
<li><p><strong>Backend Pool Communication</strong>: When the Application Gateway forwards client requests to backend servers (e.g., VMs or VMSS), it uses ephemeral ports for each session to establish a secure connection.</p>
</li>
<li><p><strong>Load Balancing</strong>: Helps maintain multiple simultaneous connections to different backend servers, improving performance.</p>
</li>
<li><p><strong>Avoiding Port Conflicts</strong>: Using a large range of dynamic ports reduces the risk of port conflicts between connections.</p>
</li>
</ol>
<h3 id="heading-in-context-of-the-nsg-rules"><strong>In Context of the NSG Rules</strong></h3>
<p>In the provided Terraform configuration, the rule to allow ephemeral ports (<strong>65200–65535</strong>) ensures:</p>
<ol>
<li><p><strong>Communication Between App Gateway and Backend Pool</strong>: This range allows the Application Gateway to connect to resources in its backend pool, such as VMs or VMSS.</p>
</li>
<li><p><strong>Smooth Functioning of HTTP/HTTPS Traffic</strong>: Without this rule, the Application Gateway might fail to establish connections with backend instances.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732589878606/29e69e4a-81fb-4c43-8e06-77ae0044dc8b.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-provision-application-gateway">Provision Application Gateway</h1>
<p>Provisioning the Application Gateway involves multiple sub-components, each of which is explained below.</p>
<h3 id="heading-1-create-a-public-ip-for-the-application-gateway"><strong>1. Create a Public IP for the Application Gateway</strong></h3>
<p>This public IP will be used for front-end communication.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Required Resource: Azure Application Gateway Public IP</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_public_ip"</span> <span class="hljs-string">"web_application_gateway_publicip"</span> {
    <span class="hljs-string">name</span>                <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-web-ag-publicip"</span>
    <span class="hljs-string">allocation_method</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Static"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span>            <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">sku</span>                 <span class="hljs-string">=</span> <span class="hljs-string">"Standard"</span>
}
</code></pre>
<hr />
<h3 id="heading-2-define-local-variables"><strong>2. Define Local Variables</strong></h3>
<p>These variables simplify naming conventions and support multiple applications with context-path-based routing.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Azure AG Locals Block</span>
<span class="hljs-string">locals</span> {
  <span class="hljs-comment"># Generic Configuration</span>
  <span class="hljs-string">frontend_port_name</span>               <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-feport"</span>
  <span class="hljs-string">frontend_ip_configuration_name</span>   <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-feipconfig"</span>
  <span class="hljs-string">listener_name</span>                    <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-httplistener"</span>
  <span class="hljs-string">request_routing_rule1_name</span>       <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-requestrouting1"</span>

  <span class="hljs-comment"># Application-Specific Configuration (App1)</span>
  <span class="hljs-string">backend_address_pool_name_app1</span>   <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-bepap_app1"</span>
  <span class="hljs-string">http_setting_name_app1</span>           <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-behttpsetting_app1"</span>
  <span class="hljs-string">probe_name_app1</span>                  <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-beprobe_app1"</span>
}
</code></pre>
<hr />
<h3 id="heading-3-provision-the-application-gateway"><strong>3. Provision the Application Gateway</strong></h3>
<p>This block provisions the gateway with a <strong>Standard_v2 SKU</strong> and configures autoscaling, frontend IP, listeners, backend pools, and routing rules.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Resource: Azure Application Gateway</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_application_gateway"</span> <span class="hljs-string">"web_application_gateway"</span> {
    <span class="hljs-string">name</span>                <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-web_application_gateway"</span>
    <span class="hljs-string">location</span>            <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

    <span class="hljs-comment"># Gateway SKU and Autoscaling</span>
    <span class="hljs-string">sku</span> {
      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard_v2"</span>
      <span class="hljs-string">tier</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard_v2"</span>
    }
    <span class="hljs-string">autoscale_configuration</span> {
      <span class="hljs-string">min_capacity</span> <span class="hljs-string">=</span> <span class="hljs-number">0</span>
      <span class="hljs-string">max_capacity</span> <span class="hljs-string">=</span> <span class="hljs-number">10</span> <span class="hljs-comment"># Max 125 for Standard_v2 SKU</span>
    }

    <span class="hljs-comment"># Gateway IP Configuration</span>
    <span class="hljs-string">gateway_ip_configuration</span> {
      <span class="hljs-string">name</span>     <span class="hljs-string">=</span> <span class="hljs-string">"ag_ip_config"</span>
      <span class="hljs-string">subnet_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.ag_subnet.id</span>
    }

    <span class="hljs-comment"># Frontend Configuration</span>
    <span class="hljs-string">frontend_port</span> {
      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">local.frontend_port_name</span>
      <span class="hljs-string">port</span> <span class="hljs-string">=</span> <span class="hljs-number">80</span>
    }
    <span class="hljs-string">frontend_ip_configuration</span> {
      <span class="hljs-string">name</span>                 <span class="hljs-string">=</span> <span class="hljs-string">local.frontend_ip_configuration_name</span>
      <span class="hljs-string">public_ip_address_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_public_ip.web_application_gateway_publicip.id</span>
      <span class="hljs-comment"># Note: subnet_id is omitted here; a frontend IP configuration uses either a public IP or a subnet/private IP, not both</span>
    }

    <span class="hljs-comment"># HTTP Listener</span>
    <span class="hljs-string">http_listener</span> {
      <span class="hljs-string">name</span>                         <span class="hljs-string">=</span> <span class="hljs-string">local.listener_name</span>
      <span class="hljs-string">frontend_ip_configuration_name</span> <span class="hljs-string">=</span> <span class="hljs-string">local.frontend_ip_configuration_name</span>
      <span class="hljs-string">frontend_port_name</span>           <span class="hljs-string">=</span> <span class="hljs-string">local.frontend_port_name</span>
      <span class="hljs-string">protocol</span>                     <span class="hljs-string">=</span> <span class="hljs-string">"Http"</span>
    }

    <span class="hljs-comment"># Backend Configuration for App1</span>
    <span class="hljs-string">backend_address_pool</span> {
      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">local.backend_address_pool_name_app1</span>
    }
    <span class="hljs-string">backend_http_settings</span> {
      <span class="hljs-string">name</span>                  <span class="hljs-string">=</span> <span class="hljs-string">local.http_setting_name_app1</span>
      <span class="hljs-string">cookie_based_affinity</span> <span class="hljs-string">=</span> <span class="hljs-string">"Disabled"</span>
      <span class="hljs-string">port</span>                  <span class="hljs-string">=</span> <span class="hljs-number">80</span>
      <span class="hljs-string">protocol</span>              <span class="hljs-string">=</span> <span class="hljs-string">"Http"</span>
      <span class="hljs-string">request_timeout</span>       <span class="hljs-string">=</span> <span class="hljs-number">60</span>
      <span class="hljs-string">probe_name</span>            <span class="hljs-string">=</span> <span class="hljs-string">local.probe_name_app1</span>
    }

    <span class="hljs-comment"># Health Probe for App1</span>
    <span class="hljs-string">probe</span> {
      <span class="hljs-string">name</span>                 <span class="hljs-string">=</span> <span class="hljs-string">local.probe_name_app1</span>
      <span class="hljs-string">host</span>                 <span class="hljs-string">=</span> <span class="hljs-string">"127.0.0.1"</span>
      <span class="hljs-string">interval</span>             <span class="hljs-string">=</span> <span class="hljs-number">30</span>
      <span class="hljs-string">timeout</span>              <span class="hljs-string">=</span> <span class="hljs-number">30</span>
      <span class="hljs-string">unhealthy_threshold</span>  <span class="hljs-string">=</span> <span class="hljs-number">3</span>
      <span class="hljs-string">protocol</span>             <span class="hljs-string">=</span> <span class="hljs-string">"Http"</span>
      <span class="hljs-string">port</span>                 <span class="hljs-string">=</span> <span class="hljs-number">80</span>
      <span class="hljs-string">path</span>                 <span class="hljs-string">=</span> <span class="hljs-string">"/app1/status.html"</span>
      <span class="hljs-string">match</span> {
        <span class="hljs-string">body</span>         <span class="hljs-string">=</span> <span class="hljs-string">"App1"</span>
        <span class="hljs-string">status_code</span>  <span class="hljs-string">=</span> [<span class="hljs-string">"200"</span>]
      }
    }

    <span class="hljs-comment"># Request Routing Rule</span>
    <span class="hljs-string">request_routing_rule</span> {
      <span class="hljs-string">name</span>                        <span class="hljs-string">=</span> <span class="hljs-string">local.request_routing_rule1_name</span>
      <span class="hljs-string">rule_type</span>                   <span class="hljs-string">=</span> <span class="hljs-string">"Basic"</span>
      <span class="hljs-string">http_listener_name</span>          <span class="hljs-string">=</span> <span class="hljs-string">local.listener_name</span>
      <span class="hljs-string">backend_address_pool_name</span>   <span class="hljs-string">=</span> <span class="hljs-string">local.backend_address_pool_name_app1</span>
      <span class="hljs-string">backend_http_settings_name</span>  <span class="hljs-string">=</span> <span class="hljs-string">local.http_setting_name_app1</span>
    }
}
</code></pre>
<hr />
<h3 id="heading-4-key-features-of-the-configuration"><strong>4. Key Features of the Configuration</strong></h3>
<ol>
<li><p><strong>Public IP</strong>:</p>
<ul>
<li>Allocated statically and linked to the Application Gateway.</li>
</ul>
</li>
<li><p><strong>Autoscaling</strong>:</p>
<ul>
<li>The gateway dynamically scales between 0 and 10 instances (configurable up to 125).</li>
</ul>
</li>
<li><p><strong>Context-Path-Based Routing</strong>:</p>
<ul>
<li>The rule shown uses the <code>Basic</code> type, so all listener traffic goes to the App1 pool; the per-application naming prepares the configuration for path-based routing (e.g. <code>/app1</code> to the App1 VMSS).</li>
</ul>
</li>
<li><p><strong>Health Probes</strong>:</p>
<ul>
<li>Ensures backend VMSS health with custom probes targeting <a target="_blank" href="http://127.0.0.1/app1/status.html"><code>http://127.0.0.1/app1/status.html</code></a>.</li>
</ul>
</li>
<li><p><strong>Listener and Rule Configuration</strong>:</p>
<ul>
<li>Listens on port 80 and applies a request-routing rule to backend pools.</li>
</ul>
</li>
</ol>
<hr />
<h3 id="heading-5-attach-the-application-gateway-with-vmss-backend"><strong>5. Attach the Application Gateway with VMSS Backend</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732934101319/7624a063-ad84-4a5b-942e-86005ea0b0b9.png" alt class="image--center mx-auto" /></p>
<p>Previously, the Virtual Machine Scale Set (VMSS) was fronted by a standard load balancer for traffic distribution. In this setup, we transition the VMSS to use the <strong>Application Gateway's backend address pool</strong> instead.</p>
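<p>A minimal sketch of how that transition looks in Terraform (the VMSS and subnet names here are illustrative assumptions, not taken from this project): inside the VMSS <code>ip_configuration</code>, the load balancer pool reference is replaced with <code>application_gateway_backend_address_pool_ids</code>.</p>
<pre><code class="lang-yaml"># Sketch: point the Web VMSS NICs at the Application Gateway backend pool
resource "azurerm_linux_virtual_machine_scale_set" "web_vmss" {
  # ... existing VMSS configuration (name, sku, image, etc.) ...

  network_interface {
    name    = "web-vmss-nic"
    primary = true

    ip_configuration {
      name      = "internal"
      primary   = true
      subnet_id = azurerm_subnet.web_subnet.id # assumed Web Subnet resource

      # Replaces load_balancer_backend_address_pool_ids from the earlier setup
      application_gateway_backend_address_pool_ids = [
        for pool in azurerm_application_gateway.web_application_gateway.backend_address_pool :
        pool.id if pool.name == local.backend_address_pool_name_app1
      ]
    }
  }
}
</code></pre>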
<h3 id="heading-how-it-works"><strong>How It Works</strong></h3>
<ol>
<li><p>A <strong>public IP</strong> is provisioned for external communication.</p>
</li>
<li><p>The <strong>Application Gateway</strong> is deployed in a dedicated subnet.</p>
</li>
<li><p><strong>Frontend configuration</strong> is set up for HTTP traffic.</p>
</li>
<li><p>A <strong>listener</strong> receives incoming requests.</p>
</li>
<li><p><strong>Backend pools</strong> and <strong>routing rules</strong> direct traffic to the appropriate VMSS based on the URL path.</p>
</li>
<li><p><strong>Health probes</strong> ensure backend availability.</p>
</li>
</ol>
<hr />
<p>This Terraform configuration delivers a scalable and efficient Azure Application Gateway setup with context-path-based routing and dynamic scaling, making it well suited for modern applications.</p>
]]></content:encoded></item><item><title><![CDATA[Enhance Azure Traffic Manager with Terraform: Leveraging Data Sources]]></title><description><![CDATA[Azure Traffic Manager is a robust DNS-based traffic load balancer that enables you to distribute network traffic efficiently across multiple regions. Imagine a scenario where you've deployed both web and application servers in various regions to ensu...]]></description><link>https://www.devopswithritesh.in/enhance-azure-traffic-manager-with-terraform-leveraging-data-sources</link><guid isPermaLink="true">https://www.devopswithritesh.in/enhance-azure-traffic-manager-with-terraform-leveraging-data-sources</guid><category><![CDATA[Azure]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[TerraformwithAzure]]></category><category><![CDATA[azure traffic manager]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[IaC (Infrastructure as Code)]]></category><category><![CDATA[AzureRM]]></category><category><![CDATA[azure-devops]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Wed, 20 Nov 2024 07:12:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1732086621297/6cf6a2b6-b1bb-4793-a31c-b4951a16a10e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Azure Traffic Manager is a robust DNS-based traffic load balancer that enables you to distribute network traffic efficiently across multiple regions. Imagine a scenario where you've deployed both web and application servers in various regions to ensure high availability and disaster recovery. Each of these servers is fronted by Azure Load Balancers with public IPs exposed to handle incoming traffic. To optimize the distribution of traffic among these multiple regions and ensure seamless failover in case of regional outages, configuring Azure Traffic Manager is essential.</p>
<p>By integrating Azure Traffic Manager with your Terraform Infrastructure as Code (IaC) setup, you can automate the deployment and management of traffic routing policies, enhancing both the performance and resilience of your applications. This approach not only simplifies the configuration process but also ensures consistency and repeatability across your infrastructure deployments.</p>
<h1 id="heading-terraform-remote-state-datasource">Terraform Remote State Datasource</h1>
<p>The Terraform <strong><em>remote state data source</em></strong> retrieves the root-module output values of another Terraform configuration, using the latest state snapshot stored in its remote backend.</p>
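<p>For this to work, the source project must expose the values it wants to share as root-module <strong>outputs</strong>; the remote state data source can read nothing else. A hypothetical example from one of the regional projects (the output and resource names are assumptions):</p>
<pre><code class="lang-yaml"># outputs.tf in the regional project (e.g. eastus2)
output "web_lb_public_ip_address" {
  description = "Public IP of the regional web load balancer"
  value       = azurerm_public_ip.web_lb_publicip.ip_address
}

output "web_lb_public_ip_id" {
  description = "Resource ID of the regional web load balancer public IP"
  value       = azurerm_public_ip.web_lb_publicip.id
}
</code></pre>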
<h2 id="heading-use-case-terraform-remote-state-data-source-in-azure-traffic-manager-setup">Use Case: Terraform Remote State Data Source in Azure Traffic Manager Setup</h2>
<p>In this project, we are deploying web servers and app servers in <strong>two different Azure regions</strong>, each fronted by its own Load Balancer with a publicly exposed IP address. To ensure seamless load balancing across these regions, we will use <strong>Azure Traffic Manager</strong>, which will reside in a separate Terraform project.</p>
<h4 id="heading-how-remote-state-data-source-is-used"><strong>How Remote State Data Source is Used</strong></h4>
<ol>
<li><p><strong>Regional Load Balancers</strong>: Each region’s Load Balancer is provisioned within its respective Terraform configuration.</p>
</li>
<li><p><strong>Azure Traffic Manager Configuration</strong>: Azure Traffic Manager, in a separate Terraform configuration, retrieves the <strong>public IPs</strong> of these Load Balancers using the <strong>Terraform Remote State Data Source</strong>.</p>
</li>
<li><p><strong>Traffic Balancing</strong>: Azure Traffic Manager distributes traffic between the regions based on its configured traffic-routing method (e.g., performance, priority, or geographic).</p>
</li>
</ol>
<h4 id="heading-key-benefits-of-this-approach"><strong>Key Benefits of this Approach</strong></h4>
<ul>
<li><p><strong>Seamless Integration</strong>: The remote state data source eliminates manual intervention by dynamically fetching the required values.</p>
</li>
<li><p><strong>Decoupled Configurations</strong>: Load Balancers and Azure Traffic Manager are managed in separate configurations, enhancing modularity and maintainability.</p>
</li>
<li><p><strong>Efficient Failover and Load Balancing</strong>: With Azure Traffic Manager, traffic is intelligently routed to the most suitable region based on configuration, improving application availability and performance.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731933828239/27de4484-5473-4a69-9b21-6de4dc00c99c.png" alt class="image--center mx-auto" /></p>
<p>We manage two distinct Terraform projects, each responsible for deploying infrastructure in a separate Azure region. These projects generate <strong>independent state files</strong> representing the resources deployed in their respective regions, i.e., eastus2 and westus2.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731934516443/a76d46f3-89a1-4fe1-b3ce-d0715ae0b7fc.png" alt class="image--center mx-auto" /></p>
<p>We have now provisioned two identical infrastructures in their respective regions. The remote state data source will refer to both state files in the storage container wherever required.</p>
<h1 id="heading-configure-remote-state-datasource">Configure Remote State Datasource</h1>
<p>The <strong>Remote State Data Source</strong> enables seamless integration across multiple Terraform projects. In this case, it will be used while creating the <strong>Azure Traffic Manager</strong> to establish connectivity between servers deployed in <strong>two different regions</strong>.</p>
<p><strong>Project-1 Data Source (EASTUS2)</strong></p>
<ul>
<li><p>Retrieves the Terraform state for resources deployed in the <strong>EASTUS2</strong> region.</p>
</li>
<li><p>Key configuration parameters include the <strong>storage account</strong>, <strong>resource group</strong>, and <strong>state file path</strong>.</p>
</li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-comment"># Project-1 Datasource(EASTUS2)</span>
<span class="hljs-string">data</span> <span class="hljs-string">"terraform_remote_state"</span> <span class="hljs-string">"project1_eastus"</span> {
    <span class="hljs-string">backend</span> <span class="hljs-string">=</span> <span class="hljs-string">"azurerm"</span>
    <span class="hljs-string">config</span> <span class="hljs-string">=</span> {
        <span class="hljs-string">storage_account_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"terraformstatestore0"</span>
        <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"terraform-storageAcc-rg"</span>
        <span class="hljs-string">container_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"tfstatefiles"</span>
        <span class="hljs-string">key</span> <span class="hljs-string">=</span> <span class="hljs-string">"project-1-eastus2-terraform.tfstate"</span>
        <span class="hljs-string">subscription_id</span> <span class="hljs-string">=</span> <span class="hljs-string">"xxxxxxx-xxxxxxxx-xxxxx"</span>
    }

}

<span class="hljs-comment"># Project-2 Datasource(WESTUS2)</span>
<span class="hljs-string">data</span> <span class="hljs-string">"terraform_remote_state"</span> <span class="hljs-string">"project2_westus"</span> {
    <span class="hljs-string">backend</span> <span class="hljs-string">=</span> <span class="hljs-string">"azurerm"</span>
    <span class="hljs-string">config</span> <span class="hljs-string">=</span> {
        <span class="hljs-string">storage_account_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"terraformstatestore0"</span>
        <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"terraform-storageAcc-rg"</span>
        <span class="hljs-string">container_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"tfstatefiles"</span>
        <span class="hljs-string">key</span> <span class="hljs-string">=</span> <span class="hljs-string">"project-2-westus2-terraform.tfstate"</span>
        <span class="hljs-string">subscription_id</span> <span class="hljs-string">=</span> <span class="hljs-string">"xxxx-xxxxxxxxxxx-xxxxxxx"</span>
    }

}
</code></pre>
<p><strong>Project-2 Data Source (WESTUS)</strong></p>
<ul>
<li><p>Retrieves the Terraform state for resources deployed in the <strong>WESTUS</strong> region.</p>
</li>
<li><p>Similar parameters are defined to connect to the <strong>remote backend</strong>.</p>
</li>
</ul>
<p>These data sources allow the <strong>Azure Traffic Manager</strong> to fetch public IPs from regional Load Balancers dynamically. Projects are managed separately, reducing complexity and enhancing modularity.</p>
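<p>For these data sources to resolve <code>web_lb_public_ip_id</code>, each regional project must export that value from its own state. A minimal sketch of the required output block in Project-1 (the public IP resource name <code>web_lbpublicip</code> is an assumption for illustration):</p>
<pre><code class="lang-yaml"># In Project-1 (and similarly Project-2): export the LB public IP resource ID
# so downstream configurations can read it via terraform_remote_state outputs.
output "web_lb_public_ip_id" {
  description = "Resource ID of the web LB public IP, consumed by the Traffic Manager project"
  value       = azurerm_public_ip.web_lbpublicip.id   # resource name is an assumption
}
</code></pre>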
<h1 id="heading-important-consideration-before-implementing-azure-traffic-manager">Important Consideration Before Implementing Azure Traffic Manager</h1>
<p>Before implementing <strong>Azure Traffic Manager</strong> with the <code>azurerm_traffic_manager_azure_endpoint</code> resource, ensure that the <code>domain_name_label</code> attribute is defined in the <strong>Load Balancer Public IP</strong>. This is a necessary step for creating the Fully Qualified Domain Name (FQDN). Without this, Traffic Manager will not be able to resolve the FQDN correctly.</p>
<p>If the <code>domain_name_label</code> is not set or cannot be used, you will need to proceed with the <code>azurerm_traffic_manager_external_endpoint</code> resource, which allows you to use external endpoints with direct IP addresses instead of relying on the FQDN.</p>
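<p>A minimal sketch of a Load Balancer public IP with the <code>domain_name_label</code> set (the label value is illustrative):</p>
<pre><code class="lang-yaml"># Public IP for the web LB with a DNS label. Azure derives the FQDN as
# label.region.cloudapp.azure.com, which the Azure endpoint resource requires.
resource "azurerm_public_ip" "web_lbpublicip" {
  name                = "web-lb-publicip"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  allocation_method   = "Static"
  sku                 = "Standard"
  domain_name_label   = "project1-eastus2-web"   # illustrative; must be unique within the region
}
</code></pre>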
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732085629106/ea5767cd-2367-411a-a677-9db597d8f8cc.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-provision-azure-traffic-manager">Provision Azure Traffic Manager</h1>
<p>Provisioning Azure Traffic Manager via Terraform requires the following Terraform resources, their respective input variables, and, most importantly, the <strong>Data Sources</strong> defined above.</p>
<ul>
<li><p>azurerm_traffic_manager_profile</p>
</li>
<li><p>azurerm_traffic_manager_azure_endpoint</p>
</li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-comment"># Resorce1: Traffic Manager Profile</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_traffic_manager_profile"</span> <span class="hljs-string">"tm_profile"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"tmprofile-${random_string.random_name.id}"</span>   <span class="hljs-comment"># Name has to be unique across azure cloud</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">traffic_routing_method</span> <span class="hljs-string">=</span> <span class="hljs-string">"Weighted"</span>
    <span class="hljs-string">dns_config</span> {
      <span class="hljs-string">relative_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"tmprofile-${random_string.random_name.id}"</span>
      <span class="hljs-string">ttl</span> <span class="hljs-string">=</span> <span class="hljs-number">100</span>
    }
    <span class="hljs-string">monitor_config</span> {
      <span class="hljs-string">protocol</span> <span class="hljs-string">=</span> <span class="hljs-string">"http"</span>
      <span class="hljs-string">port</span> <span class="hljs-string">=</span> <span class="hljs-number">80</span>
      <span class="hljs-string">path</span> <span class="hljs-string">=</span> <span class="hljs-string">"/"</span>
      <span class="hljs-string">interval_in_seconds</span> <span class="hljs-string">=</span> <span class="hljs-number">30</span>
      <span class="hljs-string">timeout_in_seconds</span> <span class="hljs-string">=</span> <span class="hljs-number">9</span>
      <span class="hljs-string">tolerated_number_of_failures</span> <span class="hljs-string">=</span> <span class="hljs-number">3</span>
    }

    <span class="hljs-string">tags</span> <span class="hljs-string">=</span> <span class="hljs-string">local.common_tags</span>

}


<span class="hljs-comment"># Traffic Manager Endpoint - Project-1-EastUs2</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_traffic_manager_azure_endpoint"</span> <span class="hljs-string">"tm_endoint_project1_eastus2"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"tm-project1-eastus2"</span>
    <span class="hljs-string">profile_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_traffic_manager_profile.tm_profile.id</span>
    <span class="hljs-string">weight</span> <span class="hljs-string">=</span> <span class="hljs-number">50</span>
    <span class="hljs-string">target_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">data.terraform_remote_state.project1_eastus.outputs.web_lb_public_ip_id</span>  <span class="hljs-comment"># This is being refered from Remote datasource</span>
}

<span class="hljs-comment"># Traffic Manager Endpoint - Project-2-WestUs2</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_traffic_manager_azure_endpoint"</span> <span class="hljs-string">"tm_endoint_project1_westus2"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"tm-project1-westus2"</span>
    <span class="hljs-string">profile_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_traffic_manager_profile.tm_profile.id</span>
    <span class="hljs-string">weight</span> <span class="hljs-string">=</span> <span class="hljs-number">50</span>
    <span class="hljs-string">target_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">data.terraform_remote_state.project2_westus.outputs.web_lb_public_ip_id</span>  <span class="hljs-comment"># This is being refered from Remote datasource</span>
}
</code></pre>
<h4 id="heading-1-traffic-manager-profile"><strong>1. Traffic Manager Profile</strong></h4>
<p>The <code>azurerm_traffic_manager_profile</code> resource defines the Traffic Manager instance. The profile manages traffic routing across regions using the <strong>Weighted Routing</strong> method, which distributes traffic based on assigned weights.</p>
<ul>
<li><p><strong>Key Configurations:</strong></p>
<ul>
<li><p><strong>Name</strong>: A unique name for the profile (uses <code>random_string</code> to ensure uniqueness globally).</p>
</li>
<li><p><strong>Traffic Routing Method</strong>: Defines the logic for routing traffic; in this case, <strong>Weighted</strong> is used to balance traffic based on specified weights.</p>
</li>
<li><p><strong>DNS Config</strong>: Sets up the DNS entry for Traffic Manager with a time-to-live (TTL) value of 100 seconds.</p>
</li>
<li><p><strong>Monitor Config</strong>: Configures health monitoring for endpoints, including the HTTP protocol, port, health check path, and failure tolerances.</p>
</li>
</ul>
</li>
</ul>
<h4 id="heading-2-traffic-manager-endpoints"><strong>2. Traffic Manager Endpoints</strong></h4>
<p>The endpoints in Traffic Manager represent the target resources (e.g., Load Balancers) that handle traffic in specific regions. Terraform’s <code>azurerm_traffic_manager_azure_endpoint</code> resource creates these connections.</p>
<p><strong>Endpoint for Project-1 (EastUS2)</strong></p>
<ul>
<li><p>Represents the Load Balancer in the <strong>EastUS2</strong> region.</p>
</li>
<li><p>Uses Terraform’s <strong>Remote State Datasource</strong> to retrieve the resource ID of the Load Balancer’s public IP (<code>web_lb_public_ip_id</code>) dynamically from the state file of the respective project.</p>
</li>
<li><p>Assigned a weight of 50, which defines the proportion of traffic routed to this endpoint.</p>
</li>
</ul>
<p><strong>Endpoint for Project-2 (WestUS)</strong></p>
<ul>
<li><p>Represents the Load Balancer in the <strong>WestUS</strong> region.</p>
</li>
<li><p>Similarly, uses the <strong>Remote State Datasource</strong> to fetch the resource ID of the public IP (<code>web_lb_public_ip_id</code>) from the state file of Project-2.</p>
</li>
<li><p>Assigned an equal weight of 50 for traffic distribution.</p>
</li>
</ul>
<h4 id="heading-how-it-works"><strong>How It Works</strong></h4>
<ol>
<li><p><strong>Traffic Manager Profile</strong>:</p>
<ul>
<li><p>Acts as a global entry point for incoming traffic.</p>
</li>
<li><p>DNS configuration ensures that the Traffic Manager routes requests efficiently based on health and weight.</p>
</li>
</ul>
</li>
<li><p><strong>Remote State Integration</strong>:</p>
<ul>
<li><p>Fetches the current resource IDs of the Load Balancer public IPs deployed in different projects (EastUS2 and WestUS2).</p>
</li>
<li><p>Enables seamless integration of regional resources without hardcoding IP addresses.</p>
</li>
</ul>
</li>
<li><p><strong>Load Balancing Logic</strong>:</p>
<ul>
<li><p>The <strong>Weighted Routing</strong> method splits traffic evenly (50:50) between the two regions, ensuring high availability.</p>
</li>
<li><p>Health monitoring ensures only healthy endpoints receive traffic.</p>
</li>
</ul>
</li>
</ol>
<h1 id="heading-traffic-manager-deployment-verification">Traffic Manager Deployment Verification</h1>
<p>The <strong>Azure Traffic Manager</strong> has been successfully deployed, and its Fully Qualified Domain Name (FQDN) is now available. The FQDN of the Traffic Manager profile can be accessed as shown below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732084359372/52ebd315-ccfc-4751-9994-62bdee868c3a.png" alt class="image--center mx-auto" /></p>
<p>Additionally, we have verified the accessibility of the Traffic Manager's FQDN, confirming that traffic is being routed properly across the configured endpoints. The successful access test is shown below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732084651398/cdfeb79f-d401-483c-9a46-6446c35ddee5.png" alt class="image--center mx-auto" /></p>
<p>This verifies that the <strong>Traffic Manager</strong> is working as expected, balancing the traffic between the specified endpoints based on the configuration.</p>
<h1 id="heading-verification-on-azure-portal">Verification on Azure Portal</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1732084802625/bb342107-b370-4295-a704-9949add71690.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Streamlining App Traffic: Deploying Azure Internal & External Load Balancers with Terraform IAC]]></title><description><![CDATA[In Azure, Internal and External Load Balancers serve as crucial network components that distribute traffic effectively, help ensure high availability, and play a significant role in securing network traffic. We have earlier deployed an External LB in...]]></description><link>https://www.devopswithritesh.in/streamlining-app-traffic-deploying-azure-internal-external-load-balancers-with-terraform-iac</link><guid isPermaLink="true">https://www.devopswithritesh.in/streamlining-app-traffic-deploying-azure-internal-external-load-balancers-with-terraform-iac</guid><category><![CDATA[Azure]]></category><category><![CDATA[TerraformwithAzure]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[Azure load Balancer]]></category><category><![CDATA[#Terraform #AWS #InfrastructureAsCode #Provisioning #Automation #CloudComputing]]></category><category><![CDATA[Devops articles]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Thu, 31 Oct 2024 15:44:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730285410513/d322f375-a4d7-41f3-a88d-0d6d2fd0689d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In Azure, <strong>Internal</strong> and <strong>External Load Balancers</strong> serve as crucial network components that distribute traffic effectively, help ensure high availability, and play a significant role in securing network traffic. We have earlier deployed an <strong>External LB</strong> in <strong>Web Subnet</strong> and in this article, we’ll be deploying an Internal LB in the <strong>App Subnet.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730281989542/36ed3983-67bd-44d0-af43-a3e1e519d536.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-internal-load-balancer">Internal Load Balancer</h1>
<p>An Internal Load Balancer (ILB) is restricted to a Virtual Network (VNet) or a subset of subnets within the VNet. It is accessible only within private IP ranges and doesn’t have a public endpoint.</p>
<p>In <strong>App Subnets</strong>, the ILB handles traffic between internal services, such as between the application layer and the database layer, without exposing the services to the internet. By keeping this traffic internal, ILBs add an extra layer of security, ensuring that sensitive data and application communications stay within the network boundaries.</p>
<p>An ILB limits the exposure of critical resources by keeping inter-service traffic within private IP ranges, reduces the attack surface since resources do not need to be publicly accessible, and helps enforce strict network segmentation, ensuring that only internal resources can interact with each other within controlled boundaries.</p>
<h2 id="heading-external-load-balancer">External Load Balancer</h2>
<p>An External Load Balancer (ELB), or public load balancer, has a public IP address and is accessible over the internet.</p>
<p>In <strong>Web Subnets</strong>, an ELB distributes incoming internet traffic across the frontend or web servers in the subnet, typically hosted in a DMZ (Demilitarized Zone) within the VNet. It handles user traffic and ensures users have uninterrupted access to web services or applications, even during high-demand periods.</p>
<p>It controls the flow of incoming requests to the web servers, acting as a buffer to prevent direct access to backend or database layers, and integrates with security features such as NSGs (Network Security Groups), DDoS protection, and web application firewalls (WAF) to further protect against common web threats.</p>
<h1 id="heading-ilb-amp-elb-complement-each-other">ILB &amp; ELB Complement Each Other</h1>
<p>Both ILBs and ELBs play essential roles in securing network traffic in a layered architecture:</p>
<ul>
<li><p><strong>Internal Isolation</strong>: ILBs ensure that backend services are isolated and can only communicate internally, reducing the chance of unauthorized access from external sources.</p>
</li>
<li><p><strong>Traffic Control</strong>: ELBs manage all incoming traffic from the internet, filtering it before reaching any critical application layers.</p>
</li>
<li><p><strong>Enhanced Security Posture</strong>: Together, they create a layered security approach where only necessary traffic flows between layers, reducing exposure and ensuring tighter security across the entire VNet.</p>
</li>
</ul>
<h3 id="heading-how-they-complement-each-other"><strong>How They Complement Each Other</strong></h3>
<p>In a typical web application architecture:</p>
<ul>
<li><p>The <strong>External Load Balancer</strong> manages internet-facing traffic for the web layer in the <strong>Web Subnet</strong>.</p>
</li>
<li><p>The <strong>Internal Load Balancer</strong> facilitates secure communication between the web layer and the <strong>App Subnet</strong>, and similarly from the <strong>App Subnet</strong> to other layers (e.g., database subnet).</p>
</li>
</ul>
<h1 id="heading-required-azure-resources-for-internal-lb">Required Azure Resources for Internal LB</h1>
<p>To provision an internal load balancer in the app subnet, several Azure resources are needed to ensure secure and efficient network traffic management without exposing the application to the public internet. The setup includes resources for NAT gateway configuration, internal load balancing, and storage for necessary configuration files. Each resource plays a role in directing internal traffic, enhancing security, and maintaining seamless communication within the application architecture.</p>
<p>This setup leverages key resources such as <strong>NAT Gateway</strong> with associated public IPs for secure internet access, <strong>Load Balancer</strong> components (backend pools, health probes, load balancing rules), and <strong>Storage Account</strong> resources for configuration file management within VMs. Together, these components provide a robust foundation for internal network traffic management within the application subnet.</p>
<h3 id="heading-1-storage-account-setup">1) Storage Account Setup</h3>
<p>To support file operations required for our deployment, a storage account and blob storage are essential. These will enable secure storage and retrieval of configuration files and scripts needed for the virtual machines managed by the Azure VM Scale Set.</p>
<p>Primary resources include:</p>
<ul>
<li><p><strong>azurerm_storage_account</strong>: Creates the storage account to store files and configuration data.</p>
</li>
<li><p><strong>azurerm_storage_container</strong>: Organizes and contains the blob files within the storage account.</p>
</li>
<li><p><strong>azurerm_storage_blob</strong>: Manages individual files to be stored and accessed for configurations.</p>
</li>
</ul>
<p>This setup ensures that necessary files are accessible to VM instances, enhancing automation and configuration management across the scale set.</p>
<h3 id="heading-2-nat-gateway-resource">2) NAT Gateway Resource</h3>
<p>Since the Internal Load Balancer (ILB) will reside in the app subnet without a public IP, a NAT gateway is necessary to enable outbound internet connectivity for resources in this subnet. The NAT gateway will route public internet traffic through a designated public IP, ensuring secure and controlled access.</p>
<p>Key resources include:</p>
<ul>
<li><p><strong>azurerm_public_ip</strong>: Provides the public IP address for the NAT gateway.</p>
</li>
<li><p><strong>azurerm_nat_gateway</strong>: Sets up the NAT gateway instance to manage outbound traffic.</p>
</li>
<li><p><strong>azurerm_nat_gateway_public_ip_association</strong>: Links the NAT gateway with the public IP.</p>
</li>
<li><p><strong>azurerm_subnet_nat_gateway_association</strong>: Associates the NAT gateway with the app subnet.</p>
</li>
</ul>
<p>This configuration ensures that resources behind the ILB can securely access the internet while maintaining the private, internal-only accessibility for incoming traffic.</p>
<h3 id="heading-3-internal-load-balancer-resource">3) Internal Load Balancer Resource</h3>
<p>We've previously utilized resources to configure an external load balancer within the web subnet. Now, we’ll proceed by setting up the internal load balancer (ILB) in the app subnet, enhancing traffic control and security within our network.</p>
<p>Key resources in use include:</p>
<ul>
<li><p><strong>azurerm_lb</strong>: Defines the load balancer instance.</p>
</li>
<li><p><strong>azurerm_lb_backend_address_pool</strong>: Configures backend pools for VM traffic distribution.</p>
</li>
<li><p><strong>azurerm_lb_probe</strong>: Sets up health probes to monitor VM health.</p>
</li>
<li><p><strong>azurerm_lb_rule</strong>: Manages the load balancing rules for specific traffic handling.</p>
</li>
<li><p><strong>azurerm_network_interface_backend_address_pool_association</strong>: Links VM NICs to the backend address pool.</p>
</li>
</ul>
<p>This setup will enable efficient traffic routing and help in isolating internal applications, ensuring enhanced security and optimized performance across the application tier.</p>
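<p>Putting these resources together, a minimal sketch of an internal Load Balancer in the app subnet (resource names, port 8080, and the subnet reference are illustrative assumptions, not the article’s exact code):</p>
<pre><code class="lang-yaml"># Internal LB: no public IP; the frontend gets a private IP from the app subnet.
resource "azurerm_lb" "app_lb" {
  name                = "${local.resource_name_prefix}-app-lb"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Standard"

  frontend_ip_configuration {
    name                          = "app-lb-privateip-1"
    subnet_id                     = azurerm_subnet.app_subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_lb_backend_address_pool" "app_lb_backend_pool" {
  name            = "app-backend"
  loadbalancer_id = azurerm_lb.app_lb.id
}

resource "azurerm_lb_probe" "app_lb_probe" {
  name            = "tcp-probe"
  loadbalancer_id = azurerm_lb.app_lb.id
  protocol        = "Tcp"
  port            = 8080   # illustrative app port
}

resource "azurerm_lb_rule" "app_lb_rule" {
  name                           = "app-rule"
  loadbalancer_id                = azurerm_lb.app_lb.id
  protocol                       = "Tcp"
  frontend_port                  = 8080
  backend_port                   = 8080
  frontend_ip_configuration_name = "app-lb-privateip-1"
  backend_address_pool_ids       = [azurerm_lb_backend_address_pool.app_lb_backend_pool.id]
  probe_id                       = azurerm_lb_probe.app_lb_probe.id
}
</code></pre>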
<h1 id="heading-storage-account-amp-blob-container">Storage Account &amp; Blob Container</h1>
<p>Before provisioning the load balancer, it’s essential to set up a storage account to store configuration files that will be utilized by the virtual machines provisioned through Azure Virtual Machine Scale Sets (VMSS). This process involves first creating a storage account and then establishing a storage container to securely store and organize these configuration files for seamless access during VM initialization.</p>
<h3 id="heading-storage-account-input-variables">Storage Account Input Variables</h3>
<pre><code class="lang-yaml"><span class="hljs-string">variable</span> <span class="hljs-string">"storage_account_name"</span> {
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Name of the storage accoun"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>

}
<span class="hljs-string">variable</span> <span class="hljs-string">"storage_account_tier"</span> {
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Storage account tier"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>

}
<span class="hljs-string">variable</span> <span class="hljs-string">"storage_account_replication_type"</span> {
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Storage Account Replication Type"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>

}
<span class="hljs-string">variable</span> <span class="hljs-string">"storage_account_kind"</span> {
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Storage account kind"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>

}
<span class="hljs-string">variable</span> <span class="hljs-string">"static_websit_index_document"</span> {
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"static website index document"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>

}
<span class="hljs-string">variable</span> <span class="hljs-string">"static_website_error_404_document"</span> {
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"static website error 404"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>

}
</code></pre>
<h3 id="heading-storage-account-resource">Storage Account Resource</h3>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_storage_account"</span> <span class="hljs-string">"storage_account_lb"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${var.storage_account_name}${random_string.random_name.id}"</span>
    <span class="hljs-string">account_replication_type</span> <span class="hljs-string">=</span> <span class="hljs-string">var.storage_account_replication_type</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">access_tier</span> <span class="hljs-string">=</span> <span class="hljs-string">var.storage_account_tier</span>
    <span class="hljs-string">account_kind</span> <span class="hljs-string">=</span> <span class="hljs-string">var.storage_account_kind</span>
    <span class="hljs-string">account_tier</span> <span class="hljs-string">=</span> <span class="hljs-string">var.storage_account_tier</span>

    <span class="hljs-string">static_website</span> {
      <span class="hljs-string">index_document</span> <span class="hljs-string">=</span> <span class="hljs-string">var.static_websit_index_document</span>
      <span class="hljs-string">error_404_document</span> <span class="hljs-string">=</span> <span class="hljs-string">var.static_website_error_404_document</span>
    }

}

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_storage_container"</span> <span class="hljs-string">"storage_container"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-httpd-files-container"</span>
    <span class="hljs-string">storage_account_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_storage_account.storage_account_lb.name</span>
    <span class="hljs-string">container_access_type</span> <span class="hljs-string">=</span> <span class="hljs-string">"private"</span>

}
<span class="hljs-string">locals</span> {
  <span class="hljs-string">httpd_files</span> <span class="hljs-string">=</span> [<span class="hljs-string">"app1.conf"</span>]
}

<span class="hljs-comment"># Resource to upload file to the container</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_storage_blob"</span> <span class="hljs-string">"storage_container_blob"</span> {
    <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> <span class="hljs-string">local.httpd_files</span>
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"upload_${each.value}_file"</span>
    <span class="hljs-string">storage_account_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_storage_account.storage_account_lb.name</span>
    <span class="hljs-string">storage_container_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_storage_container.storage_container.name</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">"Block"</span>
    <span class="hljs-string">source</span> <span class="hljs-string">=</span> <span class="hljs-string">"${path.module}/app-scripts/${each. Value}"</span>

}
</code></pre>
<ul>
<li><p><strong>Storage Account (</strong><code>azurerm_storage_account</code>):</p>
<ul>
<li><p>Configured with a unique name using <code>random_string</code>, this storage account supports replication and access tier based on provided variables.</p>
</li>
<li><p>The <code>static_website</code> block enables static website hosting, specifying the index and error documents.</p>
</li>
</ul>
</li>
<li><p><strong>Storage Container (</strong><code>azurerm_storage_container</code>):</p>
<ul>
<li>A private container named with a resource prefix is created within the storage account to securely store files required by the application.</li>
</ul>
</li>
<li><p><strong>Storage Blob (</strong><code>azurerm_storage_blob</code>):</p>
<ul>
<li>Files listed in <code>httpd_files</code> are uploaded as blobs to the storage container. Each file is sourced from the local module path, allowing application-specific configurations (e.g., <code>app1.conf</code>) to be accessible within the VMSS environment.</li>
</ul>
</li>
</ul>
<h3 id="heading-storage-account-outputs">Storage Account Outputs</h3>
<pre><code class="lang-yaml"><span class="hljs-string">output</span> <span class="hljs-string">"storage_account_primary_access_key"</span> {
    <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_storage_account.storage_account_lb.primary_access_key</span>
    <span class="hljs-string">sensitive</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>

}
<span class="hljs-string">output</span> <span class="hljs-string">"storage_account_primary_web_endpoint"</span> {
    <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_storage_account.storage_account_lb.primary_web_endpoint</span>

}
<span class="hljs-string">output</span> <span class="hljs-string">"storage_account_primary_web_host"</span> {
    <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_storage_account.storage_account_lb.primary_web_host</span>

}
<span class="hljs-string">output</span> <span class="hljs-string">"storage_account_name"</span> {
    <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_storage_account.storage_account_lb.name</span>

}
</code></pre>
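<p>The uploaded configuration files are typically consumed by the VM Scale Set at boot. A hedged sketch of how the blob URL can be wired into the VMSS bootstrap (the template file and the <code>custom_data</code> wiring are assumptions, not code from this article):</p>
<pre><code class="lang-yaml"># Illustrative: expose the uploaded blob's URL for use in a bootstrap script.
# The for_each key is the file name, so the instance is indexed by "app1.conf".
locals {
  app1_conf_url = azurerm_storage_blob.storage_container_blob["app1.conf"].url
}

# custom_data must be base64-encoded when attached to a scale set, e.g.:
# custom_data = base64encode(templatefile("${path.module}/app-scripts/bootstrap.sh.tpl", {
#   app1_conf_url = local.app1_conf_url   # template file name is an assumption
# }))
</code></pre>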
<h1 id="heading-nat-gateway-amp-association-with-app-subnet">NAT Gateway &amp; Association with App Subnet</h1>
<p>In the next steps, we’ll configure a <strong>NAT Gateway</strong> to provide secure outbound internet access for the application subnet. This involves:</p>
<ol>
<li><p><strong>Creating a Public IP Resource</strong>: A public IP will be created specifically for the NAT Gateway, so all outbound traffic leaves through a single, controlled address.</p>
</li>
<li><p><strong>Deploying the NAT Gateway</strong>: This gateway will route outbound traffic from the app subnet to the internet, while the subnet itself remains closed to direct inbound public access.</p>
</li>
<li><p><strong>Associating NAT Gateway with Resources</strong>: Finally, we’ll link the NAT Gateway to the created public IP and associate it with the App subnet, so outbound connections from the subnet are routed through the gateway.</p>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-comment">#Create Public IP for Nat Gateway</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_public_ip"</span> <span class="hljs-string">"nat_gateway_publicip"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-natgtw_publicip"</span>
    <span class="hljs-string">allocation_method</span> <span class="hljs-string">=</span> <span class="hljs-string">"Static"</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">sku</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard"</span>

}
<span class="hljs-comment"># Create NAT Gateway</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_nat_gateway"</span> <span class="hljs-string">"nat_gateway_appsnet"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-app-svc-nat-gateway"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
}

<span class="hljs-comment"># Associate NAT gateway and public ip</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_nat_gateway_public_ip_association"</span> <span class="hljs-string">"associate_natgtw_publicip"</span> {
    <span class="hljs-string">nat_gateway_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_nat_gateway.nat_gateway_appsnet.id</span>
    <span class="hljs-string">public_ip_address_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_public_ip.nat_gateway_publicip.id</span>

}
<span class="hljs-comment"># Associate App Subnet and Azure NAT Gateway</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet_nat_gateway_association"</span> <span class="hljs-string">"associate_natgtw_app_snet"</span>{
    <span class="hljs-string">subnet_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.app_subnet.id</span>
    <span class="hljs-string">nat_gateway_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_nat_gateway.nat_gateway_appsnet.id</span>

}
</code></pre>
<h1 id="heading-app-vmss-amp-nsg">App VMSS &amp; NSG</h1>
<p>In this section, we’ll set up the <strong>App Virtual Machine Scale Set (VMSS)</strong> similarly to the Web VMSS configuration. This App VMSS will be equipped with a dedicated <strong>Network Security Group (NSG)</strong> to define precise inbound and outbound traffic rules, enhancing network security for application resources.</p>
<p>The App VMSS will then be fronted by an <strong>internal load balancer</strong>, ensuring that internal traffic is efficiently managed. Additionally, it will be supported by a <strong>NAT Gateway</strong> to handle secure public connections, while still restricting direct access to the application subnet.</p>
<h3 id="heading-nsg-resource">NSG Resource</h3>
<pre><code class="lang-yaml"><span class="hljs-comment"># Create NSG using Terraform Dynamic Block</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_group"</span> <span class="hljs-string">"app_vmss_nsg"</span> {
  <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-app-vmss-nsg"</span>
  <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
  <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
  <span class="hljs-string">dynamic</span> <span class="hljs-string">"security_rule"</span> {
    <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> <span class="hljs-string">var.app_vmss_nsg_inbound_ports</span>
    <span class="hljs-string">content</span> {
      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"inbound-rule-${security_rule.key}"</span>
      <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Inbound-rule-${security_rule.key}"</span>
      <span class="hljs-string">priority</span> <span class="hljs-string">=</span> <span class="hljs-string">sum(</span>[<span class="hljs-number">100</span>,<span class="hljs-string">security_rule.key</span>]<span class="hljs-string">)</span>
      <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Inbound"</span>
      <span class="hljs-string">access</span> <span class="hljs-string">=</span> <span class="hljs-string">"Allow"</span>
      <span class="hljs-string">protocol</span> <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span>
      <span class="hljs-string">source_port_range</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
      <span class="hljs-string">destination_port_range</span> <span class="hljs-string">=</span> <span class="hljs-string">security_rule.value</span>
      <span class="hljs-string">source_address_prefix</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
      <span class="hljs-string">destination_address_prefix</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
    }

  }
}
</code></pre>
<ul>
<li><p><strong>NSG Resource</strong>:</p>
<ul>
<li>The <code>azurerm_network_security_group</code> resource creates the NSG, named based on a specified prefix and applied to the App VMSS. It is located in the same resource group and region as defined in the configuration.</li>
</ul>
</li>
<li><p><strong>Dynamic Inbound Rules</strong>:</p>
<ul>
<li><p>The <code>dynamic "security_rule"</code> block generates multiple inbound rules by iterating over <code>var.app_vmss_nsg_inbound_ports</code>.</p>
</li>
<li><p>Each rule allows <strong>TCP traffic</strong> on specific ports defined in <code>app_vmss_nsg_inbound_ports</code>.</p>
</li>
<li><p>The rule configuration includes:</p>
<ul>
<li><p><strong>Name</strong> and <strong>Description</strong>: Generated dynamically with a unique identifier for each rule.</p>
</li>
<li><p><strong>Priority</strong>: Calculated based on the rule's index, ensuring proper order without conflicts.</p>
</li>
<li><p><strong>Access</strong>: All rules are set to “Allow.”</p>
</li>
<li><p><strong>Source/Destination Port and Address Ranges</strong>: Allows traffic from any source to any destination on the specified ports.</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
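<p>The dynamic block above assumes an input variable holding the list of inbound ports. A minimal definition might look like this (the variable name matches the code, but the default ports shown are illustrative):</p>
<pre><code class="lang-yaml"># Illustrative variable definition consumed by the dynamic "security_rule" block
variable "app_vmss_nsg_inbound_ports" {
  description = "Inbound TCP ports to allow on the App VMSS NSG"
  type        = list(string)
  default     = ["22", "80"] # example ports; adjust to your application
}
</code></pre>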
<h3 id="heading-app-vmss-resource">App VMSS Resource</h3>
<pre><code class="lang-yaml"><span class="hljs-comment"># Resource VMSS(Virtual machine scale sets)</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_linux_virtual_machine_scale_set"</span> <span class="hljs-string">"app_vmss"</span> {
  <span class="hljs-string">name</span>                 <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-app-vmss"</span>
  <span class="hljs-string">admin_username</span>       <span class="hljs-string">=</span> <span class="hljs-string">var.linux_admin_username</span>
  <span class="hljs-string">computer_name_prefix</span> <span class="hljs-string">=</span> <span class="hljs-string">"vmss-app"</span>
  <span class="hljs-string">location</span>             <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
  <span class="hljs-string">sku</span>                  <span class="hljs-string">=</span> <span class="hljs-string">"Standard_DS1_v2"</span>
  <span class="hljs-string">resource_group_name</span>  <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

  <span class="hljs-string">instances</span> <span class="hljs-string">=</span> <span class="hljs-number">2</span> <span class="hljs-comment"># Manually defining the number of instances. Hence, it is called manual scaling</span>

  <span class="hljs-string">upgrade_mode</span> <span class="hljs-string">=</span> <span class="hljs-string">"Automatic"</span> <span class="hljs-comment"># VMs are upgraded automatically once associated with the LB</span>

  <span class="hljs-string">admin_ssh_key</span> {
    <span class="hljs-string">username</span>   <span class="hljs-string">=</span> <span class="hljs-string">var.linux_admin_username</span>
    <span class="hljs-string">public_key</span> <span class="hljs-string">=</span> <span class="hljs-string">file("$</span>{<span class="hljs-string">path.module</span>}<span class="hljs-string">/ssh-keys/terraform-azure.pub")</span>

  }
  <span class="hljs-string">os_disk</span> {
    <span class="hljs-string">caching</span>              <span class="hljs-string">=</span> <span class="hljs-string">"ReadWrite"</span>
    <span class="hljs-string">storage_account_type</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard_LRS"</span>
  }
  <span class="hljs-string">source_image_reference</span> {
    <span class="hljs-string">publisher</span> <span class="hljs-string">=</span> <span class="hljs-string">"RedHat"</span>
    <span class="hljs-string">offer</span>     <span class="hljs-string">=</span> <span class="hljs-string">"RHEL"</span>
    <span class="hljs-string">sku</span>       <span class="hljs-string">=</span> <span class="hljs-string">"83-gen2"</span>
    <span class="hljs-string">version</span>   <span class="hljs-string">=</span> <span class="hljs-string">"latest"</span>
  }
  <span class="hljs-string">network_interface</span> {
    <span class="hljs-string">name</span>                      <span class="hljs-string">=</span> <span class="hljs-string">"vmss-nic"</span>
    <span class="hljs-string">primary</span>                   <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
    <span class="hljs-string">network_security_group_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.app_vmss_nsg.id</span>

    <span class="hljs-string">ip_configuration</span> {
      <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"internal"</span>
      <span class="hljs-string">primary</span>   <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
      <span class="hljs-string">subnet_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.app_subnet.id</span>
      <span class="hljs-comment"># here the VMSS is now associated with Internal LoadBalancer.</span>
      <span class="hljs-string">load_balancer_backend_address_pool_ids</span> <span class="hljs-string">=</span> [<span class="hljs-string">azurerm_lb_backend_address_pool.app_internal_lb_address_pool.id</span>]
    }

  }
  <span class="hljs-string">custom_data</span> <span class="hljs-string">=</span> <span class="hljs-string">filebase64("$</span>{<span class="hljs-string">path.module</span>}<span class="hljs-string">/app-script/appvm-script.sh")</span> <span class="hljs-comment"># One way of passing a custom startup script</span>


}
</code></pre>
<ul>
<li><p><strong>VMSS Configuration</strong>:</p>
<ul>
<li><p>Creates a VM scale set named after a defined prefix for application VMs.</p>
</li>
<li><p>Sets a specific SKU (<code>Standard_DS1_v2</code>) and deploys <strong>2 VM instances</strong> (manual scaling).</p>
</li>
<li><p>Configures <strong>automatic upgrade mode</strong>, allowing VMs to update automatically when changes are applied.</p>
</li>
</ul>
</li>
<li><p><strong>Admin Configuration</strong>:</p>
<ul>
<li>Sets the <strong>administrator username</strong> and SSH key for secure access. The public key is read from the specified path for the VMSS.</li>
</ul>
</li>
<li><p><strong>OS Disk and Image</strong>:</p>
<ul>
<li><p>Configures the OS disk with <code>Standard_LRS</code> storage and sets caching to <code>ReadWrite</code>.</p>
</li>
<li><p>Uses the <strong>Red Hat Enterprise Linux (RHEL) image</strong>, specifying the publisher, offer, SKU, and version.</p>
</li>
</ul>
</li>
<li><p><strong>Network Configuration</strong>:</p>
<ul>
<li><p>Attaches each VM to the <strong>App subnet</strong> and associates it with an <strong>Internal Load Balancer</strong> for internal traffic handling.</p>
</li>
<li><p>Associates the <strong>NSG (Network Security Group)</strong> for controlled inbound and outbound traffic to the application layer.</p>
</li>
</ul>
</li>
<li><p><strong>Custom Data</strong>:</p>
<ul>
<li>Uses the <code>custom_data</code> field to pass initialization scripts to each VM instance, which can be used to configure applications or services during the VM setup.</li>
</ul>
</li>
</ul>
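<p>The contents of <code>appvm-script.sh</code> are not shown here; as an illustration, the same <code>custom_data</code> could be supplied inline with a heredoc (the package and placeholder page below are assumptions, not the original script):</p>
<pre><code class="lang-yaml"># Illustrative inline alternative to filebase64() for custom_data
custom_data = base64encode(&lt;&lt;-EOT
  #!/bin/bash
  # Install and start a web server, then publish a placeholder page
  sudo dnf install -y httpd
  sudo systemctl enable --now httpd
  echo "Hello from the App VMSS" | sudo tee /var/www/html/index.html
EOT
)
</code></pre>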
<h3 id="heading-auto-scaling-profile">Auto-Scaling Profile</h3>
<p>Configuring an autoscaling profile is essential for managing load and ensuring optimal performance in Virtual Machine Scale Sets (VMSS). The following Terraform code creates an autoscaling profile for the VMSS, using both CPU and memory usage as scaling triggers.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_monitor_autoscale_setting"</span> <span class="hljs-string">"app_vmss_autoscale"</span> {
  <span class="hljs-string">name</span>                <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-app-vmss-autoscale-profiles"</span>
  <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
  <span class="hljs-string">location</span>            <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
  <span class="hljs-string">target_resource_id</span>  <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.app_vmss.id</span>

  <span class="hljs-string">profile</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"default"</span>
    <span class="hljs-comment"># Capacity Block</span>
    <span class="hljs-string">capacity</span> {
      <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-number">2</span>
      <span class="hljs-string">minimum</span> <span class="hljs-string">=</span> <span class="hljs-number">2</span>
      <span class="hljs-string">maximum</span> <span class="hljs-string">=</span> <span class="hljs-number">6</span>
    }
    <span class="hljs-comment">###########  START: Percentage CPU Metric Rules  ###########    </span>
    <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Percentage CPU"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.app_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">75</span>
      }
    }

    <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Percentage CPU"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.app_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">25</span>
      }
    }
    <span class="hljs-comment">###########  END: Percentage CPU Metric Rules   ###########    </span>

    <span class="hljs-comment">###########  START: Available Memory Bytes Metric Rules  ###########    </span>
    <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Available Memory Bytes"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.app_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">1073741824</span> <span class="hljs-comment"># Add 1 VM when available memory drops below 1 GB</span>
      }
    }

    <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Available Memory Bytes"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.app_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">2147483648</span> <span class="hljs-comment"># Remove 1 VM when available memory exceeds 2 GB</span>
      }
    }
    <span class="hljs-comment">###########  END: Available Memory Bytes Metric Rules  ###########  </span>
  } <span class="hljs-comment"># End of Profile-1</span>
}
</code></pre>
<ul>
<li><p><strong>Resource Definition</strong>:</p>
<ul>
<li><p><strong>Resource</strong>: <code>azurerm_monitor_autoscale_setting</code></p>
</li>
<li><p><strong>Name</strong>: Configured with a prefix for easy identification as <code>app-vmss-autoscale-profiles</code></p>
</li>
<li><p><strong>Target Resource</strong>: Associates with the VMSS for the application layer using <code>target_resource_id</code></p>
</li>
</ul>
</li>
<li><p><strong>Profile Specifications</strong>:</p>
<ul>
<li><p><strong>Capacity Block</strong>:</p>
<ul>
<li><p><strong>Default</strong>: 2 instances</p>
</li>
<li><p><strong>Minimum</strong>: 2 instances</p>
</li>
<li><p><strong>Maximum</strong>: 6 instances (allowing the VMSS to scale between these limits)</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Metric-Based Scaling Rules</strong>:</p>
<ul>
<li><p><strong>CPU Usage (Percentage CPU)</strong>:</p>
<ul>
<li><p><strong>Scale-Out</strong>: Adds one instance when the average CPU exceeds 75% over a 5-minute window.</p>
</li>
<li><p><strong>Scale-In</strong>: Removes one instance when the average CPU falls below 25% over a 5-minute window.</p>
</li>
</ul>
</li>
<li><p><strong>Memory Usage (Available Memory Bytes)</strong>:</p>
<ul>
<li><p><strong>Scale-Out</strong>: Adds one instance when available memory is below 1GB.</p>
</li>
<li><p><strong>Scale-In</strong>: Removes one instance when available memory exceeds 2GB.</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
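<p>The raw byte thresholds above (<code>1073741824</code> and <code>2147483648</code>) are 1 GB and 2 GB expressed in bytes. An optional <code>locals</code> block (not part of the original code) can make this self-documenting:</p>
<pre><code class="lang-yaml"># Optional: name the memory thresholds instead of using magic numbers
locals {
  one_gb_in_bytes = 1 * 1024 * 1024 * 1024 # 1073741824
  two_gb_in_bytes = 2 * 1024 * 1024 * 1024 # 2147483648
}
# ...then reference local.one_gb_in_bytes / local.two_gb_in_bytes
# in the memory metric_trigger thresholds.
</code></pre>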
<h1 id="heading-provision-of-internal-load-balancer">Provisioning the Internal Load Balancer</h1>
<p>This section covers the setup of an internal load balancer for the Application Virtual Machine Scale Set (App VMSS). By routing traffic through the App Subnet and fronting it with a NAT gateway, this configuration enhances the security and management of internal network traffic.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_lb"</span> <span class="hljs-string">"app_internal_lb"</span> {
  <span class="hljs-string">name</span>                <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-app-internal-lb"</span>
  <span class="hljs-string">location</span>            <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
  <span class="hljs-string">sku</span>                 <span class="hljs-string">=</span> <span class="hljs-string">"Standard"</span>
  <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
  <span class="hljs-string">frontend_ip_configuration</span> {
    <span class="hljs-string">name</span>                          <span class="hljs-string">=</span> <span class="hljs-string">"app-lb-privateip-1"</span>
    <span class="hljs-string">subnet_id</span>                     <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.app_subnet.id</span>
    <span class="hljs-string">private_ip_address_allocation</span> <span class="hljs-string">=</span> <span class="hljs-string">"Static"</span>
    <span class="hljs-string">private_ip_address</span>            <span class="hljs-string">=</span> <span class="hljs-string">"10.1.11.241"</span>
    <span class="hljs-string">private_ip_address_version</span>    <span class="hljs-string">=</span> <span class="hljs-string">"IPv4"</span>

  }

}
<span class="hljs-comment"># Create backend pool</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_lb_backend_address_pool"</span> <span class="hljs-string">"app_internal_lb_address_pool"</span> {
  <span class="hljs-string">name</span>            <span class="hljs-string">=</span> <span class="hljs-string">"app-backend"</span>
  <span class="hljs-string">loadbalancer_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.app_internal_lb.id</span>

}
<span class="hljs-comment"># Create LB Probe</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_lb_probe"</span> <span class="hljs-string">"app_internal_lb_probe"</span> {
  <span class="hljs-string">name</span>            <span class="hljs-string">=</span> <span class="hljs-string">"tcp-probe"</span>
  <span class="hljs-string">protocol</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span>
  <span class="hljs-string">port</span>            <span class="hljs-string">=</span> <span class="hljs-number">80</span>
  <span class="hljs-string">loadbalancer_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.app_internal_lb.id</span>

}
<span class="hljs-comment"># Create LB Rule</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_lb_rule"</span> <span class="hljs-string">"app_internal_lb_rule"</span> {
  <span class="hljs-string">name</span>                           <span class="hljs-string">=</span> <span class="hljs-string">"app-app1-rule"</span>
  <span class="hljs-string">protocol</span>                       <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span>
  <span class="hljs-string">backend_port</span>                   <span class="hljs-string">=</span> <span class="hljs-number">80</span>
  <span class="hljs-string">frontend_port</span>                  <span class="hljs-string">=</span> <span class="hljs-number">80</span>
  <span class="hljs-string">frontend_ip_configuration_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.app_internal_lb.frontend_ip_configuration</span>[<span class="hljs-number">0</span>]<span class="hljs-string">.name</span>
  <span class="hljs-string">backend_address_pool_ids</span>       <span class="hljs-string">=</span> [<span class="hljs-string">azurerm_lb_backend_address_pool.app_internal_lb_address_pool.id</span>]
  <span class="hljs-string">probe_id</span>                       <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb_probe.app_internal_lb_probe.id</span>
  <span class="hljs-string">loadbalancer_id</span>                <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.app_internal_lb.id</span>

}
</code></pre>
<h3 id="heading-resource-breakdown">Resource Breakdown:</h3>
<ol>
<li><p><strong>Internal Load Balancer</strong>:</p>
<ul>
<li><p><strong>Resource</strong>: <code>azurerm_lb</code></p>
</li>
<li><p><strong>Configuration</strong>:</p>
<ul>
<li><p><strong>Name</strong>: Specifies a unique identifier with a custom prefix.</p>
</li>
<li><p><strong>Frontend IP</strong>: Sets up a private static IP within the App Subnet for internal-only access.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Backend Pool</strong>:</p>
<ul>
<li><p><strong>Resource</strong>: <code>azurerm_lb_backend_address_pool</code></p>
</li>
<li><p><strong>Configuration</strong>:</p>
<ul>
<li>Connects the App VMSS instances to the load balancer.</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Health Probe</strong>:</p>
<ul>
<li><p><strong>Resource</strong>: <code>azurerm_lb_probe</code></p>
</li>
<li><p><strong>Configuration</strong>:</p>
<ul>
<li><p><strong>Protocol</strong>: TCP</p>
</li>
<li><p><strong>Port</strong>: 80, enabling traffic health checks for connected instances.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Load Balancer Rule</strong>:</p>
<ul>
<li><p><strong>Resource</strong>: <code>azurerm_lb_rule</code></p>
</li>
<li><p><strong>Configuration</strong>:</p>
<ul>
<li>Routes traffic on TCP port 80, forwarding it from the frontend IP to the backend pool with health checks provided by the probe.</li>
</ul>
</li>
</ul>
</li>
</ol>
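<p>To make the load balancer's private IP easy to retrieve after <code>terraform apply</code>, an output like the following could be added (an optional addition, not part of the original configuration):</p>
<pre><code class="lang-yaml"># Optional: expose the internal LB frontend IP as a Terraform output
output "app_internal_lb_private_ip" {
  description = "Private IP of the internal load balancer in the App subnet"
  value       = azurerm_lb.app_internal_lb.frontend_ip_configuration[0].private_ip_address
}
</code></pre>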
<h1 id="heading-resource-verification-on-azure-portal">Resource Verification on Azure Portal</h1>
<h3 id="heading-storage-account">Storage Account</h3>
<p>The designated storage account has been provisioned with a blob container containing the <code>app.conf</code> file, which was uploaded as expected.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730388858245/6ed5d817-1395-448f-9209-22a8ef1b7e5a.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-nat-gateway">NAT Gateway</h3>
<p>The NAT Gateway has been deployed and is now associated with the application subnet.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730388167463/7635e405-0d0e-4f97-bec8-234a63f93648.png" alt class="image--center mx-auto" /></p>
<p>Additionally, the outbound IP configuration has been set up with the specified public IP, enabling external communication for resources within the App subnet.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730388274925/076cb439-498e-4217-ad3a-6e0ff10caf98.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-app-vmss">APP VMSS</h3>
<p>The App VMSS has been successfully deployed within the application subnet, as highlighted in the configuration.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730387800121/4eaeed28-3078-42da-90d3-9b5faee3a41b.png" alt class="image--center mx-auto" /></p>
<p>A scaling policy has been applied to manage resources efficiently based on demand.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730387856424/189ecf49-5e44-4b92-b7c4-a62d6bb47847.png" alt class="image--center mx-auto" /></p>
<p>Custom networking for the App VMSS has also been configured to support secure inbound and outbound traffic, layered on top of the default Network Security Group (NSG) settings for the App subnet.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730388015818/9a901eac-07cf-4d30-98ef-723399d2e661.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-load-balancers">Load Balancers</h3>
<p>An internal load balancer has been provisioned, with backend pools configured and associated for optimal load distribution.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730388592447/fc5d88a4-47f4-45d2-85d6-35e60b8faad8.png" alt class="image--center mx-auto" /></p>
<p>The frontend IP has been sourced directly from the App subnet, ensuring seamless connectivity and traffic routing within the internal network.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730388678174/278d1954-992f-4678-8ebe-acf36a555c29.png" alt class="image--center mx-auto" /></p>
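<p>The internal load balancer configuration above can be sketched in Terraform as follows. This is an assumption-laden illustration rather than the exact code behind this deployment; names like <code>app_internal_lb</code> and <code>app_subnet</code> are hypothetical.</p>
<pre><code class="lang-hcl"># Illustrative sketch: internal (private) Standard Load Balancer
resource "azurerm_lb" "app_internal_lb" {
  name                = "app-internal-lb"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Standard"

  # Private frontend IP drawn from the App subnet (no public IP)
  frontend_ip_configuration {
    name                          = "app-lb-frontend"
    subnet_id                     = azurerm_subnet.app_subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

# Backend pool that the App VMSS instances register into
resource "azurerm_lb_backend_address_pool" "app_lb_pool" {
  name            = "app-backend-pool"
  loadbalancer_id = azurerm_lb.app_internal_lb.id
}
</code></pre>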
]]></content:encoded></item><item><title><![CDATA[Simplifying Auto-Scaling with Terraform: Deploy VMSS Efficiently]]></title><description><![CDATA[Azure Virtual Machine Scale Sets (VMSS) – a powerful Azure service designed to automatically manage, scale, and balance multiple virtual machines to support varying workloads. Azure VMSS is a service that allows you to deploy and manage a set of iden...]]></description><link>https://www.devopswithritesh.in/simplifying-auto-scaling-with-terraform-deploy-vmss-efficiently</link><guid isPermaLink="true">https://www.devopswithritesh.in/simplifying-auto-scaling-with-terraform-deploy-vmss-efficiently</guid><category><![CDATA[Devops]]></category><category><![CDATA[TerraformwithAzure]]></category><category><![CDATA[autoscaling]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Create a VM Scale Set (VMSS) from Existing VM image stored Compute Gallery]]></category><category><![CDATA[#Azure #CloudComputing #DevOps #WebDevelopment #Tech #Scalability #AppServices #VMSS #GitHub #LinkedInLearning]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Sun, 27 Oct 2024 09:54:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729925725605/3217717f-1ca1-4c55-94ed-a442c7c354fc.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><strong>Azure Virtual Machine Scale Sets (VMSS)</strong> is a powerful Azure service designed to automatically manage, scale, and balance multiple virtual machines to support varying workloads. A VMSS lets you deploy and manage a set of identical virtual machines (VMs) as a single group; these VMs are automatically distributed across fault and update domains, ensuring high availability. VMSS is ideal for situations where you need to support varying application loads without manual intervention, offering a seamless approach to scale-out (adding instances) and scale-in (removing instances) operations.</p>
<p>We’ll explore the deployment of VMSS in a web-tier environment using Terraform, configuring both manual and autoscaling options to illustrate the flexibility and control VMSS brings to cloud applications.</p>
<h1 id="heading-manual-scaling-vs-auto-scaling">Manual Scaling vs Auto-Scaling</h1>
<ul>
<li><p><strong>Manual Scaling</strong>: With manual scaling, you set a predefined instance count for the scale set. VMSS will then deploy that specific number of VMs, which is ideal for consistent, predictable workloads. This allows for precise resource planning and control over your deployment size.</p>
</li>
<li><p><strong>Autoscaling</strong>: In autoscaling, VMSS dynamically adjusts the number of VM instances based on specific metrics such as CPU utilization, memory usage, or custom metrics defined by the application. This enables you to optimize costs by only using the resources you need when demand is high, then automatically scaling down during low-demand periods.</p>
</li>
</ul>
<h1 id="heading-manual-scaling-vmss">Manual Scaling - VMSS</h1>
<p>In this section, we’ll dive into <strong>manual scaling with Azure Virtual Machine Scale Set (VMSS)</strong>, where you explicitly define the number of virtual machines (VMs) the scale set should deploy. Here, instead of provisioning individual Linux VMs for our web tier, we will create a VMSS configured to spin up <strong>two VMs</strong>.</p>
<p>Once the VMSS is deployed, it will be paired with an <strong>Azure Standard Load Balancer</strong>, which will direct incoming traffic to the backend pool associated with the VM instances. This setup ensures that traffic is evenly distributed across the VMs in the scale set, optimizing availability and reliability for web applications.</p>
<p>By defining the VM count in advance, we take full advantage of VMSS to manage scalability without the need to manage individual VM resources.</p>
<pre><code class="lang-hcl"># Resource: VMSS (Virtual Machine Scale Set) for the web tier
resource "azurerm_linux_virtual_machine_scale_set" "web_vmss" {
  name                 = "${local.resource_name_prefix}-web-vmss"
  computer_name_prefix = "vmss-web"
  resource_group_name  = azurerm_resource_group.rg.name
  location             = azurerm_resource_group.rg.location
  sku                  = "Standard_DS1_v2"
  admin_username       = var.linux_admin_username

  instances = 2 # Manually defining the number of instances, hence "manual scaling"

  upgrade_mode = "Automatic" # Instances are upgraded automatically once associated with the LB

  admin_ssh_key {
    username   = var.linux_admin_username
    public_key = file("${path.module}/ssh-keys/terraform-azure.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "RedHat"
    offer     = "RHEL"
    sku       = "83-gen2"
    version   = "latest"
  }

  network_interface {
    name                      = "vmss-nic"
    primary                   = true
    network_security_group_id = azurerm_network_security_group.web_vmss_nsg.id

    ip_configuration {
      name      = "internal"
      primary   = true
      subnet_id = azurerm_subnet.web_subnet.id
      # Associate the VMSS with the load balancer's backend pool
      load_balancer_backend_address_pool_ids = [azurerm_lb_backend_address_pool.lb_backend_address_pool.id]
    }
  }

  custom_data = filebase64("${path.module}/app-script/webvm.sh") # One way of passing a custom init script
}
</code></pre>
<p>This Terraform configuration creates a <strong>Virtual Machine Scale Set (VMSS)</strong> on Azure for the web tier, with key settings for instance scaling, network configuration, and custom automation.</p>
<ol>
<li><p><strong>VM Scale Set Basics</strong>:</p>
<ul>
<li>A VMSS named <code>web_vmss</code> is defined, set to manually scale to <strong>two instances</strong>. Each instance uses a Linux OS and a specified SKU (<code>Standard_DS1_v2</code>) for consistent performance.</li>
</ul>
</li>
<li><p><strong>Auto-Upgrade &amp; OS Settings</strong>:</p>
<ul>
<li><p>The VMSS is configured to automatically upgrade instances, so any changes apply without manual intervention.</p>
</li>
<li><p>The OS is <strong>Red Hat Enterprise Linux</strong> (RHEL), with settings for disk caching and storage.</p>
</li>
</ul>
</li>
<li><p><strong>Network Configuration</strong>:</p>
<ul>
<li><p>Each VM in the scale set connects to an existing <strong>subnet</strong> and is secured by a Network Security Group (NSG).</p>
</li>
<li><p>It also associates with an <strong>Azure Load Balancer</strong> backend pool, distributing traffic across VMs for better reliability and load management.</p>
</li>
</ul>
</li>
<li><p><strong>Custom Initialization Script</strong>:</p>
<ul>
<li>A custom script (<code>webvm.sh</code>) initializes each VM instance with the required settings or software. The script is passed base64-encoded via <code>filebase64()</code> for direct use as <code>custom_data</code> in Terraform.</li>
</ul>
</li>
</ol>
<p>This setup creates a flexible and scalable backend for applications, allowing easy scaling, security, and traffic distribution across multiple instances in the web tier.</p>
<h2 id="heading-resource-verification-on-azure-portal-post-apply">Resource Verification on Azure Portal post Apply</h2>
<p>After applying the Terraform configuration, the <strong>Virtual Machine Scale Set (VMSS)</strong> reached the desired state, with two virtual machines successfully up and running.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729956026058/4b776e82-d52f-4d7a-a8dd-4b9aff32df2c.png" alt class="image--center mx-auto" /></p>
<p>As discussed, <strong>manual scaling</strong> has been implemented here, where the number of instances is explicitly defined to maintain consistency and control over resources.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729957201686/af81c5d4-208a-4130-9539-6d1a10086447.png" alt class="image--center mx-auto" /></p>
<p>Additionally, the <strong>VMSS is linked to the Azure Load Balancer</strong>, effectively distributing traffic between the instances. This setup enables access to the VMs through the Load Balancer's public IP, enhancing availability and reliability for end-users.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729956183225/06c453fd-5e53-4be9-bf5a-fdc25d38e942.png" alt class="image--center mx-auto" /></p>
<p>Finally, here is the <strong>content served by the two VMs</strong> managed by the VMSS, showcasing the setup's functionality and the seamless distribution across instances.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729956290838/28e387fc-cbe3-4564-9a57-e0a5fc936e49.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-auto-scaling-in-azure-vmss">Auto-Scaling in Azure VMSS</h1>
<p><strong>Auto-scaling</strong> in Azure is a feature that dynamically adjusts the number of resources—such as virtual machines (VMs) in a scale set or App Service instances—based on real-time demand and pre-defined criteria. Auto-scaling helps optimize costs and ensures application availability by automatically scaling out to handle high loads or scaling in to reduce resources during low-demand periods.</p>
<p>Auto-scaling is a stand-alone resource in Azure that can be attached to a VMSS, App Services, and other resources. In this section, we'll implement auto-scaling with VMSS.</p>
<h2 id="heading-auto-scaling-profiles">Auto-Scaling Profiles</h2>
<p>There are three auto-scaling profile types available in Azure:</p>
<ul>
<li><h3 id="heading-auto-scaling-default-profile">Auto-Scaling Default Profile</h3>
<p>  This profile allows you to configure automatic scaling based on performance metrics. It adjusts the number of instances in response to real-time resource usage, ensuring that your application can handle fluctuating traffic without manual intervention.</p>
</li>
<li><h3 id="heading-auto-scaling-recurrence-profile">Auto-Scaling Recurrence Profile</h3>
<p>  With this profile, you can set up a schedule for scaling your resources at specific times. This is particularly useful for applications that experience predictable traffic patterns, allowing you to allocate resources efficiently during peak hours and scale down during off-peak times.</p>
</li>
<li><h3 id="heading-auto-scaling-fixed-profile">Auto-Scaling Fixed Profile</h3>
<p>  This profile enables you to define a static number of instances for your application. It provides a consistent level of resources, making it ideal for workloads that require steady performance without the need for scaling based on demand.</p>
</li>
</ul>
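<p>The recurrence and fixed profiles above map to additional <code>profile</code> blocks inside an <code>azurerm_monitor_autoscale_setting</code> resource. The sketch below is illustrative only; the profile names, capacities, timezone, and dates are assumed values, not part of this article's deployment.</p>
<pre><code class="lang-hcl"># Illustrative profile blocks (placed inside an azurerm_monitor_autoscale_setting resource)

# Recurrence profile: scheduled capacity for weekday business hours
profile {
  name = "Weekday Business Hours Profile"
  capacity {
    default = 4
    minimum = 4
    maximum = 8
  }
  recurrence {
    timezone = "India Standard Time"
    days     = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
    hours    = [9] # profile takes effect at 09:00
    minutes  = [0]
  }
}

# Fixed profile: static capacity pinned to a specific date window
profile {
  name = "Year-End Fixed Profile"
  capacity {
    default = 5
    minimum = 5
    maximum = 5
  }
  fixed_date {
    timezone = "India Standard Time"
    start    = "2024-12-24T00:00:00Z"
    end      = "2024-12-26T00:00:00Z"
  }
}
</code></pre>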
<h2 id="heading-auto-scaling-rules-metrics">Auto-Scaling Rules Metrics</h2>
<p>Below are the metric rules we can create when setting up autoscaling profiles.</p>
<pre><code>1. Percentage CPU metric rules
   1. Scale-out rule: increase VMs by 1 when CPU usage is greater than 75%
   2. Scale-in rule: decrease VMs by 1 when CPU usage is lower than 25%
2. Available Memory Bytes metric rules
   1. Scale-out rule: increase VMs by 1 when available memory is less than 1 GB (in bytes)
   2. Scale-in rule: decrease VMs by 1 when available memory is greater than 2 GB (in bytes)
3. LB SYN Count metric rules (used here purely to fire scale-out and scale-in events for testing, and to show that VMSS autoscale rules can also be driven by metrics from other resources, such as a Load Balancer)
   1. Scale-out rule: increase VMs by 1 when LB SYN Count is greater than 10 connections (average)
   2. Scale-in rule: decrease VMs by 1 when LB SYN Count is less than 10 connections (average)
</code></pre>
<h1 id="heading-auto-scaling-profile-creation">Auto-Scaling Profile Creation</h1>
<p>By leveraging the <code>azurerm_monitor_autoscale_setting</code> resource, we will establish a flexible scaling strategy based on several key metrics. Here we will create all three profile types discussed above, one by one.</p>
<h2 id="heading-profile-1-default-profile">Profile-1: Default Profile</h2>
<ul>
<li><p><strong>Introduction to VMSS Auto-Scaling</strong>: An overview of Azure auto-scaling as a means to dynamically adjust the number of virtual machines (VMs) based on real-time resource utilization, traffic patterns, and specific performance <strong>thresholds</strong>.</p>
</li>
<li><p><strong>Creating the Auto-Scale Resource</strong>:</p>
<ul>
<li><strong>Notification Block</strong>: Setting up automatic email notifications for subscription administrators and co-administrators to keep them informed of scale actions.</li>
</ul>
</li>
<li><p><strong>Defining the Default Scaling Profile</strong>:</p>
<ul>
<li><p><strong>Capacity Settings</strong>: Initializing with a default of 2 instances and setting bounds for minimum (2) and maximum (6) instances, providing flexibility as demand changes.</p>
</li>
<li><p><strong>CPU Usage-Based Scaling Rules</strong>: Configuring scale-in and scale-out actions based on CPU utilization:</p>
<ul>
<li><p>Scale-out is triggered if CPU usage exceeds 75% (adds 1 instance).</p>
</li>
<li><p>Scale-in occurs when CPU usage drops below 25% (removes 1 instance).</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Memory Availability-Based Scaling Rules</strong>:</p>
<ul>
<li><p>Scale-out adds an instance if available memory is under 1GB.</p>
</li>
<li><p>Scale-in removes an instance when available memory exceeds 2GB.</p>
</li>
</ul>
</li>
<li><p><strong>Load Balancer SYN Count Rules</strong>:</p>
<ul>
<li><p>Testing scale actions by monitoring SYN requests to the Load Balancer.</p>
</li>
<li><p>Scaling actions are triggered based on incoming requests, demonstrating how demand-based metrics can trigger VMSS scaling.</p>
</li>
</ul>
</li>
</ul>
<pre><code class="lang-hcl"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_monitor_autoscale_setting"</span> <span class="hljs-string">"web_vmss_autoscale"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-web-vmss-autoscale-profiles"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">target_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>

    <span class="hljs-comment">#### Notification Block#####</span>
    <span class="hljs-string">notification</span> {
      <span class="hljs-string">email</span> {
        <span class="hljs-string">send_to_subscription_administrator</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
        <span class="hljs-string">send_to_subscription_co_administrator</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>

      }
    }
    <span class="hljs-comment">#### Profile-1( Default Profile Block ) #####</span>
    <span class="hljs-string">profile</span> {
      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Default Profile"</span>
      <span class="hljs-string">capacity</span> {
        <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-number">2</span>
        <span class="hljs-string">minimum</span> <span class="hljs-string">=</span> <span class="hljs-number">2</span>
        <span class="hljs-string">maximum</span> <span class="hljs-string">=</span> <span class="hljs-number">6</span>
      }

    <span class="hljs-comment">### Percentage CPU Metric Rule Begins #####</span>
    <span class="hljs-comment">### Scale-Out Rule  ####</span>
    <span class="hljs-string">rule</span> {
    <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span> <span class="hljs-comment"># Scale-out means always increase</span>
        <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span> <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
    }
    <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Percentage CPU"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>
        <span class="hljs-string">time_grain</span> <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span> <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span> <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span> <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span> <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span> <span class="hljs-string">=</span> <span class="hljs-number">75</span>
        }
    }

    <span class="hljs-comment">### Scale-In Rule ####</span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span> <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Percentage CPU"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>
        <span class="hljs-string">time_grain</span> <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span> <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span> <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span> <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span> <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span> <span class="hljs-string">=</span> <span class="hljs-number">25</span>
      }
    }

    <span class="hljs-comment">### END OF Percentage CPU Metric Rule ###</span>

  <span class="hljs-comment">###########  START: Available Memory Bytes Metric Rules  ###########    </span>
  <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }            
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Available Memory Bytes"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>        
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">1073741824</span> <span class="hljs-comment"># Increase 1 VM when Memory In Bytes is less than 1GB</span>
      }
    }
    <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }        
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Available Memory Bytes"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>                
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">2147483648</span> <span class="hljs-comment"># Decrease 1 VM when Memory In Bytes is Greater than 2GB</span>
      }
    }
    <span class="hljs-comment">###########  END: Available Memory Bytes Metric Rules  ###########  </span>

  <span class="hljs-comment">###########  START: LB SYN Count Metric Rules - Just to Test scale-in, scale-out  ###########    </span>
  <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }      
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"SYNCount"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.web_lb.id</span> 
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Microsoft.Network/loadBalancers"</span>        
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">10</span> <span class="hljs-comment"># Scale out when the average SYN count to the LB exceeds 10</span>
      }
    }
    <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }      
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"SYNCount"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.web_lb.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Microsoft.Network/loadBalancers"</span>                
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">10</span>
      }
    }
    <span class="hljs-comment">###########  END: LB SYN Count Metric Rules  ###########    </span>

    }   <span class="hljs-comment"># End of Default Profile Block (Profile-1)</span>


}
</code></pre>
<h3 id="heading-verifying-profile-1-on-azure-portal">Verifying Profile-1 on Azure Portal</h3>
<p>After applying the configuration, the portal shows that custom auto-scale has been enabled.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730018751649/31abf745-9d9d-4829-80e2-66929c5167ba.png" alt class="image--center mx-auto" /></p>
<p>The respective rules have been added to the profile, as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730018928194/9709f88d-c068-40aa-8e99-b8251fe2887a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-profile-2-recurrence-profile">Profile-2: Recurrence Profile</h2>
<p>This recurrence profile applies on weekdays (Monday to Friday) and uses CPU, memory, and load balancer metrics to drive its scaling actions.</p>
<pre><code class="lang-yaml">  <span class="hljs-comment">##### Profile-2: Recurrence Profile - Week Days</span>
  <span class="hljs-string">profile</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"profile-2-weekdays"</span>
  <span class="hljs-comment"># Capacity Block     </span>
    <span class="hljs-string">capacity</span> {
      <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-number">4</span>
      <span class="hljs-string">minimum</span> <span class="hljs-string">=</span> <span class="hljs-number">4</span>
      <span class="hljs-string">maximum</span> <span class="hljs-string">=</span> <span class="hljs-number">20</span>
    }
  <span class="hljs-comment"># Recurrence Block for Week Days (5 days)</span>
    <span class="hljs-string">recurrence</span> {
      <span class="hljs-string">timezone</span> <span class="hljs-string">=</span> <span class="hljs-string">"India Standard Time"</span>
      <span class="hljs-string">days</span> <span class="hljs-string">=</span> [<span class="hljs-string">"Monday"</span>, <span class="hljs-string">"Tuesday"</span>, <span class="hljs-string">"Wednesday"</span>, <span class="hljs-string">"Thursday"</span>, <span class="hljs-string">"Friday"</span>]
      <span class="hljs-string">hours</span> <span class="hljs-string">=</span> [<span class="hljs-number">0</span>]
      <span class="hljs-string">minutes</span> <span class="hljs-string">=</span> [<span class="hljs-number">0</span>]      
    }    
<span class="hljs-comment">###########  START: Percentage CPU Metric Rules  ###########    </span>
  <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }            
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Percentage CPU"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>        
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">75</span>
      }
    }

  <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }        
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Percentage CPU"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>                
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">25</span>
      }
    }
<span class="hljs-comment">###########  END: Percentage CPU Metric Rules   ###########    </span>

<span class="hljs-comment">###########  START: Available Memory Bytes Metric Rules  ###########    </span>
  <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }            
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Available Memory Bytes"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>        
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">1073741824</span> <span class="hljs-comment"># Scale out by 1 VM when available memory drops below 1 GB</span>
      }
    }

  <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }        
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Available Memory Bytes"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>                
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">2147483648</span> <span class="hljs-comment"># Scale in by 1 VM when available memory exceeds 2 GB</span>
      }
    }
<span class="hljs-comment">###########  END: Available Memory Bytes Metric Rules  ###########  </span>


<span class="hljs-comment">###########  START: LB SYN Count Metric Rules - Just to Test scale-in, scale-out  ###########    </span>
  <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }      
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"SYNCount"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.web_lb.id</span> 
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Microsoft.Network/loadBalancers"</span>        
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">10</span> <span class="hljs-comment"># Scale out when the average SYN count to the LB exceeds 10</span>
      }
    }
  <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }      
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"SYNCount"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.web_lb.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Microsoft.Network/loadBalancers"</span>                
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">10</span>
      }
    }
<span class="hljs-comment">###########  END: LB SYN Count Metric Rules  ###########    </span>
  } 
  <span class="hljs-comment">##### End of Profile-2</span>
</code></pre>
<ol>
<li><p><strong>Profile-2 Overview</strong>:</p>
<ul>
<li><p><strong>Name</strong>: Profile-2 is named "profile-2-weekdays".</p>
</li>
<li><p><strong>Capacity Constraints</strong>: Configures the scale set to maintain a minimum of 4 VMs, a maximum of 20, and a default of 4 VMs, adjusting dynamically within these boundaries.</p>
</li>
</ul>
</li>
<li><p><strong>Recurrence Block</strong>:</p>
<ul>
<li><p>Applies this profile from <strong>Monday through Friday</strong>, starting at <strong>00:00</strong> <strong>India Standard Time</strong>.</p>
</li>
<li><p>Keeps these capacity limits and rules in effect on weekdays, aligning resource allocation with anticipated weekday load patterns.</p>
</li>
</ul>
</li>
<li><p><strong>Scaling Rules for CPU Usage (Percentage CPU Metric Rules)</strong>:</p>
<ul>
<li><p><strong>Scale-Out Rule</strong>: Increases the instance count by 1 if average CPU usage exceeds 75%.</p>
</li>
<li><p><strong>Scale-In Rule</strong>: Decreases the instance count by 1 if average CPU usage falls below 25%.</p>
</li>
<li><p>Each rule has a 5-minute cooldown period, allowing sufficient time for load adjustment before further scaling.</p>
</li>
</ul>
</li>
<li><p><strong>Scaling Rules for Memory Availability (Available Memory Bytes Metric Rules)</strong>:</p>
<ul>
<li><p><strong>Scale-Out Rule</strong>: Triggers scaling out by adding 1 instance when available memory falls below 1GB.</p>
</li>
<li><p><strong>Scale-In Rule</strong>: Triggers scaling in by removing 1 instance when available memory exceeds 2GB.</p>
</li>
<li><p>This approach ensures resource availability by adding VMs as memory demand increases and reducing VMs when memory is ample.</p>
</li>
</ul>
</li>
<li><p><strong>SYN Count-Based Scaling Rules (Load Balancer Metric Rules)</strong>:</p>
<ul>
<li><p>Monitors load balancer <strong>SYN request count</strong> for scale adjustments, beneficial for high-traffic scenarios.</p>
</li>
<li><p><strong>Scale-Out Rule</strong>: Adds 1 VM if the SYN request count exceeds 10.</p>
</li>
<li><p><strong>Scale-In Rule</strong>: Removes 1 VM if the SYN request count falls below 10, stabilizing resource usage during traffic fluctuations.</p>
</li>
</ul>
</li>
</ol>
<p>Each section within Profile 2 enables adaptive scaling based on real-time traffic and workload patterns, enhancing availability, optimizing costs, and aligning resource usage with demand.</p>
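<p>The memory thresholds above are raw byte counts. As a small readability improvement (a sketch, not part of the original configuration), the powers of two can be derived once in a Terraform <code>locals</code> block and referenced from each rule:</p>
<pre><code class="lang-yaml">locals {
  one_gib = 1 * 1024 * 1024 * 1024 # 1073741824 bytes = 1 GB
  two_gib = 2 * 1024 * 1024 * 1024 # 2147483648 bytes = 2 GB
}

# In the metric_trigger blocks:
#   threshold = local.one_gib   # scale out below 1 GB of available memory
#   threshold = local.two_gib   # scale in above 2 GB of available memory
</code></pre>
<p>This keeps the intent of each threshold visible without anyone having to count digits in a ten-digit literal.</p>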
<h3 id="heading-verifying-profile-2-on-azure-portal">Verifying Profile-2 on Azure Portal</h3>
<p>After applying, the recurrence profile (Profile-2) is now in action, with all of its rules added as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730019761675/f4b4577d-34db-4242-bb79-46e9c1176895.png" alt class="image--center mx-auto" /></p>
<p>While Profile-2 is in effect, it takes precedence, meaning that the Default Profile will not execute concurrently. This ensures that scaling adjustments occur according to the current profile’s configuration, based on weekday and demand metrics, optimizing resource alignment with operational needs.</p>
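<p>The precedence behavior can be sketched in a few lines of Python (a hypothetical helper, not part of the Terraform configuration): profile-2 covers Monday to Friday and profile-3 covers Saturday and Sunday, so the two recurrence profiles together shadow the default profile on every day of the week.</p>
<pre><code class="lang-python">from datetime import datetime

def active_profile(now):
    """Return which autoscale profile is in effect at `now` (IST).

    Recurrence profiles take precedence over the default profile,
    and these two recurrence profiles cover all seven days.
    """
    if now.weekday() in range(5):  # Monday=0 ... Friday=4
        return "profile-2-weekdays"
    return "profile-3-weekends"

print(active_profile(datetime(2024, 10, 30)))  # a Wednesday: prints "profile-2-weekdays"
</code></pre>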
<h2 id="heading-profile-3-recurrence-profile-weekend">Profile-3: Recurrence Profile (Weekend)</h2>
<p>This profile manages auto-scaling specifically for weekends, when usage is typically lower, keeping a smaller capacity range while reusing the same CPU, memory, and load balancer metric rules.</p>
<pre><code class="lang-yaml">  <span class="hljs-string">profile</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"profile-3-weekends"</span>
  <span class="hljs-comment"># Capacity Block     </span>
    <span class="hljs-string">capacity</span> {
      <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-number">3</span>
      <span class="hljs-string">minimum</span> <span class="hljs-string">=</span> <span class="hljs-number">3</span>
      <span class="hljs-string">maximum</span> <span class="hljs-string">=</span> <span class="hljs-number">6</span>
    }
  <span class="hljs-comment"># Recurrence Block for Weekends (2 days)</span>
    <span class="hljs-string">recurrence</span> {
      <span class="hljs-string">timezone</span> <span class="hljs-string">=</span> <span class="hljs-string">"India Standard Time"</span>
      <span class="hljs-string">days</span> <span class="hljs-string">=</span> [<span class="hljs-string">"Saturday"</span>, <span class="hljs-string">"Sunday"</span>]
      <span class="hljs-string">hours</span> <span class="hljs-string">=</span> [<span class="hljs-number">0</span>]
      <span class="hljs-string">minutes</span> <span class="hljs-string">=</span> [<span class="hljs-number">0</span>]      
    }    
<span class="hljs-comment">###########  START: Percentage CPU Metric Rules  ###########    </span>
  <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }            
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Percentage CPU"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>        
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">75</span>
      }
    }

  <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }        
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Percentage CPU"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>                
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">25</span>
      }
    }
<span class="hljs-comment">###########  END: Percentage CPU Metric Rules   ###########    </span>

<span class="hljs-comment">###########  START: Available Memory Bytes Metric Rules  ###########    </span>
  <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }            
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Available Memory Bytes"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>        
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">1073741824</span> <span class="hljs-comment"># Increase 1 VM when Memory In Bytes is less than 1GB</span>
      }
    }

  <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }        
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Available Memory Bytes"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>                
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">2147483648</span> <span class="hljs-comment"># Decrease 1 VM when Memory In Bytes is Greater than 2GB</span>
      }
    }
<span class="hljs-comment">###########  END: Available Memory Bytes Metric Rules  ###########  </span>

<span class="hljs-comment">###########  START: LB SYN Count Metric Rules - Just to Test scale-in, scale-out  ###########    </span>
  <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }      
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"SYNCount"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.lb_web.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Microsoft.Network/loadBalancers"</span>        
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">10</span> <span class="hljs-comment"># 10 requests to an LB</span>
      }
    }
  <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }      
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"SYNCount"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.lb_web.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Microsoft.Network/loadBalancers"</span>                
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">10</span>
      }
    }
<span class="hljs-comment">###########  END: LB SYN Count Metric Rules  ###########    </span>
} <span class="hljs-comment"># End of Profile-3</span>
</code></pre>
<ol>
<li><p><strong>Capacity Settings</strong></p>
<ul>
<li><p><strong>Default</strong>: 3 instances</p>
</li>
<li><p><strong>Minimum</strong>: 3 instances</p>
</li>
<li><p><strong>Maximum</strong>: 6 instances</p>
</li>
</ul>
</li>
<li><p><strong>Recurrence Settings</strong></p>
<ul>
<li><p><strong>Days</strong>: Saturday and Sunday</p>
</li>
<li><p><strong>Time</strong>: 00:00 hours (India Standard Time)</p>
</li>
</ul>
</li>
<li><p><strong>Scaling Rules</strong></p>
<ul>
<li><p><strong>CPU Utilization (Percentage CPU)</strong></p>
<ul>
<li><p><strong>Scale-Out</strong>: Increase by 1 instance when average CPU &gt; 75% over a 5-minute period.</p>
</li>
<li><p><strong>Scale-In</strong>: Decrease by 1 instance when average CPU &lt; 25% over a 5-minute period.</p>
</li>
</ul>
</li>
<li><p><strong>Memory Usage (Available Memory Bytes)</strong></p>
<ul>
<li><p><strong>Scale-Out</strong>: Increase by 1 instance when available memory is &lt; 1GB.</p>
</li>
<li><p><strong>Scale-In</strong>: Decrease by 1 instance when available memory is &gt; 2GB.</p>
</li>
</ul>
</li>
<li><p><strong>Load Balancer Requests (SYNCount)</strong></p>
<ul>
<li><p><strong>Scale-Out</strong>: Increase by 1 instance when SYN requests exceed 10 over a 5-minute period.</p>
</li>
<li><p><strong>Scale-In</strong>: Decrease by 1 instance when SYN requests fall below 10 over a 5-minute period.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<p>This weekend-specific profile ensures that resources are efficiently managed with minimal instances during off-peak times, responding dynamically to weekend traffic demands through auto-scaling adjustments.</p>
<h3 id="heading-verifying-profile-3-on-azure-portal">Verifying Profile-3 on Azure Portal</h3>
<p>The <strong>Weekend Profile</strong> (Profile-3) has now been successfully applied, configured with specific scaling rules to operate exclusively on Saturdays and Sundays. This ensures that the auto-scaling behavior aligns with weekend traffic patterns, repeating every Saturday and Sunday as specified in the recurrence settings.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730020869354/ed653be4-053f-4e82-9da9-91cea798565c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730020748070/2f993bd6-86d4-428a-845e-de32fad452c0.png" alt class="image--center mx-auto" /></p>
<p>With Profile-3 in action, scaling adjustments will adhere to the defined metrics, keeping resources optimized during weekend periods.</p>
<h2 id="heading-profile-4-fixed-profile-specific-dates">Profile-4: Fixed Profile (Specific Dates)</h2>
<p>Profile-4 is a targeted scaling profile that activates only on a specified date, giving you the flexibility to scale for the anticipated needs of a particular day or event.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Profile-4: Fixed Profile for a Specific Day</span>
  <span class="hljs-string">profile</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"profile-4-fixed-profile"</span>
  <span class="hljs-comment"># Capacity Block     </span>
    <span class="hljs-string">capacity</span> {
      <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-number">5</span>
      <span class="hljs-string">minimum</span> <span class="hljs-string">=</span> <span class="hljs-number">5</span>
      <span class="hljs-string">maximum</span> <span class="hljs-string">=</span> <span class="hljs-number">20</span>
    }
  <span class="hljs-comment"># Fixed Block for a specific day</span>
    <span class="hljs-string">fixed_date</span> {
      <span class="hljs-string">timezone</span> <span class="hljs-string">=</span> <span class="hljs-string">"India Standard Time"</span>
      <span class="hljs-string">start</span>    <span class="hljs-string">=</span> <span class="hljs-string">"2021-08-16T00:00:00Z"</span>  <span class="hljs-comment"># CHANGE TO THE DATE YOU ARE TESTING</span>
      <span class="hljs-string">end</span>      <span class="hljs-string">=</span> <span class="hljs-string">"2021-08-16T23:59:59Z"</span>  <span class="hljs-comment"># CHANGE TO THE DATE YOU ARE TESTING</span>
    }  
<span class="hljs-comment">###########  START: Percentage CPU Metric Rules  ###########    </span>
  <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }            
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Percentage CPU"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>        
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">75</span>
      }
    }

  <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }        
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Percentage CPU"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>                
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">25</span>
      }
    }
<span class="hljs-comment">###########  END: Percentage CPU Metric Rules   ###########    </span>

<span class="hljs-comment">###########  START: Available Memory Bytes Metric Rules  ###########    </span>
  <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }            
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Available Memory Bytes"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>        
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">1073741824</span> <span class="hljs-comment"># Increase 1 VM when Memory In Bytes is less than 1GB</span>
      }
    }

  <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }        
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Available Memory Bytes"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine_scale_set.web_vmss.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"microsoft.compute/virtualmachinescalesets"</span>                
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">2147483648</span> <span class="hljs-comment"># Decrease 1 VM when Memory In Bytes is Greater than 2GB</span>
      }
    }
<span class="hljs-comment">###########  END: Available Memory Bytes Metric Rules  ###########  </span>


<span class="hljs-comment">###########  START: LB SYN Count Metric Rules - Just to Test scale-in, scale-out  ###########    </span>
  <span class="hljs-comment">## Scale-Out </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Increase"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }      
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"SYNCount"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.lb_web.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Microsoft.Network/loadBalancers"</span>        
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"GreaterThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">10</span> <span class="hljs-comment"># 10 requests to an LB</span>
      }
    }
  <span class="hljs-comment">## Scale-In </span>
    <span class="hljs-string">rule</span> {
      <span class="hljs-string">scale_action</span> {
        <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Decrease"</span>
        <span class="hljs-string">type</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ChangeCount"</span>
        <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-number">1</span>
        <span class="hljs-string">cooldown</span>  <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
      }      
      <span class="hljs-string">metric_trigger</span> {
        <span class="hljs-string">metric_name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"SYNCount"</span>
        <span class="hljs-string">metric_resource_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.lb_web.id</span>
        <span class="hljs-string">metric_namespace</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Microsoft.Network/loadBalancers"</span>                
        <span class="hljs-string">time_grain</span>         <span class="hljs-string">=</span> <span class="hljs-string">"PT1M"</span>
        <span class="hljs-string">statistic</span>          <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">time_window</span>        <span class="hljs-string">=</span> <span class="hljs-string">"PT5M"</span>
        <span class="hljs-string">time_aggregation</span>   <span class="hljs-string">=</span> <span class="hljs-string">"Average"</span>
        <span class="hljs-string">operator</span>           <span class="hljs-string">=</span> <span class="hljs-string">"LessThan"</span>
        <span class="hljs-string">threshold</span>          <span class="hljs-string">=</span> <span class="hljs-number">10</span>
      }
    }
<span class="hljs-comment">###########  END: LB SYN Count Metric Rules  ###########    </span>
} <span class="hljs-comment"># End of Profile-4</span>
</code></pre>
<ol>
<li><p><strong>Profile Name</strong>: "profile-4-fixed-profile"</p>
</li>
<li><p><strong>Capacity Block</strong>:</p>
<ul>
<li><p>Sets a default, minimum, and maximum capacity for the VM scale set:</p>
<ul>
<li><p>Default: 5 instances</p>
</li>
<li><p>Minimum: 5 instances</p>
</li>
<li><p>Maximum: 20 instances</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Fixed Date Block</strong>:</p>
<ul>
<li><p>Defines the profile's fixed timeframe:</p>
<ul>
<li><p>Time Zone: "India Standard Time"</p>
</li>
<li><p>Start: The specific start date and time for this profile.</p>
</li>
<li><p>End: The specific end date and time for this profile.</p>
</li>
</ul>
</li>
<li><p>Example: <code>start = "2021-08-16T00:00:00Z"</code> and <code>end = "2021-08-16T23:59:59Z"</code> to activate for a single day.</p>
</li>
</ul>
</li>
<li><p><strong>Percentage CPU Metric Rules</strong>:</p>
<ul>
<li><p>Defines conditions for scaling based on CPU usage:</p>
<ul>
<li><p><strong>Scale-Out Rule</strong>: Adds 1 VM if the average CPU utilization exceeds 75% over 5 minutes.</p>
</li>
<li><p><strong>Scale-In Rule</strong>: Removes 1 VM if the average CPU utilization is below 25% over 5 minutes.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Available Memory Bytes Metric Rules</strong>:</p>
<ul>
<li><p>Defines conditions for scaling based on available memory:</p>
<ul>
<li><p><strong>Scale-Out Rule</strong>: Adds 1 VM if available memory falls below 1GB.</p>
</li>
<li><p><strong>Scale-In Rule</strong>: Removes 1 VM if available memory is above 2GB.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Load Balancer SYN Count Metric Rules</strong>:</p>
<ul>
<li><p>Defines conditions for scaling based on the SYN count of requests to the Load Balancer:</p>
<ul>
<li><p><strong>Scale-Out Rule</strong>: Adds 1 VM if the SYN count exceeds 10.</p>
</li>
<li><p><strong>Scale-In Rule</strong>: Removes 1 VM if the SYN count is below 10.</p>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<p>Each of these metric-based rules is applied within the specified fixed date, ensuring that resources are scaled precisely as needed for the chosen timeframe.</p>
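<p>As a quick mental model, all four profiles live inside a single <code>azurerm_monitor_autoscale_setting</code> resource attached to the scale set, and Azure resolves overlaps by precedence: a fixed-date profile wins over a recurring profile, which wins over the default profile. The sketch below is abridged (profile bodies elided, and the setting name and the labels for the earlier profiles are illustrative, not the exact ones used in this series):</p>
<pre><code class="lang-yaml"># Abridged structural sketch -- profile bodies elided with "..."
resource "azurerm_monitor_autoscale_setting" "web_vmss_autoscale" {
  name                = "web-vmss-autoscale"   # illustrative name
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  target_resource_id  = azurerm_linux_virtual_machine_scale_set.web_vmss.id

  profile { ... }   # Profile-1: default (no recurrence / fixed_date block)
  profile { ... }   # Profile-2: recurrence block (e.g. weekdays)
  profile { ... }   # Profile-3: recurrence block for Saturday and Sunday
  profile { ... }   # Profile-4: fixed_date block for one specific day
}
</code></pre>
<p>Because Profile-4 carries a <code>fixed_date</code> block, it overrides the recurring and default profiles for the duration of that date.</p>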
<h3 id="heading-verifying-profile-4-on-azure-portal">Verifying Profile 4 on Azure Portal</h3>
<p>The fixed-date profile, <strong>Profile-4</strong>, has now been updated and set to the current date. As it is configured for a specific day, it has taken precedence and is currently active, overriding the previous profiles.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730022102381/93fb5088-4bc4-463a-bfa5-c78b684f8cdd.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Deploying Azure Standard Load Balancer in Web-Tier via Terraform]]></title><description><![CDATA[In this article, we’ll configure an Azure Load Balancer in front of our web servers, eliminating the need for public IP addresses on the individual servers. Instead, the load balancer will be exposed to handle incoming traffic and distribute it acros...]]></description><link>https://www.devopswithritesh.in/deploying-azure-standard-load-balancer-in-web-tier-via-terraform</link><guid isPermaLink="true">https://www.devopswithritesh.in/deploying-azure-standard-load-balancer-in-web-tier-via-terraform</guid><category><![CDATA[Azure]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[TerraformwithAzure]]></category><category><![CDATA[Load Balancing]]></category><category><![CDATA[Load Balancer]]></category><category><![CDATA[Azure load Balancer]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[#90daysofdevops]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Thu, 17 Oct 2024 15:32:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729093125995/35ccf2da-a454-44bc-98d1-c979ffb8f567.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, we’ll configure an <strong><em>Azure Load Balancer</em></strong> in front of our web servers, eliminating the need for public IP addresses on the individual servers. Instead, the load balancer will be exposed to handle incoming traffic and distribute it across the web servers. This setup ensures efficient traffic management, improves server performance, and enhances overall security by keeping the web servers within a private network, while the Load Balancer serves as the entry point for all external requests.</p>
<h1 id="heading-azure-load-balancer">Azure Load Balancer</h1>
<p><strong>Azure Load Balancer</strong> is a fully managed service that distributes incoming network traffic across multiple virtual machines or services to ensure high availability and reliability. It operates at Layer 4 (Transport Layer) and supports both inbound and outbound scenarios, routing traffic based on IP and port.</p>
<h3 id="heading-why-its-important-in-front-of-your-web-servers"><strong>Why it's important in front of your web servers:</strong></h3>
<ul>
<li><p><strong>Traffic Distribution:</strong> It balances traffic across multiple web servers, preventing any single server from being overwhelmed.</p>
</li>
<li><p><strong>High Availability:</strong> If one server fails, the load balancer automatically redirects traffic to the healthy servers, ensuring continuous uptime.</p>
</li>
<li><p><strong>Scalability:</strong> It helps your infrastructure scale by evenly distributing workloads during peak traffic periods.</p>
</li>
<li><p><strong>Improved Performance:</strong> By spreading traffic across multiple servers, it optimizes resource utilization, ensuring faster response times for users.</p>
</li>
</ul>
<p>Using Azure Load Balancer in front of your web servers enhances the resilience and scalability of your applications, keeping them accessible and performing efficiently.</p>
<h2 id="heading-terraform-resources-to-be-used-for-azure-standard-load-balancer">Terraform Resources to be Used for Azure Standard Load Balancer</h2>
<p>To set up the <strong>Azure Standard Load Balancer</strong> and configure it to manage traffic efficiently across our web servers, we will use the following Terraform resources:</p>
<ul>
<li><p><code>azurerm_public_ip</code>: Creates a public IP address for the Load Balancer, allowing external traffic to be directed to it.</p>
</li>
<li><p><code>azurerm_lb</code>: Defines the Azure Load Balancer resource.</p>
</li>
<li><p><code>azurerm_lb_backend_address_pool</code>: Sets up the backend pool where the web servers (VMs) will be grouped for load balancing.</p>
</li>
<li><p><code>azurerm_lb_probe</code>: Configures health probes to monitor the availability of the web servers.</p>
</li>
<li><p><code>azurerm_lb_rule</code>: Establishes the load balancing rules for traffic distribution across the backend servers.</p>
</li>
<li><p><code>azurerm_network_interface_backend_address_pool_association</code>: Associates the network interfaces of the web servers with the Load Balancer’s backend pool.</p>
</li>
</ul>
<p>These resources will ensure seamless traffic distribution, increased availability, and optimized performance of our web servers.</p>
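<p>To see how these pieces fit together before diving into each step, here is a condensed sketch of the dependency chain. Treat it as an outline only: the resource names, ports, NIC reference, and IP configuration name are illustrative assumptions, not necessarily the exact ones used in this series.</p>
<pre><code class="lang-yaml"># Condensed sketch: public IP -> LB frontend -> backend pool, probe, rule -> NIC association
resource "azurerm_lb" "lb_web" {
  name                = "web-lb"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  sku                 = "Standard"
  frontend_ip_configuration {
    name                 = "web-lb-publicip-1"
    public_ip_address_id = azurerm_public_ip.lb_web_publicip.id   # from Step 1
  }
}

resource "azurerm_lb_backend_address_pool" "web_lb_backend_pool" {
  name            = "web-backend"
  loadbalancer_id = azurerm_lb.lb_web.id
}

resource "azurerm_lb_probe" "web_lb_probe" {
  name            = "tcp-probe"
  loadbalancer_id = azurerm_lb.lb_web.id
  protocol        = "Tcp"
  port            = 80
}

resource "azurerm_lb_rule" "web_lb_rule" {
  name                           = "web-app-rule"
  loadbalancer_id                = azurerm_lb.lb_web.id
  protocol                       = "Tcp"
  frontend_port                  = 80
  backend_port                   = 80
  frontend_ip_configuration_name = "web-lb-publicip-1"
  backend_address_pool_ids       = [azurerm_lb_backend_address_pool.web_lb_backend_pool.id]
  probe_id                       = azurerm_lb_probe.web_lb_probe.id
}

resource "azurerm_network_interface_backend_address_pool_association" "web_nic_lb_assoc" {
  network_interface_id    = azurerm_network_interface.web_nic.id  # assumed NIC resource name
  ip_configuration_name   = "internal"                            # assumed IP config name
  backend_address_pool_id = azurerm_lb_backend_address_pool.web_lb_backend_pool.id
}
</code></pre>
<p>The health probe lets the load balancing rule send traffic only to healthy instances, and the association resource is what actually places each web server's NIC into the backend pool.</p>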
<h1 id="heading-provision-standard-load-balancer">Provision Standard Load Balancer</h1>
<h2 id="heading-step-1-create-a-public-ip-address-for-the-azure-load-balancer"><strong>Step 1: Create a Public IP Address for the Azure Load Balancer</strong></h2>
<p>The first step in setting up the Azure Load Balancer is to create a <strong>Public IP</strong>. This IP will expose the Load Balancer to the internet, allowing external traffic to reach your web servers. It acts as the entry point for the traffic coming from the users and directs it to the Load Balancer for further distribution.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_public_ip"</span> <span class="hljs-string">"lb_web_publicip"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-web-lb-publicip"</span>
    <span class="hljs-string">allocation_method</span> <span class="hljs-string">=</span> <span class="hljs-string">"Static"</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">sku</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard"</span>
    <span class="hljs-string">tags</span> <span class="hljs-string">=</span> <span class="hljs-string">local.common_tags</span>

}
</code></pre>
<ul>
<li><p><strong>Resource Block</strong>:<br />  The resource block defines the Azure public IP using the <code>azurerm_public_ip</code> resource.</p>
</li>
<li><p><strong>Resource Name</strong>:<br />  The name of the public IP resource is defined as <code>"${local.resource_name_prefix}-web-lb-publicip"</code>, which dynamically combines the prefix stored in <code>local.resource_name_prefix</code> with the suffix <code>-web-lb-publicip</code> for proper naming convention.</p>
</li>
<li><p><strong>Allocation Method</strong>:<br />  The <code>allocation_method</code> is set to <code>"Static"</code>, meaning the IP address assigned will not change and remains fixed.</p>
</li>
<li><p><strong>Location</strong>:<br />  The <code>location</code> parameter specifies the region where the public IP is being created. It references the location of the resource group (<code>azurerm_resource_group.rg.location</code>).</p>
</li>
<li><p><strong>Resource Group</strong>:<br />  The public IP is associated with the resource group, as indicated by <code>resource_group_name = azurerm_resource_group.rg.name</code>, ensuring it is created within the specified resource group.</p>
</li>
<li><p><strong>SKU</strong>:<br />  The SKU is set to <code>"Standard"</code>, which provides high availability, zone redundancy, and the resiliency required for production workloads.</p>
</li>
<li><p><strong>Tags</strong>:<br />  Tags are added to the public IP resource using <code>tags = local.common_tags</code>. These tags allow for better organization and management of resources across Azure.</p>
</li>
</ul>
<h2 id="heading-step-2-create-azure-standard-load-balancer">Step 2: <strong>Create Azure Standard Load Balancer</strong></h2>
<p>The <strong>Standard Load Balancer</strong> is a highly available, secure, and scalable service that balances traffic between the web servers in your backend pool. It distributes incoming traffic based on a set of rules and health probes, ensuring that the load is equally distributed across the available instances and preventing any server from being overwhelmed.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_lb"</span> <span class="hljs-string">"lb_web"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-lb-web"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">sku</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard"</span>
    <span class="hljs-comment"># Multiple frontend_ip_configuration blocks can be added to allocate multiple public IPs to a single load balancer</span>
    <span class="hljs-string">frontend_ip_configuration</span> {
      <span class="hljs-string">public_ip_address_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_public_ip.lb_web_publicip.id</span>
      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"lb_web_ip_config-1"</span>
    }
}
</code></pre>
<ul>
<li><p><strong>Resource Block</strong>:<br />  The resource block is defined using the <code>azurerm_lb</code> resource to create a <strong>Load Balancer</strong> in Azure.</p>
</li>
<li><p><strong>Resource Name</strong>:<br />  The load balancer is named as <code>"${local.resource_name_prefix}-lb-web"</code>, dynamically concatenating the <code>local.resource_name_prefix</code> with <code>-lb-web</code> for consistent naming.</p>
</li>
<li><p><strong>Resource Group</strong>:<br />  The load balancer is associated with a specific resource group, specified using <code>resource_group_name = azurerm_resource_group.rg.name</code>.</p>
</li>
<li><p><strong>Location</strong>:<br />  The <code>location</code> specifies where the load balancer is being deployed, referencing the same region as the resource group using <code>azurerm_resource_group.rg.location</code>.</p>
</li>
<li><p><strong>SKU</strong>:<br />  The <code>sku</code> is set to <code>"Standard"</code>, which provides advanced features like zone redundancy and supports high-availability scenarios.</p>
</li>
<li><p><strong>Frontend IP Configuration</strong>:</p>
<ul>
<li><p><strong>Public IP Assignment</strong>: The <code>frontend_ip_configuration</code> block specifies that the public IP address (<code>azurerm_public_ip.lb_web_publicip.id</code>) will be assigned to the load balancer. This is used to expose the load balancer to the internet.</p>
</li>
<li><p><strong>Name</strong>: The configuration is named <code>"lb_web_ip_config-1"</code>, allowing for multiple frontend configurations, which can be used to assign multiple public IPs if needed.</p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-step-3-create-backend-pool">Step 3: Create Backend Pool</h2>
<p>The <strong>Backend Address Pool</strong> is a group of virtual machines or web servers that receive the traffic distributed by the Load Balancer. All the web servers in this pool are considered for load balancing, and traffic is directed based on availability and health. This step is essential to ensure that multiple web servers can share the incoming traffic.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_lb_backend_address_pool"</span> <span class="hljs-string">"lb_backend_address_pool"</span> {
  <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"lb-web-backend"</span>
  <span class="hljs-string">loadbalancer_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.lb_web.id</span>
}
</code></pre>
<ul>
<li><p><strong>Resource Block</strong>:<br />  The resource block is defined using the <code>azurerm_lb_backend_address_pool</code> resource to create a <strong>Backend Address Pool</strong> for the load balancer.</p>
</li>
<li><p><strong>Resource Name</strong>:<br />  The backend pool is named <code>"lb-web-backend"</code>, indicating its role as the backend pool for the web servers managed by the load balancer.</p>
</li>
<li><p><strong>Load Balancer Association</strong>:<br />  The <code>loadbalancer_id</code> references the load balancer created in the previous step by using <code>azurerm_lb.lb_web.id</code>. This links the backend pool to the specific load balancer.</p>
</li>
<li><p><strong>Backend Pool Purpose</strong>:</p>
<ul>
<li><p>This backend pool will group the network interfaces (NICs) of the web servers that are part of the load balancing setup.</p>
</li>
<li><p>It acts as the target pool where the load balancer will distribute traffic, ensuring that the load is shared across all the backend VMs.</p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-step-4-create-load-balancer-probe">Step 4: Create Load Balancer Probe</h2>
<p>A <strong>Load Balancer Probe</strong> continuously checks the health of the web servers in the backend pool. It sends periodic requests to the servers, and if a server fails to respond, it is temporarily removed from the pool until it becomes healthy again. This ensures that traffic is only sent to healthy servers, enhancing the availability and reliability of the application.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_lb_probe"</span> <span class="hljs-string">"lb_web_probe"</span> {
  <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"tcp_probe"</span>
  <span class="hljs-string">loadbalancer_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.lb_web.id</span>
  <span class="hljs-string">port</span> <span class="hljs-string">=</span> <span class="hljs-number">80</span>
  <span class="hljs-string">protocol</span> <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span> 
}
</code></pre>
<ul>
<li><p><strong>Resource Block</strong>:<br />  The resource block uses the <code>azurerm_lb_probe</code> resource to define a <strong>Health Probe</strong> for the Azure Load Balancer.</p>
</li>
<li><p><strong>Resource Name</strong>:</p>
<ul>
<li><code>name = "tcp_probe"</code>: The probe is named <code>tcp_probe</code>, indicating that it will monitor the health of backend servers using the TCP protocol.</li>
</ul>
</li>
<li><p><strong>Load Balancer Association</strong>:</p>
<ul>
<li><code>loadbalancer_id = azurerm_lb.lb_web.id</code>: Links the health probe to the specified <strong>Azure Load Balancer</strong>, ensuring that it monitors the backend VMs managed by this load balancer.</li>
</ul>
</li>
<li><p><strong>Port Configuration</strong>:</p>
<ul>
<li><code>port = 80</code>: The probe checks the health of the backend servers by sending a request to <strong>port 80</strong> (typically used for HTTP traffic) on each backend VM.</li>
</ul>
</li>
<li><p><strong>Protocol</strong>:</p>
<ul>
<li><code>protocol = "Tcp"</code>: The probe uses the <strong>TCP protocol</strong> to verify if the backend VMs are responsive. This means the probe checks if the VM is accepting TCP connections on port 80.</li>
</ul>
</li>
</ul>
<h2 id="heading-step-5-create-load-balancer-rule">Step 5: Create Load Balancer Rule</h2>
<p><strong>Load Balancer Rules</strong> define how traffic from the public IP should be distributed across the backend pool. You can set rules based on protocol (TCP, HTTP, etc.), port number, and session persistence to control how the load balancer routes the traffic to different web servers. This helps in routing traffic based on specific configurations.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_lb_rule"</span> <span class="hljs-string">"lb_rule_app1"</span> {
  <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"lb-web-rule-app1"</span>
  <span class="hljs-string">backend_port</span> <span class="hljs-string">=</span> <span class="hljs-number">80</span>
  <span class="hljs-string">frontend_port</span> <span class="hljs-string">=</span> <span class="hljs-number">80</span>
  <span class="hljs-string">loadbalancer_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.lb_web.id</span> 
  <span class="hljs-string">protocol</span> <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span>
  <span class="hljs-string">frontend_ip_configuration_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.lb_web.frontend_ip_configuration</span>[<span class="hljs-number">0</span>]<span class="hljs-string">.name</span>
  <span class="hljs-string">backend_address_pool_ids</span> <span class="hljs-string">=</span> [ <span class="hljs-string">azurerm_lb_backend_address_pool.lb_backend_address_pool.id</span> ]
  <span class="hljs-string">probe_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb_probe.lb_web_probe.id</span>
}
</code></pre>
<ul>
<li><p><strong>Resource Block</strong>:<br />  The resource block uses the <code>azurerm_lb_rule</code> resource to define a <strong>Load Balancer Rule</strong> for distributing traffic to the web application.</p>
</li>
<li><p><strong>Resource Name</strong>:<br />  The rule is named <code>"lb-web-rule-app1"</code>, which defines the load balancing behavior specifically for traffic to the web application <code>app1</code>.</p>
</li>
<li><p><strong>Port Configuration</strong>:</p>
<ul>
<li><p><code>backend_port = 80</code>: Specifies the port on the backend servers (web VMs) where the traffic will be directed.</p>
</li>
<li><p><code>frontend_port = 80</code>: Defines the port on the load balancer's frontend IP where traffic will arrive.</p>
</li>
<li><p>Both ports are set to <code>80</code>, indicating that HTTP traffic is being routed from clients to the web VMs.</p>
</li>
</ul>
</li>
<li><p><strong>Load Balancer Association</strong>:</p>
<ul>
<li><code>loadbalancer_id = azurerm_lb.lb_web.id</code>: Links the rule to the specific Azure Standard Load Balancer defined in the previous steps.</li>
</ul>
</li>
<li><p><strong>Protocol</strong>:</p>
<ul>
<li><code>protocol = "Tcp"</code>: Specifies the protocol as TCP, which is typical for HTTP traffic.</li>
</ul>
</li>
<li><p><strong>Frontend IP Configuration</strong>:</p>
<ul>
<li><code>frontend_ip_configuration_name = azurerm_lb.lb_web.frontend_ip_configuration[0].name</code>: Refers to the load balancer's frontend IP configuration that was set up earlier, linking this rule to the public IP of the load balancer.</li>
</ul>
</li>
<li><p><strong>Backend Address Pool</strong>:</p>
<ul>
<li><code>backend_address_pool_ids = [ azurerm_lb_backend_address_pool.lb_backend_address_pool.id ]</code>: Connects the rule to the backend pool that contains the web servers, ensuring traffic is directed to the appropriate VMs.</li>
</ul>
</li>
<li><p><strong>Health Probe</strong>:</p>
<ul>
<li><code>probe_id = azurerm_lb_probe.lb_web_probe.id</code>: Associates the rule with a health probe to monitor the availability of backend VMs. This ensures that only healthy VMs receive traffic.</li>
</ul>
</li>
</ul>
<h2 id="heading-step-6-associate-network-interface-and-standard-load-balancer">Step 6: Associate Network Interface and Standard Load Balancer</h2>
<p>In this step, the <strong>Network Interface</strong> of each web server is associated with the Load Balancer’s backend pool. This allows the Load Balancer to direct traffic to the web servers via their network interfaces. Each server's network interface is linked to the backend pool so that it can participate in load balancing, ensuring smooth and efficient traffic distribution.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_interface_backend_address_pool_association"</span> <span class="hljs-string">"lb_web_nic_backendpool_association"</span> {
    <span class="hljs-string">network_interface_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_interface.web_linuxvm_NIC.id</span>
    <span class="hljs-string">backend_address_pool_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb_backend_address_pool.lb_backend_address_pool.id</span>
    <span class="hljs-string">ip_configuration_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_interface.web_linuxvm_NIC.ip_configuration</span>[<span class="hljs-number">0</span>]<span class="hljs-string">.name</span>
}
</code></pre>
<ul>
<li><p><strong>Resource Block</strong>:<br />  The resource block uses <code>azurerm_network_interface_backend_address_pool_association</code> to associate a <strong>Network Interface</strong> (NIC) of a VM with a Load Balancer's backend pool.</p>
</li>
<li><p><strong>Network Interface Association</strong>:</p>
<ul>
<li><code>network_interface_id = azurerm_network_interface.web_linuxvm_NIC.id</code>: Associates the <strong>Network Interface (NIC)</strong> of the VM (in this case, <code>web_linuxvm_NIC</code>) with the backend pool. The <strong>NIC</strong> is responsible for managing network connectivity for the VM.</li>
</ul>
</li>
<li><p><strong>Backend Pool Association</strong>:</p>
<ul>
<li><code>backend_address_pool_id = azurerm_lb_backend_address_pool.lb_backend_address_pool.id</code>: Associates the NIC with the <strong>Load Balancer's Backend Pool</strong>. This ensures that traffic directed to the load balancer is forwarded to this VM as part of the backend pool.</li>
</ul>
</li>
<li><p><strong>IP Configuration</strong>:</p>
<ul>
<li><code>ip_configuration_name = azurerm_network_interface.web_linuxvm_NIC.ip_configuration[0].name</code>: Specifies the <strong>IP configuration</strong> used by the network interface. This ensures that the correct IP settings for the NIC are used when associating it with the load balancer.</li>
</ul>
</li>
</ul>
<h2 id="heading-outputs">Outputs</h2>
<pre><code class="lang-yaml"><span class="hljs-string">output</span> <span class="hljs-string">"web_lb_public_ip"</span> {
  <span class="hljs-string">value</span>       <span class="hljs-string">=</span> <span class="hljs-string">azurerm_public_ip.lb_web_publicip.ip_address</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Public ip of load balancer"</span>

}
<span class="hljs-string">output</span> <span class="hljs-string">"web_lb_id"</span> {
  <span class="hljs-string">value</span>       <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.lb_web.id</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Load Balancer Id"</span>

}
<span class="hljs-string">output</span> <span class="hljs-string">"web_lb_frontend_ip_configuration"</span> {
  <span class="hljs-string">value</span>       <span class="hljs-string">=</span> [<span class="hljs-string">azurerm_lb.lb_web.frontend_ip_configuration</span>]
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Web LB Frontend IP configuration"</span>

}
</code></pre>
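<p>Once the apply completes, these outputs can be read back with the standard Terraform CLI, which is useful for scripting against the Load Balancer (the output names match the blocks above):</p>
<pre><code class="lang-bash"># Print a single output value, e.g. the Load Balancer public IP
terraform output web_lb_public_ip

# Or emit all outputs as JSON for consumption by scripts
terraform output -json
</code></pre>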
<h1 id="heading-verification-on-azure-portal">Verification on Azure Portal</h1>
<p>Once the Terraform configuration is successfully applied, we will proceed to validate the resources on the Azure Portal. This step ensures that all the components, such as the public IP, load balancer, backend pool, probes, and network interface associations, have been deployed as expected. We will verify that:</p>
<ol>
<li><p><strong>Public IP</strong>: The static public IP for the Load Balancer is created and associated with the frontend configuration.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729174569108/efc576f7-b58a-414b-8a83-8f22545282bb.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Load Balancer</strong>: The Azure Standard Load Balancer is visible with its frontend and backend configurations.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729174841161/42b370d6-c850-4cae-aee8-e146fc47fe96.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Backend Pool</strong>: The VMs or network interfaces are correctly associated with the load balancer’s backend pool.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729175240757/ed3e24e4-18db-4d62-846f-399243f0541e.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729175312606/bc340fca-2f36-4770-87cb-7feb8be52323.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Load Balancer Probes</strong>: The health probe is configured to monitor the health of the VMs via the specified protocol and port.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729175480720/5b58e87a-1a8a-4e05-bee9-3b45282e1b01.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Load Balancer Rules</strong>: Traffic routing rules are set up correctly to forward traffic to the backend pool.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729175611369/6c64f5c3-b702-4964-92c9-4494734241d5.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Network Interface</strong>: Ensure that the NICs are associated with the load balancer’s backend pool, allowing traffic to flow to the VMs.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729176261684/b80ce8c2-2154-433e-9bc5-f82860be9e6a.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
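<p>The same checks can also be scripted with the Azure CLI instead of clicking through the portal. A hedged sketch (the resource group and resource names below are placeholders for the values generated from <code>local.resource_name_prefix</code>):</p>
<pre><code class="lang-bash"># Confirm the static public IP was allocated
az network public-ip show -g &lt;resource-group&gt; -n &lt;prefix&gt;-web-lb-publicip --query ipAddress -o tsv

# Inspect the Load Balancer's frontend config, backend pools, probes, and rules
az network lb show -g &lt;resource-group&gt; -n &lt;prefix&gt;-lb-web -o jsonc
</code></pre>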
<p>We can now access the static web page using the Load Balancer’s public IP, as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729176458320/aa741c2c-8899-4dda-92a3-90e8b709c4c1.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-inbound-nat-rule">Inbound NAT Rule</h1>
<p>An <strong>Inbound NAT rule</strong> in Azure Load Balancer allows you to direct specific inbound traffic from a public IP to a particular virtual machine (VM) or a specific port on a VM in the backend pool. This enables external users to connect to individual VMs through unique ports without exposing multiple public IPs for each VM.</p>
<h2 id="heading-terraform-resources-to-be-used">Terraform Resources to be Used</h2>
<p>To configure a <strong>NAT rule</strong> for an <strong>Azure Standard Load Balancer</strong>, we will use two Terraform resources:</p>
<ol>
<li><p><code>azurerm_lb_nat_rule</code> – This resource defines the NAT rule that will map external traffic from a specific port to an internal port on a VM in the backend pool.</p>
</li>
<li><p><code>azurerm_network_interface_nat_rule_association</code> – This resource associates the network interface of a virtual machine with the NAT rule, allowing the rule to forward traffic to the correct VM and port.</p>
</li>
</ol>
<h2 id="heading-create-nat-rule">Create NAT Rule</h2>
<p>This resource creates a <strong>NAT rule</strong> for an Azure Standard Load Balancer to allow SSH access to a backend VM by mapping an external port to the internal SSH port (22).</p>
<p>This configuration creates a NAT rule to allow external SSH access to a VM using the Load Balancer. When users access port <strong>1022</strong> on the Load Balancer’s public IP, it will forward the traffic to port <strong>22</strong> (SSH) on the backend VM.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_lb_nat_rule"</span> <span class="hljs-string">"lb_web_inbound_nat_rule_22"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"ssh-1022-vm-22"</span>
    <span class="hljs-string">backend_port</span> <span class="hljs-string">=</span> <span class="hljs-number">22</span>   <span class="hljs-comment"># Port on the VM that receives the forwarded traffic (SSH)</span>
    <span class="hljs-string">frontend_port</span> <span class="hljs-string">=</span> <span class="hljs-number">1022</span> <span class="hljs-comment"># Load Balancer port that is mapped to port 22 of the VM</span>
    <span class="hljs-string">frontend_ip_configuration_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.lb_web.frontend_ip_configuration</span>[<span class="hljs-number">0</span>]<span class="hljs-string">.name</span>
    <span class="hljs-string">loadbalancer_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb.lb_web.id</span>
    <span class="hljs-string">protocol</span> <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span>
   <span class="hljs-comment"># backend_address_pool_id = azurerm_lb_backend_address_pool.lb_backend_address_pool.id</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
}
</code></pre>
<ul>
<li><p><strong>Resource Name</strong>:</p>
<ul>
<li><p><code>azurerm_lb_nat_rule.lb_web_inbound_nat_rule_22</code></p>
</li>
<li><p>This resource defines an inbound NAT rule for the load balancer.</p>
</li>
</ul>
</li>
<li><p><strong>NAT Rule Name</strong>:</p>
<ul>
<li>The name of the NAT rule is <code>"ssh-1022-vm-22"</code>. It maps the frontend port (1022) to the backend VM’s SSH port (22).</li>
</ul>
</li>
<li><p><strong>Backend Port</strong>:</p>
<ul>
<li><code>backend_port = 22</code>: This is the port on the VM that the load balancer will forward traffic to (SSH port on the VM).</li>
</ul>
</li>
<li><p><strong>Frontend Port</strong>:</p>
<ul>
<li><code>frontend_port = 1022</code>: The external port on the load balancer that will be exposed to users. It maps to the backend port (22) of the VM.</li>
</ul>
</li>
<li><p><strong>Frontend IP Configuration</strong>:</p>
<ul>
<li><code>frontend_ip_configuration_name</code>: Specifies the frontend IP configuration of the load balancer that will handle this traffic.</li>
</ul>
</li>
<li><p><strong>Load Balancer Association</strong>:</p>
<ul>
<li><code>loadbalancer_id = azurerm_lb.lb_web.id</code>: Associates the NAT rule with the load balancer defined in the same configuration.</li>
</ul>
</li>
<li><p><strong>Protocol</strong>:</p>
<ul>
<li><code>protocol = "Tcp"</code>: Specifies that the protocol for the NAT rule is TCP, which is required for SSH connections.</li>
</ul>
</li>
<li><p><strong>Resource Group</strong>:</p>
<ul>
<li><code>resource_group_name = azurerm_resource_group.rg.name</code>: Indicates the resource group where the NAT rule is created.</li>
</ul>
</li>
</ul>
<h2 id="heading-associate-network-interface-with-nat-rule">Associate Network Interface with NAT Rule</h2>
<p>This resource associates the <strong>NAT rule</strong> created for the load balancer with the network interface of the backend VM. This allows traffic from the load balancer's NAT rule to be routed to the specified VM through its network interface.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_interface_nat_rule_association"</span> <span class="hljs-string">"lb_nic_nat_rule_associate"</span> {
 <span class="hljs-string">nat_rule_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_lb_nat_rule.lb_web_inbound_nat_rule_22.id</span>
 <span class="hljs-string">network_interface_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_interface.web_linuxvm_NIC.id</span>
 <span class="hljs-string">ip_configuration_name</span> <span class="hljs-string">=</span>  <span class="hljs-string">azurerm_network_interface.web_linuxvm_NIC.ip_configuration</span>[<span class="hljs-number">0</span>]<span class="hljs-string">.name</span>

}
</code></pre>
<ul>
<li><p><strong>Resource Name</strong>:</p>
<ul>
<li><p><code>azurerm_network_interface_nat_rule_association.lb_nic_nat_rule_associate</code></p>
</li>
<li><p>This resource creates an association between a NAT rule and the network interface of a VM.</p>
</li>
</ul>
</li>
<li><p><strong>NAT Rule ID</strong>:</p>
<ul>
<li><code>nat_rule_id = azurerm_lb_nat_rule.lb_web_inbound_nat_rule_22.id</code>: Refers to the NAT rule (<code>lb_web_inbound_nat_rule_22</code>) which forwards traffic from port 1022 on the load balancer to port 22 on the VM. This ID connects the rule to the network interface.</li>
</ul>
</li>
<li><p><strong>Network Interface ID</strong>:</p>
<ul>
<li><code>network_interface_id = azurerm_network_interface.web_linuxvm_NIC.id</code>: Specifies the network interface of the backend VM (<code>web_linuxvm_NIC</code>) where the NAT rule will be applied. This ensures traffic is routed to the correct VM.</li>
</ul>
</li>
<li><p><strong>IP Configuration Name</strong>:</p>
<ul>
<li><code>ip_configuration_name = azurerm_network_interface.web_linuxvm_NIC.ip_configuration[0].name</code>: Specifies the IP configuration of the network interface. This defines which IP on the network interface is associated with the NAT rule.</li>
</ul>
</li>
</ul>
<p>This configuration links the <strong>NAT rule</strong> to the network interface of the backend VM. By doing so, it ensures that the traffic routed through the <strong>NAT rule</strong> on the <strong>Load Balancer</strong> is directed to the VM via its network interface.</p>
<h1 id="heading-nat-rule-verification-on-azure-portal">NAT Rule Verification on Azure Portal</h1>
<p>After applying the NAT rule changes, the rule is attached to the Load Balancer, as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729178287116/ac042a7c-319c-40d9-892c-5a55f0920f48.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729178484084/0f5f4f18-9b45-47da-98e5-6f544a87eafd.png" alt class="image--center mx-auto" /></p>
<p>We can now log in to the VM via the <strong>Load Balancer Public IP</strong> instead of the VM’s actual IP.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729178814728/24be41d5-326a-45e3-8aa3-5fd735e7fb9d.png" alt class="image--center mx-auto" /></p>
<p>The highlighted IP, i.e. <strong>20.231.29.164 (hr-dev-web-lb-publicip)</strong>, is the Load Balancer’s public IP.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729178882798/580713a8-fdf7-43b7-b56d-4d1351b225fc.png" alt class="image--center mx-auto" /></p>
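<p>With the NAT rule in place, the SSH session targets the Load Balancer’s public IP on the frontend port rather than the VM directly. A minimal sketch (the key path and the <code>azureuser</code> username are assumptions based on a typical Linux VM setup):</p>
<pre><code class="lang-bash"># Port 1022 on the Load Balancer is forwarded to port 22 on the backend VM
ssh -i ~/.ssh/id_rsa -p 1022 azureuser@20.231.29.164
</code></pre>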
]]></content:encoded></item><item><title><![CDATA[Deployment of Azure Bastion Host and Service with Terraform for Secure VM Access]]></title><description><![CDATA[A Bastion Host or Bastion Service is essential for securing remote access to your virtual machines (VMs) in the cloud without exposing them to the public internet. Traditionally, accessing VMs required public IPs, increasing the risk of attacks from ...]]></description><link>https://www.devopswithritesh.in/deployment-of-azure-bastion-host-and-service-with-terraform-for-secure-vm-access</link><guid isPermaLink="true">https://www.devopswithritesh.in/deployment-of-azure-bastion-host-and-service-with-terraform-for-secure-vm-access</guid><category><![CDATA[Azure]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[infrastructure]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[TerraformwithAzure]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#90daysofdevops]]></category><category><![CDATA[cloudautomation]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Sun, 13 Oct 2024 11:16:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728817859641/c53af12d-e276-4e27-85d9-e357de7dd52e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A Bastion Host or Bastion Service is essential for securing remote access to your virtual machines (VMs) in the cloud without exposing them to the public internet. Traditionally, accessing VMs required public IPs, increasing the risk of attacks from malicious actors. With a bastion host, remote desktop (RDP) and SSH connections are securely routed over a private connection, reducing the attack surface.</p>
<p>In our previous article, we exposed the public IP of the virtual machine and established an SSH connection directly using that public IP. While this method works, it introduces potential security risks by exposing the VM to the public internet. To refine our approach and enhance the security of our infrastructure, we will now eliminate the need to attach a public IP. Instead, we’ll leverage Azure Bastion Host and Bastion Service to securely access the VM without exposing it to the internet, ensuring a more secure and controlled environment for remote access.</p>
<h1 id="heading-bastion-host-vs-azure-bastion-service">Bastion Host vs Azure Bastion Service</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728610264952/ad3c7b56-eb58-418f-86e8-db7992f0d75c.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-azure-bastion-service">Azure Bastion Service</h2>
<p>Azure Bastion eliminates the need to set up VPNs, jump servers, or additional security layers by providing a managed, browser-based RDP and SSH connection over SSL. It enables administrators to manage their VMs without needing a public IP address, making the network more secure and less vulnerable to threats. This approach simplifies secure access while maintaining a robust security posture.</p>
<h2 id="heading-bastion-host">Bastion Host</h2>
<p>Setting up a Bastion Host follows a traditional approach where a dedicated virtual machine is deployed in an isolated subnet, separate from the main workload VMs. This Bastion Host has a public IP exposed to the user, enabling secure access to this isolated VM. From the Bastion Host, the user can then connect to the internal workload VMs, ensuring secure connectivity. Once the Bastion Host is configured, the public IP of the main resources is removed, minimizing exposure to the public internet while maintaining secure, internal access to the VMs.</p>
<h1 id="heading-resources-to-be-used-in-terraform">Resources to be Used in Terraform</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728610425398/d1400a76-40e5-4114-840b-26b09b572687.png" alt class="image--center mx-auto" /></p>
<p>In this setup, we’ll utilize several Terraform resources to implement a secure Bastion Host to connect to our infrastructure. The following resources will be used to achieve this:</p>
<ul>
<li><p><strong>azurerm_public_ip</strong>: To allocate a public IP for the Bastion Host.</p>
</li>
<li><p><strong>azurerm_bastion_host</strong>: To provision the Bastion service for secure, browser-based access to VMs.</p>
</li>
<li><p><strong>azurerm_network_interface</strong>: To configure network connectivity for the Bastion Host.</p>
</li>
<li><p><strong>azurerm_linux_virtual_machine</strong>: For deploying the Bastion Host VM, which also creates Azure-managed disks.</p>
</li>
<li><p><strong>azurerm_network_security_group</strong> and <strong>azurerm_network_security_rule</strong>: To manage security rules that control inbound and outbound traffic for the Bastion Host.</p>
</li>
<li><p><strong>azurerm_network_interface_security_group_association</strong>: To associate the security group with the network interface.</p>
</li>
<li><p><strong>azurerm_subnet</strong>: To configure the subnet where the Bastion Host resides, ensuring it is isolated for enhanced security.</p>
</li>
</ul>
<p>This resource configuration ensures a secure setup for managing access to the main workload VMs.</p>
<h1 id="heading-deploy-linux-bastion-host-traditional-approach">Deploy Linux Bastion Host (<em>Traditional Approach</em>)</h1>
<p>In this section, we will deploy a Linux VM in the previously created Bastion subnet, which will serve as a <strong>Bastion server</strong> or <strong>jump server</strong>. This server will act as a secure intermediary, allowing us to establish a safe connection to the workload VM located in the Web-Tier.</p>
<ol>
<li><p><strong>Provisioning a Public IP for the Bastion Host</strong><br /> The first step involves creating a <strong>static public IP</strong> for the Bastion Host using the <code>azurerm_public_ip</code> resource. This ensures that the Bastion Host has a persistent public IP, making it accessible externally. The IP is allocated with the "Standard" SKU for enhanced availability.</p>
</li>
<li><p><strong>Creating a Network Interface for the Bastion Host</strong><br /> Next, we create a <strong>Network Interface</strong> using <code>azurerm_network_interface</code>, which binds the public IP to the Bastion Host and connects it to the previously configured Bastion subnet. The private IP is dynamically allocated within this subnet, providing internal network connectivity.</p>
</li>
<li><p><strong>Deploying the Linux Virtual Machine as the Bastion Host</strong><br /> The <strong>Bastion Host Linux VM</strong> is provisioned using <code>azurerm_linux_virtual_machine</code>. Key configurations include specifying the admin username, SSH key-based password-less authentication, VM size, and network interface association. Additionally, the OS disk settings and RedHat image are defined, ensuring that the Bastion Host is ready to securely handle remote access requests.</p>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-comment"># In some regions the Azure-native Bastion service is not available or not convenient to use. In those cases we can provision our own bastion host.</span>

<span class="hljs-comment"># 1- Public IP for Linux Bastion Host VM</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_public_ip"</span> <span class="hljs-string">"bastion_host_publicip"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-${var.linux_bastionhost_publicip}"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">allocation_method</span> <span class="hljs-string">=</span> <span class="hljs-string">"Static"</span>
    <span class="hljs-string">sku</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard"</span>
}

<span class="hljs-comment"># 2- Network Interface </span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_interface"</span> <span class="hljs-string">"linux_bastionhost_NIC"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-${var.linux_bastionhost_nic_name}"</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">ip_configuration</span> {
      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"bastionhost_ip_1"</span>
      <span class="hljs-string">subnet_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.bastion_subnet.id</span>
      <span class="hljs-string">private_ip_address_allocation</span> <span class="hljs-string">=</span> <span class="hljs-string">"Dynamic"</span>
      <span class="hljs-string">public_ip_address_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_public_ip.bastion_host_publicip.id</span>
    }

}

<span class="hljs-comment"># 3- Create Bastion Host Linux Virtual Machine</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_linux_virtual_machine"</span> <span class="hljs-string">"bastion_host_linuxvm"</span> {

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-${var.bastionhost_vm_name}"</span>
    <span class="hljs-string">computer_name</span> <span class="hljs-string">=</span> <span class="hljs-string">var.bastionhost_vm_hostname</span>
    <span class="hljs-string">admin_username</span> <span class="hljs-string">=</span> <span class="hljs-string">var.linux_admin_username</span>
    <span class="hljs-string">size</span> <span class="hljs-string">=</span> <span class="hljs-string">var.vm_size</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">disable_password_authentication</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>

    <span class="hljs-string">network_interface_ids</span> <span class="hljs-string">=</span>[
        <span class="hljs-string">azurerm_network_interface.linux_bastionhost_NIC.id</span>
    ]

    <span class="hljs-string">admin_ssh_key</span> {
      <span class="hljs-string">username</span> <span class="hljs-string">=</span> <span class="hljs-string">var.linux_admin_username</span>
      <span class="hljs-string">public_key</span> <span class="hljs-string">=</span> <span class="hljs-string">file("$</span>{<span class="hljs-string">path.module</span>}<span class="hljs-string">/ssh-keys/terraform-azure.pub")</span>
    }
    <span class="hljs-string">os_disk</span> {
      <span class="hljs-string">caching</span> <span class="hljs-string">=</span> <span class="hljs-string">"ReadWrite"</span>
      <span class="hljs-string">storage_account_type</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard_LRS"</span>
    }

    <span class="hljs-string">source_image_reference</span> {
      <span class="hljs-string">publisher</span> <span class="hljs-string">=</span> <span class="hljs-string">"RedHat"</span>
      <span class="hljs-string">offer</span> <span class="hljs-string">=</span> <span class="hljs-string">"RHEL"</span>
      <span class="hljs-string">sku</span> <span class="hljs-string">=</span> <span class="hljs-string">"83-gen2"</span>
      <span class="hljs-string">version</span> <span class="hljs-string">=</span> <span class="hljs-string">"latest"</span>
    }   

}
</code></pre>
<h2 id="heading-input-variables">Input Variables</h2>
<p>The input variables below are referenced in the manifest above.</p>
<pre><code class="lang-yaml"><span class="hljs-string">variable</span> <span class="hljs-string">"az_bastion_service_subnet_name"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"AzureBastionSubnet"</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Dedicated Subnet for Azure Bastion Service"</span>
}
<span class="hljs-string">variable</span> <span class="hljs-string">"az_bastion_service_address_prefixes"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.100.0/27"</span>]
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Bastion Service Address Space"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"bastionhost_vm_name"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"linuxbastion"</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Bastion Host VM Name"</span>

}
<span class="hljs-string">variable</span> <span class="hljs-string">"bastionhost_vm_hostname"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"linuxbastion"</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Bastion Host VM Hostname"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"linux_bastionhost_publicip"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"linux_bastionhost_publicip"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Linux VM BastionHost Public IP"</span>

}
<span class="hljs-string">variable</span> <span class="hljs-string">"linux_bastionhost_nic_name"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"bastionhost_linuxvm_nic"</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Bastion Host Linux VM NIC Name"</span>

}
</code></pre>
<h2 id="heading-moving-private-keys-to-bastion-host">Moving Private Keys to Bastion Host</h2>
<p>Since this Linux VM will function as the Bastion server, it requires the necessary credentials to establish secure connections to the workload VM. To achieve this, we need to transfer the previously generated SSH private keys to the Bastion host. These keys will allow the Bastion server to authenticate against the workload VM, ensuring secure and password-less SSH connections when accessing the workload VM through the Bastion server.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Moving the private SSH key is important, as we will connect from this bastion host VM to our actual workload VM in the web tier</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"null_resource"</span> <span class="hljs-string">"null_copy_ssh_privatekey_to_bastionhost"</span> {

    <span class="hljs-string">depends_on</span> <span class="hljs-string">=</span> [ <span class="hljs-string">azurerm_linux_virtual_machine.bastion_host_linuxvm</span> ] <span class="hljs-comment"># This resource block needs to be executed only after vm creation</span>
    <span class="hljs-string">connection</span> {
      <span class="hljs-string">host</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine.bastion_host_linuxvm.public_ip_address</span>
      <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">"ssh"</span>
      <span class="hljs-string">user</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine.bastion_host_linuxvm.admin_username</span>
      <span class="hljs-string">private_key</span> <span class="hljs-string">=</span> <span class="hljs-string">file("$</span>{<span class="hljs-string">path.module</span>}<span class="hljs-string">/ssh-keys/terraform-azure.pem")</span>
    }
    <span class="hljs-string">provisioner</span> <span class="hljs-string">"file"</span> {
        <span class="hljs-string">source</span> <span class="hljs-string">=</span> <span class="hljs-string">"ssh-keys/terraform-azure.pem"</span>
        <span class="hljs-string">destination</span> <span class="hljs-string">=</span> <span class="hljs-string">"/tmp/terraform-azure.pem"</span>
    }
    <span class="hljs-comment">## Remote Exec provisioner </span>
    <span class="hljs-string">provisioner</span> <span class="hljs-string">"remote-exec"</span> {
        <span class="hljs-string">inline</span> <span class="hljs-string">=</span> [ 
            <span class="hljs-string">"sudo chmod 400 /tmp/terraform-azure.pem"</span>
        ]

    }

}
</code></pre>
<p>In this section, we are automating the process of copying the SSH private key to the Bastion Host using the <code>null_resource</code> in Terraform. Here’s the breakdown:</p>
<ol>
<li><p><strong>Resource Dependency</strong>: The <code>null_resource</code> block is set to execute only after the Bastion Host Linux VM (<code>azurerm_linux_virtual_machine.bastion_host_linuxvm</code>) has been created using the <code>depends_on</code> argument. This ensures the resource execution order.</p>
</li>
<li><p><strong>SSH Connection Configuration</strong>: A secure SSH connection is established with the Bastion Host by providing the public IP address, username, and private key (<code>terraform-azure.pem</code>). This connection will allow further actions on the Bastion Host.</p>
</li>
<li><p><strong>Provisioning File</strong>: Using the <code>file</code> provisioner, the SSH private key (<code>terraform-azure.pem</code>) is copied from the local machine to the Bastion Host’s <code>/tmp</code> directory. This key will be used later for secure SSH authentication to the workload VM.</p>
</li>
<li><p><strong>Remote Execution</strong>: A <code>remote-exec</code> provisioner is used to change the permissions of the copied private key on the Bastion Host to <code>chmod 400</code>, ensuring secure access.</p>
</li>
</ol>
<h2 id="heading-bastion-host-public-ip-output">Bastion Host Public IP Output</h2>
<pre><code class="lang-yaml"><span class="hljs-string">output</span> <span class="hljs-string">"bastion_host_publicip"</span> {
    <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine.bastion_host_linuxvm.public_ip_address</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Output bastion host linux vm ip"</span>

}
</code></pre>
<p>The public IP of this machine will be displayed on the console so that we can use it to connect over SSH.</p>
<p>This setup creates a fully functional Bastion Host isolated within its subnet, equipped with secure SSH access, and capable of routing secure connections to other VMs in the network.</p>
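<p>As a quick sketch of how this bastion host would actually be used: the commands below assume the admin username <code>azureuser</code> (taken from the key comment used earlier) and use placeholder IPs, so substitute your own values from the Terraform outputs.</p>
<pre><code class="lang-yaml"># 1- SSH into the bastion host using its public IP (shown by the Terraform output)
ssh -i ssh-keys/terraform-azure.pem azureuser@&lt;bastion-public-ip&gt;

# 2- From the bastion host, use the copied private key to reach the workload VM on its private IP
ssh -i /tmp/terraform-azure.pem azureuser@&lt;workload-private-ip&gt;

# Alternatively, hop through the bastion in a single command from the local machine
ssh -i ssh-keys/terraform-azure.pem -J azureuser@&lt;bastion-public-ip&gt; azureuser@&lt;workload-private-ip&gt;
</code></pre>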
<h1 id="heading-deploy-azure-bastion-service">Deploy Azure Bastion Service</h1>
<p>Azure Bastion is a fully managed Platform-as-a-Service (PaaS) offering by Microsoft that provides secure and seamless RDP and SSH access to virtual machines (VMs) over SSL directly from the Azure portal. With Azure Bastion, you don't need to expose your VMs to the internet or use a public IP address for secure access, as it creates a secure gateway to connect to your VMs in a private network.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Azure Bastion Subnet</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet"</span> <span class="hljs-string">"az_bastion_service_subnet"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">var.az_bastion_service_subnet_name</span> <span class="hljs-comment"># Name can't be anything other than "AzureBastionSubnet".</span>
    <span class="hljs-string">address_prefixes</span> <span class="hljs-string">=</span> <span class="hljs-string">var.az_bastion_service_address_prefixes</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">virtual_network_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_virtual_network.vnet.name</span>

}

<span class="hljs-comment"># Azure Bastion Public IP</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_public_ip"</span> <span class="hljs-string">"az_bastion_public_ip"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-${var.az_bastion_service_public_ip}"</span>
    <span class="hljs-string">allocation_method</span> <span class="hljs-string">=</span> <span class="hljs-string">"Static"</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">sku</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard"</span>

}
<span class="hljs-comment"># Azure Bastion Service Host</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_bastion_host"</span> <span class="hljs-string">"az_bastion_host_svc"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-${var.az_bastion_service_name}"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">sku</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard"</span>
    <span class="hljs-string">ip_configuration</span> {
      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"bastion_ip_config"</span>
      <span class="hljs-string">subnet_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.az_bastion_service_subnet.id</span>
      <span class="hljs-string">public_ip_address_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_public_ip.az_bastion_public_ip.id</span>
    }

}
</code></pre>
<h3 id="heading-1-azure-bastion-subnet-configuration">1. <strong>Azure Bastion Subnet Configuration</strong>:</h3>
<ul>
<li><p>The subnet for Azure Bastion must be explicitly named <code>AzureBastionSubnet</code>, which is a mandatory naming convention for deploying the service.</p>
</li>
<li><p>This subnet must be created within the same virtual network (VNet) where the workload VMs reside, ensuring secure connectivity within the private network.</p>
</li>
<li><p>The subnet will be assigned a unique address prefix for IP allocation, defined through the <code>az_bastion_service_address_prefixes</code> variable.</p>
</li>
</ul>
<h3 id="heading-2-azure-bastion-public-ip">2. <strong>Azure Bastion Public IP</strong>:</h3>
<ul>
<li><p>A <strong>Static Public IP</strong> is required for Azure Bastion, which will act as the public-facing endpoint through which users securely connect to their VMs over SSL.</p>
</li>
<li><p>The public IP is created using the <code>Standard</code> SKU to support secure and scalable connections.</p>
</li>
<li><p>The public IP will be allocated to the Bastion host, ensuring that all SSH and RDP connections are routed through this IP address.</p>
</li>
</ul>
<h3 id="heading-3-azure-bastion-host-service-configuration">3. <strong>Azure Bastion Host Service Configuration</strong>:</h3>
<ul>
<li><p>The Azure Bastion Host service itself is defined, which is deployed within the previously created <code>AzureBastionSubnet</code>.</p>
</li>
<li><p>The service is linked to the public IP and subnet created earlier.</p>
</li>
<li><p>The Bastion host will use the IP configuration block to map the subnet and public IP, enabling secure and seamless access to VMs in the private network.</p>
</li>
<li><p>The SKU of <code>Standard</code> ensures high availability and reliability for connections to the VMs.</p>
</li>
</ul>
<p>This outline walks through the critical steps of setting up Azure Bastion to provide secure, managed RDP and SSH access to Azure VMs without exposing public IPs, ensuring better security practices for your infrastructure.</p>
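<p>Beyond the portal, the deployment can also be spot-checked from the Azure CLI. The resource group and resource names below are illustrative placeholders, not values taken from this setup:</p>
<pre><code class="lang-yaml"># List Bastion hosts in the resource group
az network bastion list --resource-group &lt;resource-group-name&gt; -o table

# Show the static public IP attached to the Bastion service
az network public-ip show --resource-group &lt;resource-group-name&gt; --name &lt;bastion-publicip-name&gt; --query ipAddress -o tsv
</code></pre>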
<h1 id="heading-verification-on-portal">Verification on Portal</h1>
<h2 id="heading-verify-bastion-service">Verify Bastion-Service</h2>
<p><strong>1. No Public IP associated with the workload VM (hr-dev-web_azlinux_vm):</strong></p>
<p>The workload VM, <code>hr-dev-web_azlinux_vm</code>, does not have an associated public IP address. This is crucial for maintaining the security of the infrastructure, as the VM is not directly exposed to the public internet. All access to this VM is now restricted through the private network, with Azure Bastion acting as the intermediary for remote access.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728815871261/71ed61e8-2818-4fa7-8236-6addadb7c736.png" alt class="image--center mx-auto" /></p>
<p><strong>2. Bastion service deployed in</strong> <code>hr-dev-az-vnet-default/AzureBastionSubnet</code> <strong>Subnet:</strong></p>
<p>Azure Bastion has been deployed in a dedicated subnet named <code>AzureBastionSubnet</code> within the virtual network (<code>hr-dev-az-vnet-default</code>). This subnet is a secure location specifically designed for Bastion services, ensuring isolation from the workload VM subnets. It provides a secure path to access virtual machines within the VNet.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728816031189/a6d0e530-f06b-4bcd-87ba-65094d600139.png" alt class="image--center mx-auto" /></p>
<p><strong>3. Explicit Public IP for the Bastion service:</strong></p>
<p>Unlike the workload VM, the Bastion service has been provisioned with an explicit static public IP address. This public IP is required to allow remote access via the Azure portal's Bastion feature, which facilitates secure RDP/SSH connections to the VMs without the need for a public IP on the workload VM itself.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728817390214/bf8ffc60-1e96-4305-a86a-9417641c8194.png" alt class="image--center mx-auto" /></p>
<p><strong>4. Taking the connection using the Bastion service:</strong></p>
<p>Using Azure Bastion, we are able to securely connect to the workload VM (<code>hr-dev-web_azlinux_vm</code>) directly from the Azure portal. This connection uses SSL over port 443, which provides encrypted and secure access to the VM through the Bastion service, without exposing the VM to the public internet.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728817241391/d45e0310-388a-4d04-a922-d3dc2be47915.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728817286386/aa75ef81-2bbb-43f5-b86e-484ea7374e17.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-verify-bastion-host-linux-vm">Verify Bastion Host Linux VM</h2>
<p><strong>1. The Bastion Host for Linux VM has been deployed as below:</strong></p>
<p>The Bastion Host VM, which acts as a jump server, has been successfully deployed in the previously configured Bastion subnet. This VM allows secure SSH access to other workload VMs that do not have public IPs. The Bastion Host serves as an entry point for administrative access to private resources.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728816275387/066116c1-135b-4fdc-9f0a-0a9a7d1a8b8c.png" alt class="image--center mx-auto" /></p>
<p><strong>2. Dedicated network interface for the Bastion Host:</strong></p>
<p>The Bastion Host VM has its own dedicated network interface (NIC), which is associated with a static public IP. This NIC facilitates the secure connection from external networks to the internal private network via the Bastion VM. All traffic to the workload VM will be routed through this NIC.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728816196884/5e282e74-d06e-4668-b936-266215b9af99.png" alt class="image--center mx-auto" /></p>
<p><strong>3. Connecting to the Bastion Host VM (Hostname: linuxbastion):</strong></p>
<p>Using the public IP assigned to the Bastion Host VM, we can establish an SSH connection by using the hostname <code>linuxbastion</code>. This connection is essential as it enables access to the private VMs securely without exposing their IPs directly to the public internet.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728816496903/25756a9b-8c86-4210-9a42-7cde0d122bdc.png" alt class="image--center mx-auto" /></p>
<p><strong>4. Accessing the workload VM (hr-dev-web_azlinux_vm) using private IP:</strong></p>
<p>Once connected to the Bastion Host VM, we can initiate an SSH connection to the workload VM (<code>hr-dev-web_azlinux_vm</code>) using its private IP address. This ensures that all connections to the workload VM are made securely within the internal network, without the need for a public IP on the workload VM.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728817031054/ad15b1d1-ee80-4852-aaff-abd75230701a.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Terraform IAC: Setting Up Azure Virtual Machine in Web-Tier]]></title><description><![CDATA[As part of a 4-tier networking setup, we have already demonstrated the complete networking setup using Terraform. This time, we will extend the architecture by deploying virtual machines into the respective subnets and hosting a static website. This ...]]></description><link>https://www.devopswithritesh.in/terraform-iac-setting-up-azure-virtual-machine-in-web-tier</link><guid isPermaLink="true">https://www.devopswithritesh.in/terraform-iac-setting-up-azure-virtual-machine-in-web-tier</guid><category><![CDATA[TerraformwithAzure]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[Infrastructure management]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Wed, 09 Oct 2024 12:38:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728228910857/a7379de0-f8af-4423-95df-4b29d6ad295b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As part of a 4-tier networking setup, we have already demonstrated the complete networking setup using Terraform. This time, we will extend the architecture by deploying virtual machines into the respective subnets and hosting a static website. This will showcase how the network configuration integrates with the virtual machines, ensuring the proper functionality of each tier while maintaining security and isolation.</p>
<h1 id="heading-components-involved">Components Involved</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728309866187/fa229d1b-4d70-4aeb-905c-20da8dbf168e.png" alt class="image--center mx-auto" /></p>
<p>In this extended article, we will add the following resources to put the networking setup to use:</p>
<ul>
<li><p>Public IP</p>
</li>
<li><p>Azure Linux Virtual Machine</p>
</li>
<li><p>Network Interface</p>
</li>
<li><p>Azure Disk</p>
</li>
</ul>
<h1 id="heading-generate-ssh-keys-for-vm-pre-requisite">Generate SSH Keys for VM (Pre-requisite)</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728310804000/e26cf5fe-30db-494b-9b9a-fc0f6464ff26.png" alt class="image--center mx-auto" /></p>
<p>SSH keys are crucial for securely provisioning Azure Linux VMs and will be helpful when establishing a Bastion connection. To generate an SSH key pair for use in Terraform while provisioning an Azure Linux VM, execute the following command:</p>
<pre><code class="lang-yaml"><span class="hljs-string">ssh-keygen</span> <span class="hljs-string">-m</span> <span class="hljs-string">PEM</span> <span class="hljs-string">-t</span> <span class="hljs-string">rsa</span> <span class="hljs-string">-b</span> <span class="hljs-number">4096</span> <span class="hljs-string">-C</span> <span class="hljs-string">"azureuser@myserver"</span> <span class="hljs-string">-f</span> <span class="hljs-string">terraform-azure.pem</span>
</code></pre>
<p>This command generates two key files:</p>
<ul>
<li><p><code>terraform-azure.pem</code>: The private key file used to securely log into the VM.</p>
</li>
<li><p><code>terraform-azure.pem.pub</code>: The public key that will be injected into the VM during provisioning for authentication.</p>
</li>
</ul>
<p>Ensure that the private key has restricted permissions by running:</p>
<pre><code class="lang-bash">chmod 400 terraform-azure.pem
</code></pre>
<p>The <code>.pem</code> file allows secure SSH access to the VM, while the <code>.pub</code> file is injected by Terraform during provisioning to enable authentication. This setup is essential for a secure and functional SSH connection.</p>
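<p>Once the VM is provisioned, the private key can be used to log in. A minimal usage sketch, assuming the admin username is <code>azureuser</code> (the placeholder IP must be replaced with the VM's actual public IP):</p>
<pre><code class="lang-bash"># Log in with the private key generated above; key path and username
# are illustrative — substitute your VM's public IP
ssh -i terraform-azure.pem azureuser@&lt;vm-public-ip&gt;
</code></pre>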
<h1 id="heading-create-public-ip-resource">Create Public IP Resource</h1>
<p>In this section, we are creating an Azure Public IP resource for the Linux VM that will be deployed in the Web Subnet. The <code>azurerm_public_ip</code> resource is used to allocate a public IP that allows the VM to be accessible from the internet. The <code>name</code> of the public IP is dynamically constructed using the defined resource name prefix and a variable for the public IP name.</p>
<p>We specify the <code>allocation_method</code>, which determines if the IP is static or dynamic, and set the <code>sku</code> (Standard or Basic) based on the requirements. The <code>domain_name_label</code> is generated using a random string for uniqueness and combined with a predefined domain name, ensuring a unique DNS name for accessing the VM. This public IP will be linked to the Linux VM during deployment, enabling external access.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_public_ip"</span> <span class="hljs-string">"web_linuxvm_publicip"</span> {


    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-${var.web_linuxvm_publicip_name}"</span>

    <span class="hljs-string">allocation_method</span> <span class="hljs-string">=</span> <span class="hljs-string">var.allocation_type</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

    <span class="hljs-string">sku</span> <span class="hljs-string">=</span> <span class="hljs-string">var.sku_type</span>
    <span class="hljs-string">domain_name_label</span> <span class="hljs-string">=</span> <span class="hljs-string">"${random_string.random_name.id}-${var.domain_name}"</span>

}
</code></pre>
<h2 id="heading-public-ip-resource-input-variables">Public IP Resource - Input Variables</h2>
<p>The following input variables are referenced in the resource block above:</p>
<pre><code class="lang-yaml"><span class="hljs-string">variable</span> <span class="hljs-string">"web_linuxvm_publicip_name"</span> {
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"web-linuxvm-publicip"</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"allocation_type"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"Static"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"sku_type"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"domain_name"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"devopswithritesh.in"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>

}
</code></pre>
<h1 id="heading-create-network-interface-cardnic">Create Network Interface Card(NIC)</h1>
<p>A <strong>Network Interface Card (NIC)</strong> is a crucial component in Azure Virtual Machines (VMs) that allows them to communicate with other resources within your network or the internet. <strong><em>Each Azure VM requires at least one NIC</em></strong>, which connects it to a virtual network (VNet) and enables network traffic flow. NICs facilitate both inbound and outbound traffic by associating with an IP address, typically through public and private IPs, and optionally linking to a Network Security Group (NSG) to manage traffic rules.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_interface"</span> <span class="hljs-string">"web_linuxvm_NIC"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-${var.nic_name}"</span>

    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

   <span class="hljs-comment"># We can add multiple IP configuration for a single VM. We can keep adding multiple ip_configuration blocks</span>
    <span class="hljs-string">ip_configuration</span> {

      <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">var.ip_config_1</span>
      <span class="hljs-string">private_ip_address_allocation</span> <span class="hljs-string">=</span> <span class="hljs-string">var.ip_allocation_type</span>
      <span class="hljs-string">subnet_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.web_subnet.id</span>
      <span class="hljs-string">public_ip_address_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_public_ip.web_linuxvm_publicip.id</span>

     <span class="hljs-comment"># primary = true # this needs to be flagged explicitly when you are having multiple ip_configuration blocks</span>
    }

}
</code></pre>
<p>In this section, we will provision a <strong>Network Interface Card (NIC)</strong> resource in Azure, which is essential for establishing network connectivity for the Azure Linux VM within the web subnet.</p>
<p>The NIC resource is defined using the following parameters:</p>
<ul>
<li><p><strong>Dynamic Naming</strong>: The resource name is constructed dynamically, incorporating a prefix based on the business unit and environment variables. This ensures a consistent and clear naming convention across all resources, facilitating better management and identification.</p>
</li>
<li><p><strong>IP Configuration</strong>: Within the <code>ip_configuration</code> block, we define the network settings for the NIC. This configuration assigns a private IP address dynamically from the specified web subnet, ensuring that the VM has the necessary private network connectivity.</p>
</li>
<li><p><strong>Public IP Association</strong>: The <code>public_ip_address_id</code> parameter links the NIC to the previously created public IP resource. This association allows external connectivity to the VM, enabling it to be accessed from the internet or other external networks.</p>
</li>
<li><p><strong>Multiple IP Configurations</strong>: It is noteworthy that you can enhance the NIC by adding multiple IP configurations. By including additional <code>ip_configuration</code> blocks, you can manage advanced networking scenarios that may require multiple private or public IP addresses for a single NIC.</p>
</li>
</ul>
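<p>To illustrate the multiple-configuration point above, a second <code>ip_configuration</code> block could be appended inside the NIC resource roughly as follows. This is a hypothetical sketch — the block name is invented, and when multiple blocks exist, exactly one must be flagged <code>primary</code>:</p>
<pre><code class="lang-hcl"># Hypothetical secondary IP configuration for the same NIC.
# With multiple blocks, the first one would additionally set primary = true.
ip_configuration {
  name                          = "web_linuxvm_ip_2"   # illustrative name
  private_ip_address_allocation = "Dynamic"
  subnet_id                     = azurerm_subnet.web_subnet.id
}
</code></pre>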
<h2 id="heading-network-interface-resource-input-variables">Network Interface Resource - Input Variables</h2>
<p>These variables are referenced in the resource block above:</p>
<pre><code class="lang-yaml"><span class="hljs-string">variable</span> <span class="hljs-string">"nic_name"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"linuxvm-nic"</span>
  <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"ip_allocation_type"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"Dynamic"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"ip_config_1"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"web_linuxvm_ip_1"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>

}
</code></pre>
<h1 id="heading-create-nsg-at-the-nic-level">Create NSG at the NIC Level</h1>
<p>Configuring a Network Security Group (NSG) at the Virtual Machine (VM) Network Interface Card (NIC) level is crucial for enhancing the security and management of individual VMs, even when a subnet-level NSG is in place. An NSG at the NIC level allows for specific and granular control over traffic rules for that particular VM, enabling customized security measures tailored to its unique requirements. While a subnet-level NSG provides a baseline security configuration, individual VMs may require distinct access controls, and the NIC-level NSG adds an additional layer of security to enforce these tailored rules. Furthermore, rules defined at the NIC level can take precedence over broader subnet-level rules, ensuring critical services on the VM remain accessible while adhering to overall network security policies.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_group"</span> <span class="hljs-string">"weblinuxvm_nsg"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-weblinux_nsg"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>

}

<span class="hljs-comment"># Locals block for security rules</span>
<span class="hljs-string">locals</span> {
  <span class="hljs-string">weblinux_inbound_port_map</span> <span class="hljs-string">=</span> {
    <span class="hljs-comment"># priority:port</span>
    <span class="hljs-string">"100"</span> <span class="hljs-string">:</span> <span class="hljs-string">"80"</span>
    <span class="hljs-string">"110"</span> <span class="hljs-string">:</span> <span class="hljs-string">"443"</span>
    <span class="hljs-string">"120"</span> <span class="hljs-string">:</span> <span class="hljs-string">"22"</span>

  }
}
<span class="hljs-comment"># Create Network Security Rule</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_rule"</span> <span class="hljs-string">"weblinuxvm_nsg_rules"</span> {
    <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> <span class="hljs-string">local.weblinux_inbound_port_map</span>
    <span class="hljs-string">access</span> <span class="hljs-string">=</span> <span class="hljs-string">"Allow"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Inbound"</span>
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"weblinux_rule-port-${each.value}"</span>
    <span class="hljs-string">network_security_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.weblinuxvm_nsg.name</span>
    <span class="hljs-string">priority</span> <span class="hljs-string">=</span> <span class="hljs-string">each.key</span>
    <span class="hljs-string">protocol</span> <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span>
    <span class="hljs-string">source_port_range</span>           <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
    <span class="hljs-string">destination_port_range</span>      <span class="hljs-string">=</span> <span class="hljs-string">each.value</span>
    <span class="hljs-string">source_address_prefix</span>       <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
    <span class="hljs-string">destination_address_prefix</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
}

<span class="hljs-comment"># NSG and VM NIC</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_interface_security_group_association"</span> <span class="hljs-string">"associate_weblinux_nsg_nic"</span> {
    <span class="hljs-string">depends_on</span> <span class="hljs-string">=</span> [ <span class="hljs-string">azurerm_network_security_rule.weblinuxvm_nsg_rules</span> ]
    <span class="hljs-string">network_interface_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_interface.web_linuxvm_NIC.id</span>
    <span class="hljs-string">network_security_group_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.weblinuxvm_nsg.id</span>

}
</code></pre>
<p>In this section, we create an explicit Network Security Group (NSG) at the Network Interface Card (NIC) level for our web Linux virtual machine (VM). First, the NSG is defined using the <code>azurerm_network_security_group</code> resource, specifying the location and resource group. A locals block is introduced to define the inbound port rules for the VM, including common ports like 80 (HTTP), 443 (HTTPS), and 22 (SSH). The <code>azurerm_network_security_rule</code> resource applies these rules, ensuring that only allowed traffic reaches the VM. Lastly, the <code>azurerm_network_interface_security_group_association</code> resource is used to associate the NSG with the VM's NIC. This setup allows for granular control over traffic specific to this VM, enhancing its security by applying custom inbound rules.</p>
<h1 id="heading-deploying-azure-linux-virtual-machine-with-nic-and-webpage-hosting">Deploying Azure Linux Virtual Machine with NIC and Webpage Hosting</h1>
<p>In this section, we’ll be deploying an Azure Linux Virtual Machine (VM) that is configured with a Network Interface Card (NIC) and a custom script for hosting a webpage. The key components of this deployment include setting up the Linux VM, attaching it to the appropriate network interface, configuring SSH key-based access, and running custom initialization scripts for hosting a webpage.</p>
<h4 id="heading-vm-configuration">VM Configuration</h4>
<p>We define the Azure Linux VM using the <code>azurerm_linux_virtual_machine</code> resource. The VM is created in the specified resource group and location, with parameters such as <code>size</code>, <code>admin_username</code>, and <code>computer_name</code> defined by variables. Notably, password authentication is disabled, and SSH key-based authentication is enforced for security, with the <code>admin_ssh_key</code> block defining the public SSH key that is placed on the VM for secure login.</p>
<h4 id="heading-network-configuration">Network Configuration</h4>
<p>The VM is attached to the previously created NIC, which connects the VM to the web subnet. The <code>network_interface_ids</code> field links the NIC to the VM, ensuring that the machine can communicate with the network and be accessed through the public IP associated with the NIC.</p>
<h4 id="heading-os-disk-and-image">OS Disk and Image</h4>
<p>The operating system for the VM is defined using the <code>source_image_reference</code> block, specifying RedHat Enterprise Linux (RHEL) with the latest version as the base image. The OS disk configuration uses <code>Standard_LRS</code> for storage, which is sufficient for typical web hosting use cases.</p>
<h4 id="heading-custom-script-for-webpage-hosting">Custom Script for Webpage Hosting</h4>
<p>The <code>custom_data</code> block is used to pass a custom script (<code>webvm.sh</code>) that will be executed when the VM is provisioned. This script, encoded using <code>filebase64()</code>, will automate the deployment of a simple webpage on the VM. This approach ensures that the VM is initialized with the necessary software and configurations right after provisioning, simplifying the process of setting up a web server on the Linux VM.</p>
<p>By using a combination of predefined VM configurations and a custom script, this setup enables a fully automated deployment of a web-hosting environment in Azure. This is ideal for environments that require scalable and easily reproducible infrastructure.</p>
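<p>The article does not list <code>webvm.sh</code> itself, so the following is only a minimal sketch of what such a bootstrap script might contain for a RHEL-based VM — the package choice and page content are assumptions, not the author's actual script:</p>
<pre><code class="lang-bash">#!/bin/bash
# Illustrative bootstrap: install and start Apache, then publish
# a trivial static page. Runs once, at first boot, via custom_data.
sudo yum install -y httpd
sudo systemctl enable --now httpd
echo "&lt;h1&gt;Welcome to the Web Tier&lt;/h1&gt;" | sudo tee /var/www/html/index.html
</code></pre>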
<pre><code class="lang-yaml"><span class="hljs-comment"># Resource: Azure linux Virtual Machine</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_linux_virtual_machine"</span> <span class="hljs-string">"web_linuxvm"</span> {

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-${var.vm_name}"</span>
    <span class="hljs-string">computer_name</span> <span class="hljs-string">=</span> <span class="hljs-string">var.host_name</span>
    <span class="hljs-string">admin_username</span> <span class="hljs-string">=</span> <span class="hljs-string">var.linux_admin_username</span>
    <span class="hljs-string">size</span> <span class="hljs-string">=</span> <span class="hljs-string">var.vm_size</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">disable_password_authentication</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>

    <span class="hljs-string">network_interface_ids</span> <span class="hljs-string">=</span>[
        <span class="hljs-string">azurerm_network_interface.web_linuxvm_NIC.id</span>
    ]

    <span class="hljs-string">admin_ssh_key</span> {
      <span class="hljs-string">username</span> <span class="hljs-string">=</span> <span class="hljs-string">var.linux_admin_username</span>
      <span class="hljs-string">public_key</span> <span class="hljs-string">=</span> <span class="hljs-string">file("${path.module}/ssh-keys/terraform-azure.pub")</span>
    }
    <span class="hljs-string">os_disk</span> {
      <span class="hljs-string">caching</span> <span class="hljs-string">=</span> <span class="hljs-string">"ReadWrite"</span>
      <span class="hljs-string">storage_account_type</span> <span class="hljs-string">=</span> <span class="hljs-string">"Standard_LRS"</span>
    }

    <span class="hljs-string">source_image_reference</span> {
      <span class="hljs-string">publisher</span> <span class="hljs-string">=</span> <span class="hljs-string">"RedHat"</span>
      <span class="hljs-string">offer</span> <span class="hljs-string">=</span> <span class="hljs-string">"RHEL"</span>
      <span class="hljs-string">sku</span> <span class="hljs-string">=</span> <span class="hljs-string">"83-gen2"</span>
      <span class="hljs-string">version</span> <span class="hljs-string">=</span> <span class="hljs-string">"latest"</span>
    }   

<span class="hljs-comment">/* custom_data accepts only base64-encoded content, which we produce with the filebase64() function. With this approach we cannot reference any resource attribute inside the script. We can also combine custom data with a locals block, which we'll use in other, more advanced resource creations. */</span>

    <span class="hljs-string">custom_data</span> <span class="hljs-string">=</span> <span class="hljs-string">filebase64("$</span>{<span class="hljs-string">path.module</span>}<span class="hljs-string">/app-script/webvm.sh")</span> <span class="hljs-comment"># One way of passing custom script</span>

}
</code></pre>
<h2 id="heading-output-values-of-virtual-machine">Output Values of Virtual Machine</h2>
<pre><code class="lang-yaml"><span class="hljs-string">output</span> <span class="hljs-string">"weblinux_publicip"</span> {
    <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_public_ip.web_linuxvm_publicip.ip_address</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Public Ip of the VM"</span>

}

<span class="hljs-comment">#Network interface id</span>
<span class="hljs-string">output</span> <span class="hljs-string">"weblinux_networkinterface_id"</span> {
    <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_interface.web_linuxvm_NIC.id</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Interface id of NIC"</span>

}
<span class="hljs-string">output</span> <span class="hljs-string">"weblinux_networkinterface_privateip"</span> {
    <span class="hljs-string">value</span> <span class="hljs-string">=</span> [<span class="hljs-string">azurerm_network_interface.web_linuxvm_NIC.private_ip_addresses</span>]
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Interface private IPs"</span>
}

<span class="hljs-string">output</span> <span class="hljs-string">"weblinux_vm_id"</span> {
    <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine.web_linuxvm.id</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"VM Id"</span>

}
<span class="hljs-string">output</span> <span class="hljs-string">"weblinux_vm_publicip"</span> {
    <span class="hljs-string">value</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_linux_virtual_machine.web_linuxvm.public_ip_address</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"VM Public IP address"</span>

}
</code></pre>
<h1 id="heading-resources-on-azure-portal">Resources on Azure Portal</h1>
<p>The desired resources have been deployed successfully in the web subnet with respective configurations.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728472453185/2794b02a-60d7-4a9c-af8e-fd6d48ac6e74.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728472976261/af97a143-d72b-43ee-bc0c-a974a7e4f751.png" alt class="image--center mx-auto" /></p>
<p>In conclusion, we successfully established an SSH connection to the Azure Linux virtual machine using key-based, password-less authentication.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728473571129/b9ab8850-489f-40cf-996d-3d5ae903bb25.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728474254260/e9aec12b-3d89-498a-a8e7-d05fc0308cff.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728474340547/33938e5d-e009-45d2-a12b-4d320d7be82e.png" alt class="image--center mx-auto" /></p>
<p>Additionally, the custom data script executed flawlessly during the VM provisioning process. With the HTTP server properly configured, we were able to access the hosted webpage through the VM's public IP. All necessary resources were deployed as planned, completing the infrastructure setup.</p>
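<p>As a quick sanity check from a local shell — a hedged example that assumes the outputs defined earlier and that port 80 is reachable — the hosted page can be verified like this:</p>
<pre><code class="lang-bash"># Fetch the page headers via the public IP exposed as a Terraform output
curl -I "http://$(terraform output -raw weblinux_publicip)"
</code></pre>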
]]></content:encoded></item><item><title><![CDATA[Setting Up a 4-Tier Azure VNet Architecture with Terraform IAC]]></title><description><![CDATA[Architecture

In this article, we will explore the automation of a complete 4-tier networking setup on Azure using Terraform as Infrastructure as Code (IaC). The setup will include a resource group containing a Virtual Network (VNet), which will host...]]></description><link>https://www.devopswithritesh.in/setting-up-a-4-tier-azure-vnet-architecture-with-terraform-iac</link><guid isPermaLink="true">https://www.devopswithritesh.in/setting-up-a-4-tier-azure-vnet-architecture-with-terraform-iac</guid><category><![CDATA[Terraform]]></category><category><![CDATA[Azure]]></category><category><![CDATA[#InfrastructureAsCode]]></category><category><![CDATA[#InfrastructureAsCode #IaC #ConfigurationManagement #DevOps #CloudComputing #Automation #ITInfrastructure #ContinuousIntegration #ContinuousDelivery #TechTools #CloudServices #SoftwareDevelopment #DeploymentAutomation #ITOperations #DigitalTransformation]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Sun, 06 Oct 2024 05:20:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727865775731/be54002e-ab36-4cb8-a5a7-9cb714cde934.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-architecture">Architecture</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727865732475/335801ec-f47a-4680-b1f8-8c23fac55fbf.png" alt class="image--center mx-auto" /></p>
<p>In this article, we will explore the automation of a complete 4-tier networking setup on Azure using Terraform as Infrastructure as Code (IaC). The setup will include a resource group containing a Virtual Network (VNet), which will host four distinct subnets: Web-Tier Subnet, App-Tier Subnet, DB-Tier Subnet, and Bastion Host Subnet. Each subnet will have its own corresponding Network Security Group (NSG) that is configured with security best practices, and the NSGs will be associated with their respective subnets to ensure a secure and well-architected infrastructure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727864962448/72bb887b-03d4-44f1-b200-e4d497a68ea4.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-terraform-settingsversionstf">Terraform Settings(<code>versions.tf</code> )</h1>
<p>The <code>versions.tf</code> file contains configurations specifying the Terraform and provider versions, which help Terraform download the required dependencies and maintain compatibility across environments. By pinning these versions, the file ensures that Terraform uses the correct versions of the tools and providers, enabling a stable and predictable infrastructure deployment process.</p>
<pre><code class="lang-yaml"><span class="hljs-string">terraform</span> {
  <span class="hljs-string">required_version</span> <span class="hljs-string">=</span> <span class="hljs-string">"~&gt;1.5.6"</span> <span class="hljs-comment"># Minor version upgrades are allowed</span>
  <span class="hljs-string">required_providers</span> {
    <span class="hljs-string">azurerm=</span>{
        <span class="hljs-string">source</span>  <span class="hljs-string">=</span> <span class="hljs-string">"hashicorp/azurerm"</span>
        <span class="hljs-string">version</span> <span class="hljs-string">=</span> <span class="hljs-string">"~&gt;4.3.0"</span>
    }

    <span class="hljs-string">random=</span>{
      <span class="hljs-string">source</span> <span class="hljs-string">=</span> <span class="hljs-string">"hashicorp/random"</span>
      <span class="hljs-string">version</span> <span class="hljs-string">=</span> <span class="hljs-string">"&gt;=3.6.0"</span>
    }
  }

}

<span class="hljs-string">provider</span> <span class="hljs-string">"azurerm"</span> {
    <span class="hljs-string">features</span> {}  
    <span class="hljs-string">subscription_id</span> <span class="hljs-string">="XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX"</span>
}
</code></pre>
<h1 id="heading-random-resource">Random Resource</h1>
<p>In Terraform, a <strong>random resource</strong> is a type of resource that generates random values, which can be used for various purposes such as passwords, unique names, or tokens. The random values created by these resources are stable between runs, meaning that unless explicitly removed or updated, they persist between Terraform applies. This is helpful for managing resources that require unique or unpredictable inputs.</p>
<h2 id="heading-use-case">Use case</h2>
<p>In Azure, when creating a storage account, the account name must be unique across the entire Azure cloud. To ensure this uniqueness, we can leverage Terraform's <code>random_string</code> resources. By generating a random string, we can append or prefix it to the storage account name, ensuring that the name remains unique and compliant with Azure's global naming requirements.</p>
<p>In this demo, we’ll utilize the <code>random_string</code> resource to create unique Resource Groups, avoiding potential resource conflicts that may occur after destruction. This uniqueness helps prevent issues caused by Azure caching, which can lead to errors when provisioning new instances multiple times in quick succession. By ensuring unique names for each resource group, we can mitigate these conflicts and ensure smoother deployments.</p>
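<p>The <code>random_string.random_name</code> resource referenced in the public IP's <code>domain_name_label</code> might be declared roughly as follows — the exact length and character settings here are assumptions, not the author's confirmed values:</p>
<pre><code class="lang-hcl"># Illustrative declaration. DNS labels require lowercase alphanumerics,
# so uppercase and special characters are disabled.
resource "random_string" "random_name" {
  length  = 6
  upper   = false
  special = false
}
</code></pre>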
<h1 id="heading-locals">locals</h1>
<pre><code class="lang-yaml"><span class="hljs-string">locals</span> {
  <span class="hljs-string">owners</span>               <span class="hljs-string">=</span> <span class="hljs-string">var.business_unit</span>
  <span class="hljs-string">environment</span>          <span class="hljs-string">=</span> <span class="hljs-string">var.environment_dev</span>
  <span class="hljs-string">resource_name_prefix</span> <span class="hljs-string">=</span> <span class="hljs-string">"${var.business_unit}-${var.environment_dev}"</span>

  <span class="hljs-string">common_tags</span> <span class="hljs-string">=</span> {
    <span class="hljs-string">owners</span>      <span class="hljs-string">=</span> <span class="hljs-string">local.owners</span>,
    <span class="hljs-string">environment</span> <span class="hljs-string">=</span> <span class="hljs-string">local.</span> <span class="hljs-string">Environment</span>
  }

}
</code></pre>
<p>In the provided <code>locals</code> block, local variables are defined to streamline and enhance the organization of the Terraform configuration. The <code>owners</code> and <code>environment</code> locals are derived from input variables, ensuring that key attributes can be referenced easily throughout the code. A <code>resource_name_prefix</code> is created by concatenating the business unit and environment, which aids in creating a consistent naming convention for resources. Additionally, a <code>common_tags</code> map is established to standardize tagging across resources, incorporating ownership and environment details. This approach promotes DRY (Don't Repeat Yourself) principles, making the code more maintainable and reducing the likelihood of errors.</p>
<h1 id="heading-design-virtual-network">Design Virtual Network</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728191179190/a9117b12-bb82-461e-a993-c8512e9e6896.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-defining-input-variables-for-vnet">Defining Input Variables for Vnet</h2>
<p>Maintaining and reusing Terraform code can become a significant challenge when provisioning major resources. Defining input variables helps improve maintainability by allowing centralized management of values, making the code easier to understand and debug when needed. Input variables also play a crucial role in the reusability of Terraform modules, enabling quick adjustments and modifications to resources without rewriting the code. This flexibility makes resource provisioning more efficient and adaptable to change.</p>
<p>As per the design, we will create a Virtual Network (VNet) with four subnets—one each for the web, app, database, and bastion tiers. To achieve this, we'll define the following input variables: the VNet name, subnet names, and the address space for both the VNet and its respective subnets. These variables will allow us to manage and configure the network infrastructure efficiently, ensuring flexibility and consistency across deployments.</p>
<p>For example, the required variables might include:</p>
<ul>
<li><p><strong>VNet name:</strong> The name of the virtual network.</p>
</li>
<li><p><strong>Subnet names:</strong> Names for the Web, App, DB, and Bastion subnets.</p>
</li>
<li><p><strong>VNet address space:</strong> The CIDR block defining the overall VNet address space.</p>
</li>
<li><p><strong>Subnet address spaces:</strong> CIDR blocks for each subnet, ensuring proper segmentation across the web, app, DB, and bastion tiers.</p>
</li>
</ul>
<p>By defining these input variables, we ensure easy adjustments and reusability of the Terraform code while adhering to best practices in network design.</p>
<pre><code class="lang-yaml"><span class="hljs-string">variable</span> <span class="hljs-string">"vnet_name"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"az-vnet-default"</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Virtual network name"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"vnet_address_space"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.0.0/16"</span>] 
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Virtual network address space"</span>
  <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">list(string)</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"web_subnet_name"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"websubnet"</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Virtual Network Web subnet name"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"web_subnet_address"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.1.0/24"</span>]
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Virtual network web subnet Address Space"</span>
  <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">list(string)</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"app_subnet_name"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"appsubnet"</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Virtual Network App Subnet"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"app_subnet_address"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.11.0/24"</span>]
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Virtual network app subnet address space"</span>
  <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">list(string)</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"db_subnet_name"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"dbsubnet"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Virtual Network DB subnet"</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"db_subnet_address"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.21.0/24"</span>]
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">list(string)</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Virtual network Database Address space"</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"bastion_subnet_name"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"bastionsubnet"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">string</span>
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Virtual network bastion subnet name"</span>
}
<span class="hljs-string">variable</span> <span class="hljs-string">"bastion_subnet_address"</span> {
    <span class="hljs-string">default</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.100.0/24"</span>]
    <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Bastion subnet address space"</span>
    <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">list(string)</span>

}
</code></pre>
<h2 id="heading-create-vnet-resource">Create Vnet Resource</h2>
<p>Now, we’ll proceed to create the Virtual Network (VNet) using the <code>azurerm_virtual_network</code> resource in Terraform, incorporating the necessary input variables and locals for flexibility and scalability. By leveraging these variables and locals, we ensure that the VNet is configured based on predefined values such as the VNet name, address space, and other attributes.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_virtual_network"</span> <span class="hljs-string">"vnet"</span> {

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${local.resource_name_prefix}-${var.vnet_name}"</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>
    <span class="hljs-string">address_space</span> <span class="hljs-string">=</span> <span class="hljs-string">var.vnet_address_space</span>
    <span class="hljs-string">tags</span> <span class="hljs-string">=</span> <span class="hljs-string">local.common_tags</span>
}
</code></pre>
<p>This configuration uses input variables and locals like <code>var.vnet_name</code> and <code>var.vnet_address_space</code>, <code>local.resource_name_prefix</code> allowing you to easily modify the network parameters without changing the core code. The next steps will involve defining subnets within this VNet, using a similar approach with appropriate input variables.</p>
<h2 id="heading-create-subnet-amp-network-security-group">Create Subnet &amp; Network Security Group</h2>
<p>Our next goal is to create Subnets and Network Security Groups (NSGs) for the Web, App, DB, and Bastion tiers, define the respective security rules, and associate them accordingly. We'll use the following Terraform resources to achieve this:</p>
<ol>
<li><p><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet"><code>azurerm_subnet</code></a>: To create subnets for each tier (Web, App, DB, Bastion).</p>
</li>
<li><p><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_security_group"><code>azurerm_network_security_group</code></a>: To create the NSGs for each subnet.</p>
</li>
<li><p><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_security_rule"><code>azurerm_network_security_rule</code></a>: To define security rules (both inbound and outbound) within the NSGs.</p>
</li>
<li><p><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet_network_security_group_association"><code>azurerm_subnet_network_security_group_association</code></a>: To associate each NSG with its respective subnet.</p>
</li>
</ol>
<p>By using these resources, we can set up secure network segmentation, ensuring that each subnet is protected according to best practices.</p>
<h3 id="heading-create-web-tier-subnet-amp-nsg">Create Web-Tier Subnet &amp; NSG</h3>
<pre><code class="lang-yaml"><span class="hljs-comment"># Create Web-Tier Subnet</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet"</span> <span class="hljs-string">"web_subnet"</span> {

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-${var.web_subnet_name}"</span>
    <span class="hljs-string">virtual_network_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_virtual_network.vnet.name</span>

    <span class="hljs-string">address_prefixes</span> <span class="hljs-string">=</span> <span class="hljs-string">var.web_subnet_address</span>       <span class="hljs-comment"># Referenced from vnet-input-variables</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}
<span class="hljs-comment"># Create NSG for web_subnet</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_group"</span> <span class="hljs-string">"web_snet_nsg"</span> {

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_subnet.web_subnet.name}-nsg"</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}

<span class="hljs-comment"># Associate web_subnet with web_snet_nsg</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet_network_security_group_association"</span> <span class="hljs-string">"associate_websnet_webnsg"</span> {
  <span class="hljs-string">depends_on</span> <span class="hljs-string">=</span> [ <span class="hljs-string">azurerm_network_security_rule.web_nsg_rules_inbound</span> ]
  <span class="hljs-string">subnet_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.web_subnet.id</span>
  <span class="hljs-string">network_security_group_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.web_snet_nsg.id</span>

}

<span class="hljs-comment"># Locals block for security rules</span>
<span class="hljs-string">locals</span> {
  <span class="hljs-string">web_inbound_port_map</span> <span class="hljs-string">=</span> {
  <span class="hljs-comment"># priority:port</span>
    <span class="hljs-string">"100"</span><span class="hljs-string">:"80"</span>
    <span class="hljs-string">"110"</span><span class="hljs-string">:"443"</span>
    <span class="hljs-string">"120"</span><span class="hljs-string">:"22"</span>

  }
}
<span class="hljs-comment"># Create NSG Rules using azurerm_network_security_rule resource</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_rule"</span> <span class="hljs-string">"web_nsg_rules_inbound"</span> {
    <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> <span class="hljs-string">local.web_inbound_port_map</span>

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Rule_Port_${each.value}"</span>
    <span class="hljs-string">access</span> <span class="hljs-string">=</span> <span class="hljs-string">"Allow"</span>
    <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Inbound"</span>
    <span class="hljs-string">network_security_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_ntetwork_security_group.web_snet_nsg.name</span>
    <span class="hljs-string">priority</span> <span class="hljs-string">=</span> <span class="hljs-string">each.key</span>
    <span class="hljs-string">protocol</span> <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span>
    <span class="hljs-string">source_port_range</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
    <span class="hljs-string">destination_port_range</span> <span class="hljs-string">=</span> <span class="hljs-string">"${each.value}"</span>
    <span class="hljs-string">source_address_prefix</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}
</code></pre>
<ol>
<li><p><strong>Creating the Web-Tier Subnet:</strong> We first define the <code>azurerm_subnet</code> resource to create a Web-Tier subnet within our virtual network. The name of the subnet is dynamically generated by appending the subnet name to the VNet name, ensuring uniqueness. The address prefix is referenced from the input variables, and the subnet is created within the specified resource group.</p>
</li>
<li><p><strong>Creating the Network Security Group (NSG) for Web Subnet:</strong> Using the <code>azurerm_network_security_group</code> resource, we create an NSG specifically for the Web Subnet. This NSG will house all the security rules for managing network traffic to and from the Web-Tier.</p>
</li>
<li><p><strong>Associating the Web Subnet with the NSG:</strong> Once the NSG is created, we associate it with the Web Subnet using <code>azurerm_subnet_network_security_group_association</code>. This ensures that all traffic passing through the Web Subnet is governed by the security rules defined in the corresponding NSG.</p>
</li>
<li><p><strong>Defining Security Rules Using a Locals Block:</strong> To simplify the creation of multiple security rules, we use a <code>locals</code> block to define a map of inbound ports (e.g., HTTP, HTTPS, SSH) along with their priorities. This makes the rules easily configurable and reusable.</p>
</li>
<li><p><strong>Creating NSG Inbound Rules:</strong> With the <code>azurerm_network_security_rule</code> resource, we iterate over the <code>web_inbound_port_map</code> to create individual inbound security rules for the Web-Tier NSG. Each rule allows traffic on a specific port (e.g., 80 for HTTP, 443 for HTTPS) with the corresponding priority, ensuring that the web server is accessible while maintaining security.</p>
</li>
</ol>
<p>These steps collectively demonstrate how we can automate the creation and association of subnets and NSGs, while efficiently managing security rules for the web tier.</p>
<h3 id="heading-create-app-tier-subnet-amp-nsg">Create App-Tier Subnet &amp; NSG</h3>
<pre><code class="lang-yaml"><span class="hljs-comment"># Create App-Tier Subnet</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet"</span> <span class="hljs-string">"app_subnet"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-${var.app_subnet_name}"</span>
    <span class="hljs-string">virtual_network_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_virtual_network.vnet.name</span>
    <span class="hljs-string">address_prefixes</span> <span class="hljs-string">=</span> <span class="hljs-string">var.app_subnet_address</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}
<span class="hljs-comment"># Create NSG for Subnet</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_group"</span> <span class="hljs-string">"app_snet_nsg"</span> {

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_subnet.app_subnet.name}-nsg"</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}

<span class="hljs-comment"># Associate app_subnet with app_snet_nsg</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet_network_security_group_association"</span> <span class="hljs-string">"associate_appsnet_appnsg"</span> {
  <span class="hljs-string">depends_on</span> <span class="hljs-string">=</span> [ <span class="hljs-string">azurerm_network_security_rule.app_nsg_rules_inbound</span> ]
  <span class="hljs-string">subnet_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.app_subnet.id</span>
  <span class="hljs-string">network_security_group_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.app_snet_nsg.id</span>

}

<span class="hljs-comment"># Locals block for security rules</span>
<span class="hljs-string">locals</span> {
  <span class="hljs-string">app_inbound_port_map</span> <span class="hljs-string">=</span> {
  <span class="hljs-comment"># priority:port</span>
    <span class="hljs-string">"100"</span><span class="hljs-string">:"80"</span>
    <span class="hljs-string">"110"</span><span class="hljs-string">:"443"</span>
    <span class="hljs-string">"120"</span><span class="hljs-string">:"8080"</span>
    <span class="hljs-string">"130"</span><span class="hljs-string">:"22"</span>

  }
}

<span class="hljs-comment"># Create NSG Rules using azurerm_network_security_rule resource</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_rule"</span> <span class="hljs-string">"app_nsg_rules_inbound"</span> {
    <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> <span class="hljs-string">local.app_inbound_port_map</span>

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Rule_Port_${each.value}"</span>
    <span class="hljs-string">access</span> <span class="hljs-string">=</span> <span class="hljs-string">"Allow"</span>
    <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Inbound"</span>
    <span class="hljs-string">network_security_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.app_snet_nsg.name</span>
    <span class="hljs-string">priority</span> <span class="hljs-string">=</span> <span class="hljs-string">each.key</span>
    <span class="hljs-string">protocol</span> <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span>
    <span class="hljs-string">source_port_range</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
    <span class="hljs-string">destination_port_range</span> <span class="hljs-string">=</span> <span class="hljs-string">"${each.value}"</span>
    <span class="hljs-string">source_address_prefix</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}
</code></pre>
<ol>
<li><ol>
<li><p><strong>Creating the App-Tier Subnet and NSG:</strong> Similar to the Web-Tier, the App-Tier subnet is created using <code>azurerm_subnet</code>, and a dedicated NSG is created using <code>azurerm_network_security_group</code> to secure application traffic.</p>
<ol start="2">
<li><p><strong>Associating the App Subnet with the NSG:</strong> The <code>azurerm_subnet_network_security_group_association</code> links the NSG to the App Subnet, ensuring the defined rules apply specifically to application traffic.</p>
</li>
<li><p><strong>Defining App-Tier Specific Ports:</strong> The <code>locals</code> block defines the inbound port map for the App-Tier, which includes typical application traffic ports such as 8080, along with SSH (port 22) for management access.</p>
</li>
<li><p><strong>Creating App-Specific NSG Rules:</strong> Using <code>azurerm_network_security_rule</code>, we dynamically create security rules for the defined ports in the App-Tier, ensuring secure access to application services.</p>
</li>
</ol>
</li>
</ol>
</li>
</ol>
<p>    This approach highlights the different ports and configurations needed for the App-Tier, while following the same structure for subnet and NSG creation.</p>
<h3 id="heading-create-db-tier-subnet-amp-nsg">Create DB-Tier Subnet &amp; NSG</h3>
<pre><code class="lang-yaml"><span class="hljs-comment"># Create DB-Tier Subnet</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet"</span> <span class="hljs-string">"db_subnet"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-${var.db_subnet_name}"</span>
    <span class="hljs-string">virtual_network_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_virtual_network.vnet.name</span>
    <span class="hljs-string">address_prefixes</span> <span class="hljs-string">=</span> <span class="hljs-string">var.db_subnet_address</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}

<span class="hljs-comment"># Create NSG for Subnet</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_group"</span> <span class="hljs-string">"db_snet_nsg"</span> {

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_subnet.db_subnet.name}-nsg"</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}
<span class="hljs-comment"># Associate db_subnet with db_snet_nsg</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet_network_security_group_association"</span> <span class="hljs-string">"associate_dbsnet_dbnsg"</span> {
  <span class="hljs-string">depends_on</span> <span class="hljs-string">=</span> [ <span class="hljs-string">azurerm_network_security_rule.db_nsg_rules_inbound</span> ]
  <span class="hljs-string">subnet_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.db_subnet.id</span>
  <span class="hljs-string">network_security_group_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.db_snet_nsg.id</span>

}

<span class="hljs-comment"># Locals block for security rules</span>
<span class="hljs-string">locals</span> {
  <span class="hljs-string">db_inbound_port_map</span> <span class="hljs-string">=</span> {
  <span class="hljs-comment"># priority:port</span>
    <span class="hljs-string">"100"</span><span class="hljs-string">:"3306"</span>
    <span class="hljs-string">"110"</span><span class="hljs-string">:"1433"</span>
    <span class="hljs-string">"120"</span><span class="hljs-string">:"5432"</span>
  }
}

<span class="hljs-comment"># Create NSG Rules using azurerm_network_security_rule resource</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_rule"</span> <span class="hljs-string">"db_nsg_rules_inbound"</span> {
    <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> <span class="hljs-string">local.db_inbound_port_map</span>

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Rule_Port_${each.value}"</span>
    <span class="hljs-string">access</span> <span class="hljs-string">=</span> <span class="hljs-string">"Allow"</span>
    <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Inbound"</span>
    <span class="hljs-string">network_security_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.db_snet_nsg.name</span>
    <span class="hljs-string">priority</span> <span class="hljs-string">=</span> <span class="hljs-string">each.key</span>
    <span class="hljs-string">protocol</span> <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span>
    <span class="hljs-string">source_port_range</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
    <span class="hljs-string">destination_port_range</span> <span class="hljs-string">=</span> <span class="hljs-string">"${each.value}"</span>
    <span class="hljs-string">source_address_prefix</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}
</code></pre>
<p>For the <strong>DB-Tier</strong>, we follow a similar approach to creating the subnet and associating an NSG. The <code>azurerm_subnet</code> resource is used to define the DB-Tier subnet, and the <code>azurerm_network_security_group</code> resource creates an NSG specifically for securing database traffic. The <code>azurerm_subnet_network_security_group_association</code> links the NSG to the DB subnet. In this case, the <code>locals</code> block defines ports relevant to database services, such as 3306 for MySQL, 1433 for SQL Server, and 5432 for PostgreSQL. These ports are dynamically handled through the <code>azurerm_network_security_rule</code> resource, ensuring that only the necessary inbound traffic reaches the DB-Tier.</p>
<h3 id="heading-bastion-tier-subnet-amp-nsg">Bastion-Tier Subnet &amp; NSG</h3>
<pre><code class="lang-yaml"><span class="hljs-comment"># Create Bastion-Tier Subnet</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet"</span> <span class="hljs-string">"bastion_subnet"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_virtual_network.vnet.name}-${var.bastion_subnet_name}"</span>
    <span class="hljs-string">virtual_network_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_virtual_network.vnet.name</span>
    <span class="hljs-string">address_prefixes</span> <span class="hljs-string">=</span> <span class="hljs-string">var.bastion_subnet_address</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}

<span class="hljs-comment"># Create NSG for Subnet</span>
<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_group"</span> <span class="hljs-string">"bastion_snet_nsg"</span> {

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"${azurerm_subnet.bastion_subnet.name}-nsg"</span>
    <span class="hljs-string">location</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.location</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}

<span class="hljs-comment"># Associate bastion_subnet with bastion_snet_nsg</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_subnet_network_security_group_association"</span> <span class="hljs-string">"associate_bastionsnet_bastionnsg"</span> {
  <span class="hljs-string">depends_on</span> <span class="hljs-string">=</span> [ <span class="hljs-string">azurerm_network_security_rule.bastion_nsg_rules_inbound</span> ]
  <span class="hljs-string">subnet_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.bastion_subnet.id</span>
  <span class="hljs-string">network_security_group_id</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.bastion_snet_nsg.id</span>

}

<span class="hljs-comment"># Locals block for security rules</span>
<span class="hljs-string">locals</span> {
  <span class="hljs-string">bastion_inbound_port_map</span> <span class="hljs-string">=</span> {
  <span class="hljs-comment"># priority:port</span>
    <span class="hljs-string">"100"</span><span class="hljs-string">:"22"</span>
    <span class="hljs-string">"110"</span><span class="hljs-string">:"3389"</span>
  }
}

<span class="hljs-comment"># Create NSG Rules using azurerm_network_security_rule resource</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"azurerm_network_security_rule"</span> <span class="hljs-string">"bastion_nsg_rules_inbound"</span> {
    <span class="hljs-string">for_each</span> <span class="hljs-string">=</span> <span class="hljs-string">local.bastion_inbound_port_map</span>

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Rule_Port_${each.value}"</span>
    <span class="hljs-string">access</span> <span class="hljs-string">=</span> <span class="hljs-string">"Allow"</span>
    <span class="hljs-string">direction</span> <span class="hljs-string">=</span> <span class="hljs-string">"Inbound"</span>
    <span class="hljs-string">network_security_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.bastion_snet_nsg.name</span>
    <span class="hljs-string">priority</span> <span class="hljs-string">=</span> <span class="hljs-string">each.key</span>
    <span class="hljs-string">protocol</span> <span class="hljs-string">=</span> <span class="hljs-string">"Tcp"</span>
    <span class="hljs-string">source_port_range</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
    <span class="hljs-string">destination_port_range</span> <span class="hljs-string">=</span> <span class="hljs-string">"${each.value}"</span>
    <span class="hljs-string">source_address_prefix</span> <span class="hljs-string">=</span> <span class="hljs-string">"*"</span>
    <span class="hljs-string">resource_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">azurerm_resource_group.rg.name</span>

}
</code></pre>
<p>For the <strong>Bastion-Tier</strong>, the process mirrors the previous tiers but is specifically tailored for secure management access. We create the <code>azurerm_subnet</code> resource for the Bastion Subnet, followed by the <code>azurerm_network_security_group</code> to control access. The <code>azurerm_subnet_network_security_group_association</code> links the NSG to the Bastion Subnet. The <code>locals</code> block defines inbound ports specifically used for management purposes, such as port 22 for SSH and port 3389 for RDP. These ports are secured using the <code>azurerm_network_security_rule</code> resource to allow secure administrative access to the infrastructure while maintaining control over inbound traffic.</p>
<h1 id="heading-defining-terraformtfvars">Defining <code>terraform.tfvars</code></h1>
<pre><code class="lang-yaml"><span class="hljs-string">business_unit</span>           <span class="hljs-string">=</span> <span class="hljs-string">"hr"</span>
<span class="hljs-string">environment_dev</span>         <span class="hljs-string">=</span> <span class="hljs-string">"dev"</span>
<span class="hljs-string">resource_group_name</span>     <span class="hljs-string">=</span> <span class="hljs-string">"rg-iaas-terraform"</span>
<span class="hljs-string">resource_group_location</span> <span class="hljs-string">=</span> <span class="hljs-string">"eastus"</span>


<span class="hljs-string">vnet_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"az-vnet-default"</span>

<span class="hljs-string">vnet_address_space</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.0.0/16"</span>]

<span class="hljs-string">web_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"websubnet"</span>
<span class="hljs-string">web_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.1.0/24"</span>]

<span class="hljs-string">app_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"appsubnet"</span>
<span class="hljs-string">app_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.11.0/24"</span>]

<span class="hljs-string">db_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"dbsubnet"</span>
<span class="hljs-string">db_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.21.0/24"</span>]

<span class="hljs-string">bastion_subnet_name</span>    <span class="hljs-string">=</span> <span class="hljs-string">"bastionsubnet"</span>
<span class="hljs-string">bastion_subnet_address</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.100.0/24"</span>]
</code></pre>
<p>The <code>terraform.tfvars</code> file is used to define input variables for your Terraform configuration, allowing for the parameterization of resources and environments. In this file, key details such as the business unit, environment, resource group name, and location are specified, along with the names and address spaces for the virtual network and its associated subnets. This setup promotes flexibility and reusability, making it easier to manage infrastructure deployments across different environments. By separating variable values from the main configuration, it enhances maintainability and clarity in your Terraform projects.</p>
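<p>Each value in <code>terraform.tfvars</code> must correspond to a variable declaration in the configuration (typically in a <code>variables.tf</code> file). A minimal sketch of two matching declarations — the descriptions and defaults here are illustrative assumptions — looks like this:</p>
<pre><code class="lang-yaml">variable "web_subnet_name" {
  description = "Name of the Web-Tier subnet"
  type        = string
  default     = "websubnet"
}

variable "web_subnet_address" {
  description = "Address space for the Web-Tier subnet"
  type        = list(string)
  default     = ["10.0.1.0/24"]
}
</code></pre>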
<h1 id="heading-output-values">Output Values</h1>
<p>The defined outputs in this Terraform configuration provide essential information about the created resources, enhancing visibility and usability in subsequent deployments or integrations. For the virtual network, the outputs include its name and unique ID, which are crucial for referencing the VNet in other resources or configurations. Similarly, the outputs for the Web Subnet and its associated Network Security Group (NSG) ensure that the subnet name and ID, as well as the NSG name and ID, are readily accessible. This structured approach to defining outputs facilitates easy access to key attributes, enabling seamless interactions with the Terraform state and enhancing the overall management of the deployed infrastructure.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># Vnet required outputs</span>

<span class="hljs-string">output</span> <span class="hljs-string">"virtual_network_name"</span> {
  <span class="hljs-string">value</span>       <span class="hljs-string">=</span> <span class="hljs-string">azurerm_virtual_network.vnet.name</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Virtual network name"</span>

}
<span class="hljs-string">output</span> <span class="hljs-string">"virtual_network_id"</span> {
  <span class="hljs-string">value</span>       <span class="hljs-string">=</span> <span class="hljs-string">azurerm_virtual_network.vnet.id</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Virtual network id"</span>

}

<span class="hljs-comment"># Web Subnet outputs</span>
<span class="hljs-string">output</span> <span class="hljs-string">"web_subnet_name"</span> {
  <span class="hljs-string">value</span>       <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.web_subnet.name</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Web subnet name"</span>

}
<span class="hljs-string">output</span> <span class="hljs-string">"web_subnet_id"</span> {
  <span class="hljs-string">value</span>       <span class="hljs-string">=</span> <span class="hljs-string">azurerm_subnet.web_subnet.id</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Web subnet id"</span>

}

<span class="hljs-comment"># web NSG outputs</span>
<span class="hljs-string">output</span> <span class="hljs-string">"web_nsg_name"</span> {
  <span class="hljs-string">value</span>       <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.web_snet_nsg.name</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Web NSG name"</span>

}
<span class="hljs-string">output</span> <span class="hljs-string">"web_nsg_id"</span> {
  <span class="hljs-string">value</span>       <span class="hljs-string">=</span> <span class="hljs-string">azurerm_network_security_group.web_snet_nsg.id</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Web NSG id"</span>

}
</code></pre>
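<p>With the configuration, variables, and outputs in place, a typical workflow to provision the resources and read back an output value looks like this (run from the configuration directory):</p>
<pre><code class="lang-bash">terraform init                   # download the azurerm provider
terraform plan                   # preview the changes
terraform apply                  # create the resources
terraform output web_subnet_id   # print a single output value
</code></pre>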
<h1 id="heading-resources-on-azure-portal">Resources on Azure Portal</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728189959650/bb356800-5423-4740-9db3-385584737d68.png" alt class="image--center mx-auto" /></p>
<p>After applying the Terraform configuration, a unique resource group was successfully created, containing the desired resources such as a virtual network (VNet), network security groups (NSGs), and subnets.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728190301947/fb942b7f-a630-41b2-9e99-2b5afc78f2c6.png" alt class="image--center mx-auto" /></p>
<p>Additionally, the NSGs were correctly associated with their respective subnets, ensuring proper traffic control and security within the deployed infrastructure.</p>
]]></content:encoded></item><item><title><![CDATA[Azure Function Serverless Deployment with CICD Explained]]></title><description><![CDATA[Here we are going to explore the power of Azure Functions in serverless architectures, detailing the benefits of using a CI/CD pipeline for seamless deployments. We will guide you through setting up the deployment process, highlight the advantages ov...]]></description><link>https://www.devopswithritesh.in/azure-function-serverless-deployment-with-cicd-explained</link><guid isPermaLink="true">https://www.devopswithritesh.in/azure-function-serverless-deployment-with-cicd-explained</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[Azure]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Azure Functions]]></category><category><![CDATA[Azure Pipelines]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Sun, 08 Sep 2024 07:24:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1725677007631/36f74355-4275-4994-a73f-afe1e2f24086.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Here we are going to explore the <strong>power of Azure Functions</strong> in serverless architectures, detailing the benefits of using a <strong>CI/CD pipeline for seamless deployments</strong>. We will guide you through setting up the deployment process, highlight the advantages over traditional methods, and showcase how serverless technologies when combined with automation, can streamline your cloud deployments for maximum efficiency and reliability.</p>
<h1 id="heading-whats-in-there">What's in There?</h1>
<ul>
<li><p>Azure function, use-case, and its benefits</p>
</li>
<li><p>Azure Function creation and local deployment</p>
</li>
<li><p>Publishing the function to Azure using the CLI tool</p>
</li>
<li><p>Build and release pipeline for building and deploying the code to Azure function</p>
</li>
</ul>
<h1 id="heading-azure-function">Azure Function</h1>
<p>Azure Functions is a <strong>serverless, event-driven</strong> compute service that performs a similar job to <strong><em>AWS Lambda</em></strong>. It lets you run code in response to events without provisioning or managing infrastructure, which makes it ideal for a wide range of use cases, including processing data, integrating systems, and handling tasks that require rapid scaling based on demand.</p>
<p>With Azure Functions, you can trigger execution based on <strong><em>HTTP requests, timers, message queues, or other Azure services</em></strong>. The serverless nature means you only pay for the compute resources when your function is actively running, optimizing both cost and efficiency. This lightweight, flexible platform allows developers to focus on writing code rather than managing the underlying infrastructure, ultimately speeding up development and deployment cycles.</p>
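<p>For example, a minimal HTTP-triggered function in Node.js (using the classic <code>context</code>/<code>req</code> programming model) can be as small as the snippet below — the greeting logic is illustrative only:</p>
<pre><code class="lang-javascript">// index.js — runs whenever the function's HTTP endpoint is called
module.exports = async function (context, req) {
    // Read a name from the query string or request body, with a fallback
    const name = (req.query.name || (req.body &amp;&amp; req.body.name)) || "world";
    context.res = {
        status: 200,
        body: `Hello, ${name}!`
    };
};
</code></pre>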
<h1 id="heading-advantages-of-az-function">Advantages of AZ-Function</h1>
<p>Azure Functions offer several advantages over Virtual Machines (VMs) and Containers, particularly in terms of scalability, cost-efficiency, and management simplicity. Here are some key benefits:</p>
<ol>
<li><h3 id="heading-serverless-nature"><strong>Serverless Nature</strong></h3>
</li>
</ol>
<p>Unlike VMs and Containers, Azure Functions are entirely serverless. This means there’s no need to provision, manage, or maintain the underlying infrastructure. The platform automatically scales resources based on demand, making it more efficient for event-driven workloads.</p>
<ol start="2">
<li><h3 id="heading-cost-efficiency"><strong>Cost-Efficiency</strong></h3>
</li>
</ol>
<ul>
<li><p><strong>Pay-Per-Use:</strong> With Azure Functions, you're billed only for the compute time used when the function runs, making it highly cost-effective for infrequent or variable workloads. In contrast, VMs and Containers often incur fixed costs for allocated resources, even idle ones.</p>
</li>
<li><p><strong>No Infrastructure Overhead:</strong> VMs and Containers require managing updates, patches, and scaling resources, which adds to the operational cost. Azure Functions eliminates these overheads.</p>
</li>
</ul>
<ol start="3">
<li><h3 id="heading-automatic-scaling"><strong>Automatic Scaling</strong></h3>
</li>
</ol>
<p>Azure Functions automatically scale based on incoming events or load. VMs and Containers typically require manual configuration to scale, and they may not scale as efficiently in scenarios where workloads are highly unpredictable.</p>
<ol start="4">
<li><h3 id="heading-rapid-deployment-and-development"><strong>Rapid Deployment and Development</strong></h3>
</li>
</ol>
<p>Since Azure Functions focus on executing specific tasks in response to triggers, development and deployment cycles are much faster than setting up VMs or Containers. Functions support multiple programming languages and have built-in integrations with various Azure services, further speeding up the process.</p>
<ol start="5">
<li><h3 id="heading-event-driven-architecture"><strong>Event-Driven Architecture</strong></h3>
</li>
</ol>
<p>Azure Functions are designed to be triggered by events such as HTTP requests, messages in queues, or changes in databases. This is ideal for event-driven architectures where certain actions need to be performed in response to specific events. While Containers and VMs can be event-driven, setting them up requires more configuration and effort.</p>
<ol start="6">
<li><h3 id="heading-maintenance-free"><strong>Maintenance-Free</strong></h3>
</li>
</ol>
<p>VMs and Containers require maintenance, such as patching, updating, and securing the environment. With Azure Functions, the platform handles all these tasks, allowing developers to focus purely on writing and improving their code.</p>
<ol start="7">
<li><h3 id="heading-optimized-for-short-lived-tasks"><strong>Optimized for Short-Lived Tasks</strong></h3>
</li>
</ol>
<p>Azure Functions are ideal for short, <strong><em>stateless, and on-demand tasks</em></strong>, such as processing background jobs or handling API requests. VMs and Containers are better suited for long-running services or applications that need to maintain a certain state.</p>
<h1 id="heading-limitations-of-serverless">Limitations of Serverless</h1>
<p>While serverless computing, like Azure Functions, offers numerous advantages, it also has certain limitations that need to be considered. Here are some key drawbacks:</p>
<ol>
<li><h3 id="heading-cold-starts"><strong>Cold Starts</strong></h3>
</li>
</ol>
<ul>
<li><p><strong>What it is:</strong> Serverless functions may experience a delay when they are invoked after a period of inactivity. This is called a <em>cold start</em>, and it happens because the underlying infrastructure needs time to spin up the necessary resources.</p>
</li>
<li><p><strong>Impact:</strong> This can result in slower response times, which may not be ideal for latency-sensitive applications like real-time services or APIs.</p>
</li>
</ul>
<ol start="2">
<li><h3 id="heading-limited-execution-time"><strong>Limited Execution Time</strong></h3>
</li>
</ol>
<ul>
<li><p><strong>What it is:</strong> Serverless functions often have execution time limits. For example, Azure Functions on the Consumption plan have a default timeout of five minutes, configurable up to ten; Premium and Dedicated plans allow longer runs.</p>
</li>
<li><p><strong>Impact:</strong> Long-running tasks, such as data migrations or batch processing, may not be suitable for serverless environments without breaking them into smaller tasks.</p>
</li>
</ul>
<ol start="3">
<li><h3 id="heading-statelessness"><strong>Statelessness</strong></h3>
</li>
</ol>
<ul>
<li><p><strong>What it is:</strong> Serverless architectures are typically stateless, meaning that data from one execution is not retained in memory for the next execution.</p>
</li>
<li><p><strong>Impact:</strong> Applications that require persistent states or need to maintain data across sessions may require additional services, such as databases or caching solutions, adding complexity.</p>
</li>
</ul>
<ol start="4">
<li><h3 id="heading-limited-control-over-the-infrastructure"><strong>Limited Control Over the Infrastructure</strong></h3>
</li>
</ol>
<ul>
<li><p><strong>What it is:</strong> With serverless, the underlying infrastructure is abstracted away, meaning you have limited control over the computing environment (e.g., memory, CPU configuration).</p>
</li>
<li><p><strong>Impact:</strong> This may be a limitation for applications that need specific hardware configurations or fine-tuned control over the runtime environment.</p>
</li>
</ul>
<ol start="5">
<li><h3 id="heading-potential-for-vendor-lock-in"><strong>Potential for Vendor Lock-In</strong></h3>
</li>
</ol>
<ul>
<li><p><strong>What it is:</strong> Serverless platforms, such as Azure Functions, come with their own unique services, configurations, and optimizations.</p>
</li>
<li><p><strong>Impact:</strong> Migrating to another cloud provider could require significant effort, as you may need to rewrite parts of your application or adjust it to work on a different serverless platform.</p>
</li>
</ul>
<h1 id="heading-azure-function-project-overview">Azure Function Project Overview</h1>
<p>We already have an Azure Functions project available from <a target="_blank" href="https://github.com/rishabkumar7">Rishab Kumar</a>, which we'll be using as part of our CI/CD integration with Azure DevOps. To implement this, we first need an Azure Function App to which the code can be deployed.</p>
<h2 id="heading-pre-requisite">Pre-requisite</h2>
<ul>
<li><p><a target="_blank" href="https://nodejs.org/">Node.js</a></p>
</li>
<li><p><a target="_blank" href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=linux%2Cisolated-process%2Cnode-v4%2Cpython-v2%2Chttp-trigger%2Ccontainer-apps&amp;pivots=programming-language-csharp">Azure Functions Core Tools</a></p>
</li>
<li><p><a target="_blank" href="https://learn.microsoft.com/en-us/cli/azure/">Azure CLI</a></p>
</li>
<li><p>An Azure account and an Azure Blob Storage account</p>
</li>
<li><p>Azure Function App</p>
</li>
</ul>
<h1 id="heading-creating-azure-function-manual">Creating Azure Function (Manual)</h1>
<p>As part of this project, we'll create the function app manually from the portal. However, this process can also be automated using Terraform and integrated further.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725681230533/1a7436ed-456d-4734-aa83-13969917c2fa.png" alt class="image--center mx-auto" /></p>
<p>When configuring your Function App, it's essential to select the <strong>App Service Plan</strong> to ensure sufficient storage and resources for building your application. Since the app we are deploying is built on <strong>NodeJS</strong>, which tends to have a larger footprint, the App Service Plan provides the necessary computing power and storage capacity to handle the heavier size and requirements of the application, ensuring optimal performance during deployment and runtime.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725682158540/dc4a691d-1bca-4e99-bb36-fe7a0b521041.png" alt class="image--center mx-auto" /></p>
<p>Next, proceed with the necessary configurations in all the sections as outlined. Be sure to select <strong>Node.js</strong> as the runtime stack, and choose version <strong>18 LTS</strong>, as it is fully supported by our project. This ensures compatibility and stability for the application during deployment and execution within the Function App environment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725682598315/bc8500dc-e282-4f9f-8d82-a530bbdc3177.png" alt class="image--center mx-auto" /></p>
<p>Make sure public access is enabled in the Networking section, so that the application can be reached publicly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725682862194/fdc92a12-ea1b-44c2-b8be-b270e47cc4c8.png" alt class="image--center mx-auto" /></p>
<p>Once done, click <strong>Review + create</strong> to create the Azure Function App.</p>
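<p>The same Function App can also be created from the command line. An equivalent Azure CLI sequence is sketched below — the resource group, plan, storage account, and app names are placeholders, not the values used in this article:</p>
<pre><code class="lang-bash"># Resource group, storage account, and a Linux App Service Plan
az group create --name rg-functions-demo --location eastus
az storage account create --name stfunctionsdemo \
  --resource-group rg-functions-demo --sku Standard_LRS
az appservice plan create --name asp-functions-demo \
  --resource-group rg-functions-demo --sku B1 --is-linux

# Function App on the Node.js 18 runtime, attached to the plan
az functionapp create \
  --name qrcode-demo-app \
  --resource-group rg-functions-demo \
  --storage-account stfunctionsdemo \
  --plan asp-functions-demo \
  --runtime node \
  --runtime-version 18 \
  --functions-version 4
</code></pre>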
<h2 id="heading-configuring-connection-string">Configuring Connection String</h2>
<p>You must configure the storage account connection string to enable your Azure Function App to interact with the Azure Storage account. This ensures your function has the necessary permissions to access and manage storage resources.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725779152134/6560a890-5f4e-485e-a75d-324b996cb342.png" alt class="image--center mx-auto" /></p>
<p>You can achieve this configuration using the Azure Functions Core Tools CLI. Create a <code>local.settings.json</code> file with the following content to include the storage account connection string:</p>
<pre><code class="lang-yaml">{
  <span class="hljs-attr">"IsEncrypted":</span> <span class="hljs-literal">false</span>,
  <span class="hljs-attr">"Values":</span> {
    <span class="hljs-attr">"AzureWebJobsStorage":</span> <span class="hljs-string">"Your_Storage_Connection_String"</span>,
    <span class="hljs-attr">"FUNCTIONS_WORKER_RUNTIME":</span> <span class="hljs-string">"node"</span>,
    <span class="hljs-attr">"StorageConnectionString":</span> <span class="hljs-string">"Your_Storage_Connection_String"</span>
  }
}
</code></pre>
<p>Replace <code>"Your_Storage_Connection_String"</code> with the actual connection string for your Azure Storage account.</p>
<p>This <code>local.settings.json</code> file should be placed in the root directory of your function app. It will be used by the Azure Functions runtime to access the storage account and perform operations such as reading from or writing to blob storage.</p>
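<p>With <code>local.settings.json</code> in place, the Azure Functions Core Tools can run the project locally and publish it straight to the Function App created earlier:</p>
<pre><code class="lang-bash"># Run the function app locally (defaults to http://localhost:7071)
func start

# Publish the local project to the Azure Function App
func azure functionapp publish qrcode-devopswithritesh
</code></pre>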
<h1 id="heading-azure-devops-pipeline-integration">Azure DevOps Pipeline Integration</h1>
<ol>
<li><p><strong>Trigger and Pool Configuration</strong></p>
<ul>
<li><p><strong>Trigger</strong>: The pipeline is triggered by changes to the <code>main</code> branch, ensuring that any updates to this branch will initiate the build and deployment process.</p>
</li>
<li><p><strong>Pool</strong>: Uses the <code>Default</code> agent pool, which specifies the set of agents that will run the pipeline jobs.</p>
</li>
</ul>
</li>
<li><p><strong>Variables</strong></p>
<ul>
<li><p><strong>azureSubscription</strong>: The Azure Resource Manager connection ID that provides access to Azure resources.</p>
</li>
<li><p><strong>functionAppName</strong>: The name of the Azure Function App where the application will be deployed.</p>
</li>
<li><p><strong>environmentName</strong>: The environment name used for deploying the app, ensuring proper organization and deployment context.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-stages"><strong>Stages</strong></h3>
<ol>
<li><p><strong>Build Stage</strong></p>
<ul>
<li><p><strong>Job</strong>: <code>Build</code></p>
<ul>
<li><p><strong>Install zip utility</strong>: Uses a <code>Bash</code> task to install the <code>zip</code> utility on the build agent. This is necessary for archiving files later in the pipeline.</p>
</li>
<li><p><strong>Install Node.js</strong>: Uses <code>NodeTool@0</code> task to install Node.js version 18.x, ensuring that the correct version is available for building and running the application.</p>
</li>
<li><p><strong>Prepare binaries</strong>: Runs <code>npm</code> commands to install dependencies, build the application, and run tests if they are present. This ensures the application is ready for deployment.</p>
</li>
<li><p><strong>Copy Files to Build Directory</strong>: Copies files from the source folder to the build directory, preparing them for archiving.</p>
</li>
<li><p><strong>Archive files</strong>: Uses <code>ArchiveFiles@2</code> task to create a zip archive of the application files. This archive will be used for deployment.</p>
</li>
<li><p><strong>Upload Artifact</strong>: Uploads the zip archive to the pipeline artifact storage, making it available for use in the deployment stage.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Deploy Stage</strong></p>
<ul>
<li><p><strong>Deployment</strong>: <code>Deploy</code></p>
<ul>
<li><p><strong>Environment</strong>: Uses the <code>environmentName</code> variable to specify the deployment environment.</p>
</li>
<li><p><strong>Strategy</strong>: <code>runOnce</code> deployment strategy is used to deploy the application once the build stage is successful.</p>
</li>
<li><p><strong>Azure Function App Task</strong>:</p>
<ul>
<li><p><strong>connectedServiceNameARM</strong>: Specifies the Azure Resource Manager connection for authentication.</p>
</li>
<li><p><strong>appType</strong>: Indicates that the target is a Function App.</p>
</li>
<li><p><strong>appName</strong>: The name of the Azure Function App where the application will be deployed.</p>
</li>
<li><p><strong>package</strong>: Specifies the path to the zip file containing the application code.</p>
</li>
<li><p><strong>deploymentMethod</strong>: Uses <code>zipDeploy</code> for deploying the application from the zip archive.</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-comment"># Node.js Function App to Linux on Azure</span>
<span class="hljs-comment"># Build a Node.js function app and deploy it to Azure as a Linux function app.</span>
<span class="hljs-comment"># Add steps that analyze code, save build artifacts, deploy, and more:</span>
<span class="hljs-comment"># https://docs.microsoft.com/azure/devops/pipelines/languages/javascript</span>

<span class="hljs-attr">trigger:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
<span class="hljs-attr">pool:</span> 
  <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>

<span class="hljs-attr">variables:</span>

  <span class="hljs-comment"># Azure Resource Manager connection created during pipeline creation</span>
  <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'4ea4d60c-XXXXXXXXXX-4a72deaf74d8'</span>

  <span class="hljs-comment"># Function app name</span>
  <span class="hljs-attr">functionAppName:</span> <span class="hljs-string">'qrcode-devopswithritesh'</span>

  <span class="hljs-comment"># Environment name</span>
  <span class="hljs-attr">environmentName:</span> <span class="hljs-string">'qrcode-devopswithritesh'</span>

<span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Build</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> <span class="hljs-string">stage</span>
  <span class="hljs-attr">jobs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Build</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span>

    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Bash@3</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Install zip utility'</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">targetType:</span> <span class="hljs-string">'inline'</span>
        <span class="hljs-attr">script:</span> <span class="hljs-string">'sudo apt-get update &amp;&amp; sudo apt-get install -y zip'</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">NodeTool@0</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">versionSpec:</span> <span class="hljs-string">'18.x'</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Install Node.js'</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">script:</span> <span class="hljs-string">|
        cd qrCodeGenerator
        npm install
        npm run build --if-present
        npm run test --if-present
</span>      <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Prepare binaries'</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">CopyFiles@2</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">SourceFolder:</span> <span class="hljs-string">'$(System.DefaultWorkingDirectory)/qrCodeGenerator/GenerateQRCode/'</span>
        <span class="hljs-attr">Contents:</span> <span class="hljs-string">'**'</span>
        <span class="hljs-attr">TargetFolder:</span> <span class="hljs-string">'$(System.DefaultWorkingDirectory)/qrCodeGenerator/'</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Copy Files to Build Directory'</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">ArchiveFiles@2</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Archive files'</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">rootFolderOrFile:</span> <span class="hljs-string">'$(System.DefaultWorkingDirectory)/qrCodeGenerator'</span>
        <span class="hljs-attr">includeRootFolder:</span> <span class="hljs-literal">false</span>
        <span class="hljs-attr">archiveType:</span> <span class="hljs-string">zip</span>
        <span class="hljs-attr">archiveFile:</span> <span class="hljs-string">$(Build.ArtifactStagingDirectory)/qrCodeGenerator/$(Build.BuildId).zip</span>
        <span class="hljs-attr">replaceExistingArchive:</span> <span class="hljs-literal">true</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">upload:</span> <span class="hljs-string">$(Build.ArtifactStagingDirectory)/qrCodeGenerator/$(Build.BuildId).zip</span>
      <span class="hljs-attr">artifact:</span> <span class="hljs-string">drop</span>

<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Deploy</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">Deploy</span> <span class="hljs-string">stage</span>
  <span class="hljs-attr">dependsOn:</span> <span class="hljs-string">Build</span>
  <span class="hljs-attr">condition:</span> <span class="hljs-string">succeeded()</span> <span class="hljs-comment"># This stage will run only if first stage is succeeded</span>
  <span class="hljs-attr">jobs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">deployment:</span> <span class="hljs-string">Deploy</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">Deploy</span>
    <span class="hljs-attr">environment:</span> <span class="hljs-string">$(environmentName)</span>
    <span class="hljs-attr">pool:</span> 
     <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>
    <span class="hljs-attr">strategy:</span>
      <span class="hljs-attr">runOnce:</span>
        <span class="hljs-attr">deploy:</span>
          <span class="hljs-attr">steps:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureFunctionApp@2</span>
            <span class="hljs-attr">inputs:</span>
              <span class="hljs-attr">connectedServiceNameARM:</span> <span class="hljs-string">'Pay-As-You-Go(4accce4f-XXXXXXXXX-5456d8fa879d)'</span>
              <span class="hljs-attr">appType:</span> <span class="hljs-string">'functionApp'</span>
              <span class="hljs-attr">appName:</span> <span class="hljs-string">'$(functionAppName)'</span>
              <span class="hljs-attr">package:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'</span>
              <span class="hljs-attr">deploymentMethod:</span> <span class="hljs-string">'zipDeploy'</span>
</code></pre>
<p>This pipeline automates the process of building and deploying a Node.js Function App, highlighting the efficiency and consistency of Azure DevOps Pipelines. Be sure to adjust values such as the service connection (<code>connectedServiceNameARM</code>) and <code>functionAppName</code> to match your specific Azure environment and resources.</p>
<p>Once the build is successful, you can test your function app directly from the Azure portal. Navigate to your Function App and select the function you want to test. Click the "Run" button and enter the required query parameters in the input fields.</p>
<p>For example, if your function requires a <code>url</code> parameter, you should enter the appropriate URL in the query parameter field.</p>
<p>After initiating the test, you should receive a response with a status code of <code>200</code>, indicating that the function executed successfully and returned the expected output. This confirms that your function app is working as intended.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725779436460/ef8a645a-0861-4768-9bf1-154204519dfb.png" alt class="image--center mx-auto" /></p>
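<p>Alternatively, you can invoke the HTTP-triggered function from the command line. The sketch below is a minimal example; the Function App URL and function name are placeholders to replace with your own values:</p>
<pre><code class="lang-shell"># Hypothetical endpoint -- replace with your own Function App and function name
FUNC_URL="https://qrcode-devopswithritesh.azurewebsites.net/api/GenerateQRCode"

# curl --get appends URL-encoded query parameters, so special characters
# in the target URL are handled safely
curl --get "$FUNC_URL" \
  --data-urlencode "url=https://www.devopswithritesh.in" \
  -w "\nHTTP status: %{http_code}\n"
</code></pre>
<p>An <code>HTTP status: 200</code> line in the output confirms the same successful execution you would see in the portal test.</p>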
<h1 id="heading-reference">Reference</h1>
<p><strong>GitHub:</strong> <a target="_blank" href="https://github.com/ritesh-kumar-nayak/azure-qr-code">https://github.com/ritesh-kumar-nayak/azure-qr-code</a></p>
<p>You can get the pipeline code in the given repository as well.</p>
]]></content:encoded></item><item><title><![CDATA[Managing Containers with Azure DevOps]]></title><description><![CDATA[What's inside?

Understanding Virtual Machines vs Containers

Challenges with non-containerized applications

Containerize a Webapp

What are Azure Container Instances(ACI)

Azure DevOps CI/CD pipeline to deploy to ACI


VMs vs Containers
Virtual Mac...]]></description><link>https://www.devopswithritesh.in/managing-containers-with-azure-devops</link><guid isPermaLink="true">https://www.devopswithritesh.in/managing-containers-with-azure-devops</guid><category><![CDATA[#AzureDevOps]]></category><category><![CDATA[Azure]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[ACR]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Azure Pipelines]]></category><category><![CDATA[containerization]]></category><category><![CDATA[Containerization vs. Virtualization]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Fri, 30 Aug 2024 10:35:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1725013499925/feaa777c-2fa5-4480-9d17-07358bba2f83.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-whats-inside">What's inside?</h2>
<ul>
<li><p>Understanding Virtual Machines vs Containers</p>
</li>
<li><p>Challenges with non-containerized applications</p>
</li>
<li><p>Containerize a Webapp</p>
</li>
<li><p>What are Azure Container Instances(ACI)</p>
</li>
<li><p>Azure DevOps CI/CD pipeline to deploy to ACI</p>
</li>
</ul>
<h1 id="heading-vms-vs-containers">VMs vs Containers</h1>
<h3 id="heading-virtual-machines-vms">Virtual Machines - VMs</h3>
<p><strong>Architecture</strong>: VMs are hosted on a hypervisor, either Type 1 or Type 2, which resides between the hardware and the operating systems. Each VM contains its own complete operating system in addition to the application, and more importantly, the necessary binaries and libraries.</p>
<p><strong>Isolation</strong>: VMs offer strong isolation because each VM runs its own operating system with its own resources. This stronger isolation, however, comes at the cost of higher overhead.</p>
<p><strong>Resource Utilization</strong>: VMs are resource-intensive, as each one must run a full OS instance. This consumes considerably more resources and results in longer startup times.</p>
<p><strong>Flexibility</strong>: VMs are excellent for running applications that require a full OS environment, complex applications, or legacy software.</p>
<p><strong>Management</strong>: VMs are managed like any full OS, with updates, patches, and configuration, so management can be cumbersome.</p>
<h3 id="heading-containers"><strong>Containers</strong></h3>
<ol>
<li><p><strong>Architecture</strong>: Containers run on a shared OS kernel while remaining isolated from each other. They package an application and its dependencies into a single unit, which is far more lightweight than a VM.</p>
</li>
<li><p><strong>Isolation</strong>: Containers provide a lighter, faster form of isolation: process-level isolation rather than whole-OS isolation. This can make them less secure than VMs.</p>
</li>
<li><p><strong>Resource Utilization</strong>: Containers do not require a full OS and share the host kernel, which makes them far more resource-efficient, reducing overhead and improving startup times.</p>
</li>
<li><p><strong>Flexibility</strong>: Containers are particularly well suited to microservices architectures, continuous integration/continuous deployment (CI/CD) pipelines, and large-scale deployments.</p>
</li>
<li><p><strong>Management</strong>: Managing containers centers on container images and orchestration tools such as Kubernetes, which is generally simpler than managing full VMs.</p>
</li>
</ol>
<h3 id="heading-when-to-use-what"><strong>When to Use What</strong></h3>
<ul>
<li><p><strong>VMs</strong>: Use VMs when you need to run applications that require a full OS or when working with legacy systems. They are also useful when strong isolation is necessary.</p>
</li>
<li><p><strong>Containers</strong>: Use containers for developing and deploying microservices, when you need rapid scaling, or when you want to maximize resource efficiency.</p>
</li>
</ul>
<h1 id="heading-architecture">Architecture</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724851696937/0a615ad6-29bc-46f7-92bd-565d0afe960f.png" alt class="image--center mx-auto" /></p>
<p>Here we'll first dockerize the application by creating a Dockerfile and then building a container image from it.</p>
<p>Then we'll store the image in <strong>Azure Container Registry</strong> which is a private registry for storing your container images, and ACI can pull these images to create and run containers. This integration allows you to easily manage your container lifecycle within the Azure ecosystem.</p>
<h1 id="heading-azure-devops-setup">Azure DevOps Setup</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724595884768/1af39beb-636e-47d6-a814-8a6b01d92336.png" alt class="image--center mx-auto" /></p>
<p>In our latest project, "Containerization-CI-CD," we've successfully set up the repository and migrated the codebase to Azure Repos, laying the foundation for streamlined container management and continuous integration and delivery.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724596154233/3f2e1d18-1b16-47d0-8f62-0df4e7329105.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-dockerfile">Dockerfile</h1>
<p>Before proceeding to pipeline creation for build and release, we need the Dockerfile ready to build the Docker image, along with a few other prerequisites such as <strong>Azure Container Registry (ACR)</strong> and <strong>Azure Container Instance (ACI)</strong>.</p>
<pre><code class="lang-dockerfile"><span class="hljs-string">FROM</span> <span class="hljs-string">node:18-alpine</span> <span class="hljs-string">AS</span> <span class="hljs-string">Installer</span>
<span class="hljs-string">WORKDIR</span> <span class="hljs-string">/app</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">package.json</span> <span class="hljs-string">./</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">npm</span> <span class="hljs-string">install</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">.</span> <span class="hljs-string">.</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">npm</span> <span class="hljs-string">run</span> <span class="hljs-string">build</span>
<span class="hljs-string">FROM</span> <span class="hljs-string">nginx:latest</span> <span class="hljs-string">AS</span> <span class="hljs-string">Deployer</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">--from=Installer</span> <span class="hljs-string">/app/build</span> <span class="hljs-string">/usr/share/nginx/html</span>
</code></pre>
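<p>Before wiring this into a pipeline, it can help to build and run the image locally to confirm the multi-stage build works. A quick sketch, assuming Docker is installed and the commands run from the repository root (image and container names are arbitrary):</p>
<pre><code class="lang-shell"># Build the image from the Dockerfile in the current directory
docker build -t todoapp:local .

# Run it, mapping the nginx default port 80 to localhost:8080
docker run -d --name todoapp-test -p 8080:80 todoapp:local

# The app should now respond on http://localhost:8080
curl -I http://localhost:8080

# Clean up the test container
docker rm -f todoapp-test
</code></pre>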
<h1 id="heading-azure-container-registry">Azure Container Registry</h1>
<p>Azure Container Registry (ACR) is a fully managed, private container registry service provided by Microsoft Azure. It allows storing and managing container images and other related artifacts like Helm charts and OCI artifacts in a secure and scalable environment.</p>
<p>As of now, the Dockerfile is ready. Once the image is built from the Dockerfile, it will be stored in the Azure Container Registry (ACR).</p>
<h2 id="heading-acr-creation">ACR Creation</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724854080452/b9123597-1779-4e0b-85b6-fad967505859.png" alt class="image--center mx-auto" /></p>
<p>Search for Container Registries, select it as highlighted, and then click Create. Fill in the details and make sure the <strong>"Registry Name"</strong> is globally unique, as the registry will be accessed at <em>givenName</em>.azurecr.io, i.e. <strong>devopswithritesh.azurecr.io</strong> in our case.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724898141078/19b2a36b-7684-416b-87bb-041d31d2b1be.png" alt class="image--center mx-auto" /></p>
<p>We have chosen the <strong>Standard Pricing Plan</strong> for this demo, so private access is not available. Our registry has now been created successfully.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724922687453/3af53930-1013-4d7c-ba00-dab4c42f89d6.png" alt class="image--center mx-auto" /></p>
<p>Once the registry is created, we need to capture the username and password, which will be required when pushing the image to ACR. Under Settings, open Access keys and enable the <strong>Admin User</strong> checkbox to view the password. You can then upload this information to Azure Key Vault or store it securely elsewhere.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724923181193/94800c7f-634d-40a7-a4c9-ab7fe7176006.png" alt class="image--center mx-auto" /></p>
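<p>If you prefer not to copy the credentials by hand, the Azure CLI can read them and push them into Key Vault in one step. A sketch, assuming Admin User is enabled and a Key Vault named <code>my-keyvault</code> (hypothetical) already exists:</p>
<pre><code class="lang-shell"># Fetch the admin password for the registry
ACR_PASSWORD=$(az acr credential show --name devopswithritesh \
  --query "passwords[0].value" --output tsv)

# Store it as a secret in Key Vault for later use by pipelines
az keyvault secret set --vault-name my-keyvault \
  --name acr-admin-password --value "$ACR_PASSWORD"
</code></pre>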
<h1 id="heading-pipeline">Pipeline</h1>
<p>With our Azure Container Registry (ACR) set up, we can now proceed to create an Azure DevOps Pipeline to automate the build and publishing of container images. This pipeline will streamline the process, ensuring that every update is efficiently built and securely stored in our registry, ready for deployment.</p>
<h3 id="heading-azure-repo-integration">Azure Repo Integration</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724925791439/6645c43f-c9a0-480d-a528-46e83efb02b6.png" alt class="image--center mx-auto" /></p>
<p>After selecting Azure Repos Git as the source, you'll be presented with a list of available Azure Repos that can be integrated into the pipeline. From this list, you can choose the specific repository you'd like to connect, allowing seamless integration for your pipeline setup.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724926864150/d1990312-b645-45f0-8e70-2247c9e7000c.png" alt class="image--center mx-auto" /></p>
<p>At the configuration step, you can select the '<strong>Docker Build and Push to Container Registry</strong>' option. This choice provides a pre-built template tailored for Docker image builds and pushes to your Azure Container Registry. The template is fully customizable, allowing you to tailor the pipeline to your specific needs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724927050101/33a647df-7eac-47b5-8f34-42a3d198ab22.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-container-registry-integration">Container Registry Integration</h3>
<p>After selecting the 'Docker Build and Publish' option, you'll be prompted to choose your Azure subscription and authorize access. Once authorization is complete, you can then select the Container Registry, specify the image name, and define the Dockerfile location as shown below, and finally click on Validate and Configure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724927617891/677806e7-41db-4631-9d73-f7d34d15ad13.png" alt class="image--center mx-auto" /></p>
<p>After clicking on 'Validate and Configure,' a template pipeline code is automatically generated. However, we've made several modifications to the pipeline code to better suit our needs. The updated pipeline code is as follows:</p>
<h3 id="heading-pipeline-code">Pipeline Code</h3>
<pre><code class="lang-yaml"><span class="hljs-comment"># Docker</span>
<span class="hljs-comment"># Build and push an image to Azure Container Registry</span>
<span class="hljs-comment"># https://docs.microsoft.com/azure/devops/pipelines/languages/docker</span>

<span class="hljs-attr">trigger:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
<span class="hljs-attr">pool:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>

<span class="hljs-attr">resources:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">repo:</span> <span class="hljs-string">self</span>

<span class="hljs-attr">variables:</span>
  <span class="hljs-comment"># Container registry service connection established during pipeline creation</span>
  <span class="hljs-attr">dockerRegistryServiceConnection:</span> <span class="hljs-string">'XXXXXXXX-XXXX-XXXX-XXXXXX'</span>
  <span class="hljs-attr">imageRepository:</span> <span class="hljs-string">'todoapp'</span>
  <span class="hljs-attr">containerRegistry:</span> <span class="hljs-string">'devopswithritesh.azurecr.io'</span>
  <span class="hljs-attr">dockerfilePath:</span> <span class="hljs-string">'$(Build.SourcesDirectory)/Dockerfile'</span>
  <span class="hljs-attr">tag:</span> <span class="hljs-string">'$(Build.BuildId)'</span>

<span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Build</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">push</span> <span class="hljs-string">stage</span>
  <span class="hljs-attr">jobs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Build</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span>
    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'Pay-As-You-Go(XXXXXXXXX-XXXXXXX)'</span>
        <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'bash'</span>
        <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'inlineScript'</span>
        <span class="hljs-attr">inlineScript:</span> <span class="hljs-string">'az acr login --name=$(containerRegistry)'</span>
    <span class="hljs-comment"># Task to build and push the image to ACR</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Docker@2</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">push</span> <span class="hljs-string">an</span> <span class="hljs-string">image</span> <span class="hljs-string">to</span> <span class="hljs-string">container</span> <span class="hljs-string">registry</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">command:</span> <span class="hljs-string">buildAndPush</span>
        <span class="hljs-attr">repository:</span> <span class="hljs-string">$(imageRepository)</span>
        <span class="hljs-attr">dockerfile:</span> <span class="hljs-string">$(dockerfilePath)</span>
        <span class="hljs-attr">containerRegistry:</span> <span class="hljs-string">$(dockerRegistryServiceConnection)</span>
        <span class="hljs-attr">tags:</span> <span class="hljs-string">|
          $(tag)
</span>
    <span class="hljs-comment">#Task to create Container Instance(ACI)</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'Pay-As-You-Go(XXXXX-XXXXXXXXX-XXXXXX)'</span>
        <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'bash'</span>
        <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'inlineScript'</span>
        <span class="hljs-attr">inlineScript:</span> <span class="hljs-string">|
          az container create \
          --name todo-container \
          --resource-group Containerization_with_Azdo \
          --image $(containerRegistry)/$(imageRepository):$(tag) \
          --registry-login-server $(containerRegistry) \
          --registry-username devopswithritesh  \
          --registry-password XXXXXXXX \
          --dns-name-label aci-devopswithritesh</span>
</code></pre>
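<p>After the pipeline runs, you can confirm the instance from the CLI and retrieve its public endpoint. A sketch using the container and resource group names from this demo:</p>
<pre><code class="lang-shell"># Check provisioning state and fetch the public FQDN of the container group
az container show \
  --name todo-container \
  --resource-group Containerization_with_Azdo \
  --query "{state: provisioningState, fqdn: ipAddress.fqdn}" \
  --output table
</code></pre>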
<h1 id="heading-azure-container-instanceaci">Azure Container Instance(ACI)</h1>
<p>Azure Container Instances (ACI) is a fully managed service provided by Microsoft Azure that allows users to run containers directly on the Azure cloud without the need to manage any underlying virtual machines or orchestrators like Kubernetes. ACI is ideal for scenarios where you need to quickly deploy and manage containers without the overhead of managing infrastructure.</p>
<h3 id="heading-key-features-of-azure-container-instances">Key Features of Azure Container Instances:</h3>
<ol>
<li><p><strong>Simplicity and Speed</strong>: ACI offers a straightforward way to deploy containers. You can start running containers within seconds, making it an excellent choice for tasks that require fast and temporary computing resources.</p>
</li>
<li><p><strong>No Infrastructure Management</strong>: ACI abstracts the underlying infrastructure, allowing you to focus solely on your containers. There's no need to manage or scale virtual machines, patch operating systems, or configure orchestrators.</p>
</li>
<li><p><strong>Scalability</strong>: ACI enables easy scaling of containerized applications. You can adjust the CPU and memory resources allocated to your containers as needed, ensuring that your application can handle varying workloads.</p>
</li>
<li><p><strong>Cost-Effective</strong>: ACI operates on a pay-as-you-go pricing model, meaning you only pay for the compute resources your containers use. This makes it cost-effective, especially for short-lived or bursty workloads.</p>
</li>
<li><p><strong>Seamless Integration with Azure Services</strong>: ACI integrates well with other Azure services, such as Azure Virtual Network, enabling you to deploy containers in a secure and isolated environment. It also integrates with Azure DevOps, allowing for smooth CI/CD pipeline setups.</p>
</li>
<li><p><strong>Event-Driven Containers</strong>: ACI can be used for event-driven scenarios, such as processing tasks from an Azure Event Grid, Azure Service Bus, or Azure Functions, allowing for a dynamic response to changes in your environment.</p>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-comment">#Task to create Container Instance(ACI)</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'Pay-As-You-Go(XXXXX-XXXXXXXXX-XXXXXX)'</span>
        <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'bash'</span>
        <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'inlineScript'</span>
        <span class="hljs-attr">inlineScript:</span> <span class="hljs-string">|
          az container create \
          --name todo-container \
          --resource-group Containerization_with_Azdo \
          --image $(containerRegistry)/$(imageRepository):$(tag) \
          --registry-login-server $(containerRegistry) \
          --registry-username devopswithritesh  \
          --registry-password XXXXXXXX \
          --dns-name-label aci-devopswithritesh</span>
</code></pre>
<p>This task creates an Azure Container Instance (ACI) named <strong>"todo-container"</strong> using the specified Docker image. The container is deployed within the resource group <strong>"Containerization_with_Azdo"</strong>. The application, packaged within the Docker image, runs in the container and is exposed on the port defined in the Dockerfile.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725008408588/66101630-38f6-4e62-8775-eeb65183a4e6.png" alt class="image--center mx-auto" /></p>
<p>Once the container is successfully created, the application will be accessible via the Fully Qualified Domain Name (FQDN) associated with the container. This FQDN can be used to directly access the application from any browser or HTTP client.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725008517530/bc433d89-612b-4dc3-9ede-38cf0679a4ec.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>Containerization with Azure DevOps offers a powerful and streamlined approach to modern application deployment. By leveraging Azure Container Instances (ACI), we can efficiently deploy, manage, and scale containerized applications with ease. The integration of Azure DevOps pipelines automates the entire process, from building and pushing Docker images to creating and configuring container instances. This not only enhances deployment speed and reliability but also ensures that applications are readily accessible via fully qualified domain names (FQDNs). As organizations continue to embrace cloud-native technologies, mastering containerization with Azure DevOps becomes an essential skill for delivering robust, scalable, and efficient applications in today's fast-paced digital landscape.</p>
]]></content:encoded></item><item><title><![CDATA[Azure DevOps and Terraform Integration]]></title><description><![CDATA[Integrating Terraform with Azure DevOps allows organizations to harness the power of Infrastructure as Code (IaC) for streamlined, automated deployments in the cloud. By leveraging Terraform's capabilities within Azure DevOps pipelines, teams can man...]]></description><link>https://www.devopswithritesh.in/azure-devops-and-terraform-integration</link><guid isPermaLink="true">https://www.devopswithritesh.in/azure-devops-and-terraform-integration</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Terraform terraform-cloud Devops Devops articles DevSecOps Cloud AWS GCP Azure #Terraform #AWS #InfrastructureAsCode #Provisioning #Automation #CloudComputing Infrastructure as code terraform-state Technical writing Blogging Infrastructure management]]></category><category><![CDATA[#IaC]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Mon, 19 Aug 2024 18:45:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723401687070/efe0823b-e66d-484a-8608-46ae5a4fde6c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Integrating Terraform with Azure DevOps allows organizations to harness the power of Infrastructure as Code (IaC) for streamlined, automated deployments in the cloud. By leveraging Terraform's capabilities within Azure DevOps pipelines, teams can manage infrastructure efficiently, reduce manual errors, and maintain consistent environments across development, staging, and production. This synergy between Terraform and Azure DevOps enables seamless provisioning, management, and scaling of resources, ensuring that infrastructure changes are deployed with the same rigor and reliability as application code, ultimately driving operational excellence and innovation.</p>
<p>In this article, we will focus on integrating Terraform workflows with the Azure DevOps pipeline to enhance infrastructure automation. If you want to learn Terraform from the basics, please refer to <a target="_blank" href="https://www.devopswithritesh.in/complete-terraform-fundamentals">my blog on Terraform fundamentals</a>.</p>
<p>The Terraform workflow typically follows a series of steps designed to manage infrastructure as code effectively. Here's an overview of the key stages:</p>
<ol>
<li><p><strong>Write</strong>:<br /> In this initial stage, you define your infrastructure using HashiCorp Configuration Language (HCL) in Terraform files. These files describe the resources and configurations needed for your cloud environment, including virtual machines, networks, storage, and more. The infrastructure code is stored in version control systems like Git to ensure collaboration and tracking of changes.</p>
</li>
<li><p><strong>Initialize (terraform init)</strong>:<br /> Before applying your configurations, you need to initialize your Terraform working directory. This step downloads the necessary provider plugins (e.g., for Azure and AWS) and sets up the environment for Terraform to run. This is typically the first command executed in a Terraform workflow.</p>
</li>
<li><p><strong>Plan (terraform plan)</strong>:<br /> The <code>terraform plan</code> command generates an execution plan, detailing the actions Terraform will take to reach the desired state of your infrastructure. It shows what resources will be created, modified, or destroyed without making any actual changes. This step is crucial for reviewing and validating the changes before applying them.</p>
</li>
<li><p><strong>Apply (terraform apply)</strong>:<br /> After reviewing the plan, the <code>terraform apply</code> command is used to execute the changes. Terraform interacts with the cloud provider's API to create, update, or delete resources as defined in your configuration files. The applied changes bring your infrastructure to the desired state.</p>
</li>
<li><p><strong>Manage and Evolve</strong>:<br /> Once the infrastructure is deployed, you can continue to manage and evolve it by modifying the Terraform configurations. Changes are tracked through version control, and the workflow cycles through planning and applying updates. Terraform maintains a state file that records the current state of your infrastructure, enabling it to track changes and ensure consistency.</p>
</li>
<li><p><strong>Destroy (terraform destroy)</strong>:<br /> When resources are no longer needed, the <code>terraform destroy</code> command can be used to tear down the entire infrastructure or specific resources. This command helps clean up and manage costs by removing unused resources.</p>
</li>
</ol>
<p>The Terraform workflow can be automated using continuous integration/continuous deployment (CI/CD) pipelines in platforms like Azure DevOps. This automation ensures that infrastructure changes are consistently and reliably deployed, reducing the potential for human error.</p>
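<p>The init, plan, and apply stages above map naturally onto pipeline steps. Below is a minimal, hypothetical Azure DevOps YAML sketch; it assumes Terraform is already installed on the agent and that authentication is configured via environment variables or a service connection, and in practice you would also configure a remote backend for state:</p>
<pre><code class="lang-yaml">trigger:
- main

pool:
  name: Default

steps:
# Download provider plugins and initialize the working directory
- script: terraform init
  displayName: 'terraform init'

# Produce an execution plan and save it for the apply step
- script: terraform plan -out=tfplan
  displayName: 'terraform plan'

# Apply exactly the reviewed plan, without an interactive prompt
- script: terraform apply -auto-approve tfplan
  displayName: 'terraform apply'
</code></pre>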
<h1 id="heading-azure-cli-local-setup">Azure CLI Local Setup</h1>
<p>Terraform supports several ways of authenticating to Azure:</p>
<ul>
<li><p><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/azure_cli">Authenticating to Azure using the Azure CLI</a></p>
</li>
<li><p><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/managed_service_identity">Authenticating to Azure using Managed Service Identity</a></p>
</li>
<li><p><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_certificate">Authenticating to Azure using a Service Principal and a Client Certificate</a></p>
</li>
<li><p><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret">Authenticating to Azure using a Service Principal and a Client Secret</a></p>
</li>
<li><p><a target="_blank" href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_oidc">Authenticating to Azure using OpenID Connect</a></p>
</li>
</ul>
<p>Our demo will use <a target="_blank" href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret">Authenticating to Azure using a Service Principal and a Client Secret</a>. You can find more information about the authentication process in the <a target="_blank" href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs">azurerm Terraform documentation</a>.</p>
<p>Integrating Terraform with Azure CLI is crucial because it simplifies and secures the process of managing Azure resources. By using Azure CLI for authentication, Terraform can seamlessly <strong><em>interact with your Azure environment</em></strong> without needing to manage separate credentials, reducing the risk of exposure. This integration enables Terraform to leverage existing Azure CLI configurations, such as active subscriptions and managed identities, streamlining the deployment process and ensuring consistent, secure access to Azure resources.</p>
<h2 id="heading-1-install-azure-cli">1. <strong>Install Azure CLI</strong></h2>
<ul>
<li><p>Ensure that Azure CLI is installed on your machine. You can install it by following <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli">Azure CLI Installation Guide</a>.</p>
</li>
<li><p>Verify the installation by running:</p>
<pre><code class="lang-bash">  az --version
</code></pre>
</li>
</ul>
<h2 id="heading-2-install-terraform">2. <strong>Install Terraform</strong></h2>
<ul>
<li><p>Install Terraform on your machine. You can download it from the official Terraform website.</p>
</li>
<li><p>Verify the installation by running:</p>
<pre><code class="lang-bash">  terraform --version
</code></pre>
</li>
</ul>
<h2 id="heading-3-authenticate-azure-cli">3. <strong>Authenticate Azure CLI</strong></h2>
<ul>
<li><p>Log in to Azure using Azure CLI:</p>
<pre><code class="lang-bash">az login
</code></pre>
<p>  After executing the command, a new browser window will open, directing you to the Azure sign-in page. On the sign-in page, select the Azure account you want to use. If you have multiple accounts, you can choose the appropriate one.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723973666050/fd3e7312-a76f-4ef8-8854-53cb9302a353.png" alt class="image--center mx-auto" /></p>
<p>  Once you are logged in successfully, your account details are displayed in the local CLI, as shown below.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723973892016/27c6ba10-8403-4344-b248-0e6ff7e914e5.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>If you work with multiple subscriptions, set the active subscription:</p>
<pre><code class="lang-bash">az account set --subscription "your-subscription-id"
</code></pre>
</li>
</ul>
<h2 id="heading-4-create-service-principal">4. <strong>Create a Service Principal</strong></h2>
<p>We can now create a service principal that has permission to manage resources in the specified subscription, using the following command:</p>
<pre><code class="lang-bash">az ad sp create-for-rbac --role=<span class="hljs-string">"Contributor"</span> --scopes=<span class="hljs-string">"/subscriptions/0000000-9342-4b5d-bf6b-5456d8fa879d"</span>
</code></pre>
<p>The command returns output containing the following four values:</p>
<pre><code class="lang-json">{
  "appId": "0000000-20be-41c8-bad0-60299ed476ae",
  "displayName": "azure-cli-2024-08-18-10-13-02",
  "password": "XXXXXXXXXXXXX",
  "tenant": "bbb-xxxx-zzzzz"
}
</code></pre>
<p>These values map to the azurerm provider arguments as follows:</p>
<ul>
<li><p><code>appId</code> maps to <code>client_id</code>.</p>
</li>
<li><p><code>password</code> maps to <code>client_secret</code>.</p>
</li>
<li><p><code>tenant</code> maps to <code>tenant_id</code>.</p>
</li>
</ul>
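<p>Putting this mapping together, the credentials can be wired into the provider through input variables rather than hard-coded values. A minimal sketch (the variable names here are illustrative, not part of the original configuration):</p>
<pre><code class="lang-hcl"># Sketch: map the service principal output onto the azurerm provider
provider "azurerm" {
  features {}
  client_id       = var.client_id       # appId from the sp output
  client_secret   = var.client_secret   # password from the sp output
  tenant_id       = var.tenant_id       # tenant from the sp output
  subscription_id = var.subscription_id # subscription used in --scopes
}
</code></pre>
<p>In practice the provider can also read these values directly from the <code>ARM_*</code> environment variables, in which case the arguments can be omitted entirely.</p>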
<h2 id="heading-5-login-using-service-principle">5. <strong>Log in Using the Service Principal</strong></h2>
<p>Now that the service principal has the <strong>Contributor role</strong> assigned, we need to log in again using it:</p>
<pre><code class="lang-bash">az login --service-principal -u CLIENT_ID -p CLIENT_SECRET --tenant TENANT_ID
</code></pre>
<p>Once you execute the above command with the appropriate values, you will see output like the one below, confirming that you are logged in via the service principal.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723976933939/995de9f3-a219-4d70-bf48-d2fc89d5bfae.png" alt class="image--center mx-auto" /></p>
<p>With the service principal credentials, Terraform can now communicate with Azure to create, modify, and destroy infrastructure as defined in your Terraform scripts.</p>
<h2 id="heading-6-configuring-the-service-principal-in-terraform"><strong>6. Configuring the Service Principal in Terraform</strong></h2>
<p>Now that we've obtained the credentials for this service principal, they can be configured in a few different ways.</p>
<p>For example, the credentials can be stored as environment variables:</p>
<pre><code class="lang-shell"># sh
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="12345678-0000-0000-0000-000000000000"
export ARM_TENANT_ID="10000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="20000000-0000-0000-0000-000000000000"
</code></pre>
<p>These values can also be hard-coded under the provider block in the Terraform manifest. However, it is not recommended to hard-code secret credentials as plain text. Instead, we can use environment variables as mentioned above or store them in Azure Key Vault.</p>
<h3 id="heading-outcome"><strong>Outcome</strong></h3>
<p>From now on, whenever you run <code>terraform apply</code> or <code>terraform destroy</code>, Terraform will authenticate with Azure using this service principal. This setup ensures secure and automated infrastructure management, aligned with best practices for cloud authentication.</p>
<h1 id="heading-create-a-terraform-configuration"><strong>Create a Terraform Configuration</strong></h1>
<h2 id="heading-providertf">provider.tf</h2>
<p>Start by defining your infrastructure in a <code>.tf</code> file:</p>
<ul>
<li><pre><code class="lang-hcl"># We strongly recommend using the required_providers block to set the
# Azure Provider source and version being used
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  # Credentials such as tenant_id, subscription_id, client_id, and
  # client_secret can be hard-coded here. However, hard-coding secret
  # credentials as plain text is not recommended; use environment
  # variables or Azure Key Vault instead.
  features {}
}
</code></pre>
</li>
<li><p>Initialize Terraform:</p>
<pre><code class="lang-bash">terraform init
</code></pre>
</li>
<li><p>Apply the configuration:</p>
<pre><code class="lang-bash">terraform apply
</code></pre>
</li>
</ul>
<h2 id="heading-best-practices"><strong>Best Practices</strong></h2>
<ul>
<li><p><strong>Use Managed Identity</strong>: If running Terraform from within Azure (e.g., Azure DevOps), consider using a Managed Identity to handle authentication automatically without needing service principals.</p>
</li>
<li><p><strong>State Management</strong>: Use remote state management (e.g., Azure Storage) to securely store your Terraform state files.</p>
</li>
<li><p><strong>Environment Variables</strong>: Terraform can also use Azure credentials stored in environment variables (<code>ARM_CLIENT_ID</code>, <code>ARM_CLIENT_SECRET</code>, <code>ARM_TENANT_ID</code>, <code>ARM_SUBSCRIPTION_ID</code>), which are useful in CI/CD pipelines.</p>
</li>
</ul>
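<p>The remote-state recommendation above can be expressed as a <code>backend</code> block in your configuration. A sketch that reuses the same storage account names as the pipeline's init task later in this article (adjust the names to your own resource group and storage account):</p>
<pre><code class="lang-hcl">terraform {
  backend "azurerm" {
    resource_group_name  = "storage-rg"        # resource group holding the storage account
    storage_account_name = "storetfaccritesh"  # globally unique storage account name
    container_name       = "statefilestore"    # blob container for state files
    key                  = "prod.terraform.tfstate"
  }
}
</code></pre>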
<p><strong><em>NB: You can access the terraform manifests created as part of this article in the below repository</em></strong></p>
<p>Click here to access <a target="_blank" href="https://github.com/ritesh-kumar-nayak/azuredevops-integration-terraform">Terraform Manifests</a></p>
<h1 id="heading-azure-devops-integration">Azure DevOps Integration</h1>
<p>So far, all steps in our infrastructure provisioning process have been performed locally using the Azure CLI and a local Terraform installation. While this approach is effective for initial testing and development, fully automating the process is crucial for consistent, repeatable, and scalable infrastructure deployments.</p>
<p>To achieve full automation, we will leverage <strong>Azure DevOps</strong> pipelines to handle all Terraform operations, including initialization, planning, application, and destruction of infrastructure. This approach ensures that infrastructure provisioning is integrated into our CI/CD processes, providing version control, automated testing, and streamlined deployment.</p>
<h2 id="heading-project-creation-and-setup">Project Creation and setup</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724064357913/d46cc52e-0d79-4b44-adbe-470f1c2c0a95.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-integrating-github-with-azure-devops-for-pipeline-automation"><strong>Integrating GitHub with Azure DevOps for Pipeline Automation</strong></h3>
<p>To streamline our infrastructure provisioning process, we’ve created a new Azure DevOps project. In this project, we will integrate our Terraform code hosted on GitHub and proceed with creating an automated pipeline.</p>
<h3 id="heading-steps-to-integrate-github-and-set-up-the-pipeline"><strong>Steps to Integrate GitHub and Set Up the Pipeline:</strong></h3>
<ol>
<li><p><strong>Access Project Settings:</strong></p>
<ul>
<li><p>Navigate to the newly created Azure DevOps project.</p>
</li>
<li><p>In the project, click on <strong>Project settings</strong> located in the bottom-left corner of the Azure DevOps interface.</p>
</li>
</ul>
</li>
<li><p><strong>Configure GitHub Connection:</strong></p>
<ul>
<li><p>Under <strong>Pipelines</strong>, select <strong>GitHub connections</strong>.</p>
</li>
<li><p>You should see your GitHub repository already listed since it's been added beforehand.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724064622695/b548d5dd-cf4d-4161-a3c7-10432aeaf55a.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Connect to the GitHub Repository:</strong></p>
<ul>
<li><p>If the repository is not already connected, click <strong>Add Connection</strong>, then select your GitHub repository from the list.</p>
</li>
<li><p>Authenticate with GitHub if prompted, and grant Azure DevOps the necessary permissions to access your repository.</p>
</li>
</ul>
</li>
<li><p><strong>Set Up the Pipeline:</strong></p>
<ul>
<li><p>Now that your GitHub repository is connected, go back to the <strong>Pipelines</strong> section in Azure DevOps.</p>
</li>
<li><p>Click on <strong>Pipelines</strong> &gt; <strong>New pipeline</strong>.</p>
</li>
<li><p>Select <strong>GitHub</strong> as the repository source.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724064799505/22398d65-83f8-4be3-8e86-12c905933865.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Choose the appropriate repository where your Terraform code is stored.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724064729232/291673cb-7949-4825-b658-8af50a93bd95.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Follow the prompts to set up your pipeline, starting with a basic YAML pipeline or importing an existing one.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724064897450/52c1d13b-7a2a-4536-a1c5-b4812dd637c6.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Configure the Pipeline for Terraform Operations:</strong></p>
<ul>
<li><p>Install the Terraform Extension</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724065137197/98b98e8e-9e05-4327-9525-5c86f9a6401c.png" alt class="image--center mx-auto" /></p>
<p>  In <strong>Organization settings</strong>, click on <strong>Extensions</strong> and browse the marketplace. Search for Terraform and install the two extensions below, following the prompts:</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724065268676/a06accd4-d1b0-484b-b21a-2a421fc929c6.png" alt /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724065416451/3d7821ff-49a0-43a4-b60f-b7609a26e295.png" alt class="image--center mx-auto" /></p>
<p>  Once installed, you will see the assistance below while writing the pipeline code:</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724065622811/821d5c05-e28e-40a7-9f96-ccc597008aff.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>In the pipeline configuration, define the steps for Terraform initialization, planning, and applying.</p>
</li>
<li><p>Ensure the pipeline includes the correct service connections and variables required for Terraform to authenticate with Azure and manage resources.</p>
</li>
</ul>
</li>
</ol>
<h1 id="heading-pipeline-creation">Pipeline Creation</h1>
<h2 id="heading-build-pipeline">Build Pipeline</h2>
<p>In our Terraform build pipeline, the primary objective is to automate the essential steps of infrastructure provisioning, ensuring consistency, compliance, and repeatability. The pipeline carries out the key Terraform operations, culminating in the creation and storage of the Terraform plan file as a build artifact. This artifact is then utilized in the release pipeline for the subsequent stages of infrastructure deployment.</p>
<h3 id="heading-pipeline-stages"><strong>Pipeline Stages</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724067092037/e9663d15-74b3-41e0-841f-01d15b58467d.png" alt /></p>
<ol>
<li><p><strong>Terraform Initialization (</strong><code>terraform init</code>):</p>
<ul>
<li><p>The first stage in our pipeline is initializing Terraform. This step configures the backend and prepares the environment for Terraform operations. It ensures that Terraform has the necessary plugins and access to the remote state file.</p>
<pre><code class="lang-yaml">trigger:
  - main

pool:
  name: Default

stages:
  - stage: Terraform
    jobs:
      - job: Build
        steps:
          - task: TerraformTaskV4@4
            displayName: Terraform Init
            inputs:
              provider: 'azurerm'
              command: 'init'
              backendServiceArm: 'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'
              backendAzureRmResourceGroupName: 'storage-rg'
              backendAzureRmStorageAccountName: 'storetfaccritesh'
              backendAzureRmContainerName: 'statefilestore'
              backendAzureRmKey: 'prod.terraform.tfstate'
</li>
</ul>
</li>
<li><p><strong>Terraform Validation (</strong><code>terraform validate</code>):</p>
<ul>
<li><p>The pipeline then validates the Terraform configuration files to ensure they are syntactically correct and consistent with the defined standards. This step is crucial to catch any errors before they propagate further in the pipeline.</p>
<pre><code class="lang-yaml">- task: TerraformTaskV4@4
  displayName: Terraform Validate
  inputs:
    provider: 'azurerm'
    command: 'validate'
</code></pre>
</li>
</ul>
</li>
<li><p><strong>Terraform Formatting (</strong><code>terraform fmt</code>):</p>
<ul>
<li><p>Formatting is an important practice to maintain a consistent code style across the team. This step automatically formats the Terraform configuration files according to the standard convention, improving readability and collaboration.</p>
<pre><code class="lang-yaml">- task: TerraformTaskV4@4
  displayName: Terraform Format
  inputs:
    provider: 'azurerm'
    command: 'custom'
    outputTo: 'console'
    customCommand: 'fmt'
    environmentServiceNameAzureRM: 'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'
</code></pre>
</li>
</ul>
</li>
<li><p><strong>Terraform Plan (</strong><code>terraform plan</code>):</p>
<ul>
<li><p>This stage generates an execution plan, outlining the changes Terraform will make to the infrastructure. The plan is saved to a file, providing a preview of the modifications before any resources are applied.</p>
<pre><code class="lang-yaml">- task: TerraformTaskV4@4
  displayName: Terraform Plan
  inputs:
    provider: 'azurerm'
    command: 'plan'
    commandOptions: '-out $(Build.SourcesDirectory)/tfplanfile' # save the plan to a file named tfplanfile
    environmentServiceNameAzureRM: 'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'
</code></pre>
</li>
</ul>
</li>
<li><p><strong>Archiving the Plan File:</strong></p>
<ul>
<li><p>Once the Terraform plan is created, the pipeline archives the working directory, including the generated plan file (<code>tfplanfile</code>), which captures the changes to be applied. This archive hands the planned changes over to the release pipeline and is stored as an artifact.</p>
<pre><code class="lang-yaml">- task: Bash@3
  displayName: Install zip utility
  inputs:
    targetType: 'inline'
    script: 'sudo apt-get update &amp;&amp; sudo apt-get install -y zip'
- task: ArchiveFiles@2
  displayName: Archive Files
  inputs:
    rootFolderOrFile: '$(Build.SourcesDirectory)'
    includeRootFolder: true
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    replaceExistingArchive: true
</code></pre>
</li>
</ul>
</li>
<li><p><strong>Publishing the Build Artifact:</strong></p>
<ul>
<li><p>Finally, the archive is published as a build artifact, making it available to the release pipeline. This ensures that the release pipeline deploys exactly the plan that was generated and reviewed in the build.</p>
<pre><code class="lang-yaml">- task: PublishBuildArtifacts@1
  displayName: Publish Artifact
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: '$(Build.BuildId)-build'
    publishLocation: 'Container'
</code></pre>
</li>
</ul>
</li>
</ol>
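<p>For orientation, the stages above condense into the following skeleton of the build pipeline (a sketch only; the full inputs for each task are listed in the individual steps above):</p>
<pre><code class="lang-yaml">trigger:
  - main
pool:
  name: Default
stages:
  - stage: Terraform
    jobs:
      - job: Build
        steps:
          - task: TerraformTaskV4@4       # init against the Azure Storage backend
          - task: TerraformTaskV4@4       # validate
          - task: TerraformTaskV4@4       # fmt (custom command)
          - task: TerraformTaskV4@4       # plan -out tfplanfile
          - task: Bash@3                  # install zip utility
          - task: ArchiveFiles@2          # archive sources + plan file
          - task: PublishBuildArtifacts@1 # publish the build artifact
</code></pre>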
<h3 id="heading-end-goal"><strong>End Goal</strong></h3>
<p>The build pipeline culminates in the creation and preservation of the Terraform plan file, a critical input for managing infrastructure as code (IaC) with Terraform. By archiving and publishing this plan as an artifact, we enable a seamless transition to the release pipeline, where the actual deployment and management of cloud resources will take place.</p>
<p>This approach not only ensures a structured and automated workflow for infrastructure provisioning but also enhances collaboration and reliability by maintaining the integrity of the Terraform state throughout the CI/CD process.</p>
<h2 id="heading-release-pipeline">Release Pipeline</h2>
<p>The release pipeline takes the output generated by the build pipeline, specifically the Terraform plan file, and proceeds to the deployment phase. This process ensures that the infrastructure changes are thoroughly reviewed and approved before being applied to the environment.</p>
<h3 id="heading-pipeline-stages-1"><strong>Pipeline Stages:</strong></h3>
<ol>
<li><p><strong>Fetch Artifact from Build Pipeline:</strong></p>
<ul>
<li><p>The release pipeline begins by retrieving the artifact generated in the build pipeline. This artifact contains the Terraform configuration and the plan file, which captures the desired state of the infrastructure. In the screenshot below, you can see that the build has been configured along with a continuous release trigger.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724086060020/091f78bd-fce2-4506-b0c9-854747984793.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724086508876/541bac43-6d74-428f-996f-1a6c2d605aeb.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Agent Configuration</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724086651560/cbb4c901-d049-49a4-817e-f730171b52eb.png" alt class="image--center mx-auto" /></p>
<p> This task acquires our self-hosted agent, on which the subsequent Terraform operations will execute.</p>
</li>
<li><p><strong>Unarchiving Build Artifact</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724087257105/c68edbc5-ac09-4373-86d6-dd092b899a43.png" alt class="image--center mx-auto" /></p>
<p> At this stage, we are unarchiving the zipped artifacts downloaded from the build pipeline.</p>
</li>
<li><p><strong>Terraform Init</strong></p>
<p> The <code>Terraform Init</code> stage in the release pipeline is crucial for ensuring that Terraform is properly configured and ready to apply the infrastructure changes. This stage is necessary because the release pipeline may be executed on a different machine, container, or pod than the build pipeline. In such cases, Terraform needs to be initialized in the new environment before it can apply the generated artifact.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724087558905/2f759d86-2975-4756-8d1f-62aa9ee8ba6d.png" alt class="image--center mx-auto" /></p>
<p> The rest of the configuration will be the same as the build pipeline.</p>
</li>
<li><p><strong>Terraform Apply:</strong></p>
<ul>
<li><p>After the artifact is fetched, the pipeline runs the <code>terraform apply</code> command using the downloaded plan. This step executes the planned changes, provisioning or updating the infrastructure according to the specifications defined in the Terraform code.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724087873982/b6691b61-98b4-46d5-ba46-c15af7c8449a.png" alt class="image--center mx-auto" /></p>
<p>  After the successful initialization of Terraform, the next crucial step in the release pipeline is to apply the infrastructure changes. Since the Terraform plan file (<code>tfplanfile</code>) has already been downloaded as an artifact from the build pipeline, we can proceed directly to the <code>apply</code> task, which applies the changes defined in the plan to the target environment. To streamline the process, we include the <code>--auto-approve</code> flag in the apply command.</p>
</li>
</ul>
</li>
<li><p><strong>Post-Approval Execution:</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724089047332/05fd10f3-0e84-494c-8571-900d0b59fc54.png" alt class="image--center mx-auto" /></p>
<p> Before applying the changes, the pipeline includes an approval gate. This requires a manual review and approval from the designated stakeholders, ensuring that the infrastructure changes are scrutinized and verified before execution. Once the approval is granted, the pipeline proceeds with the application of the Terraform plan.</p>
</li>
</ol>
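<p>Expressed as a task definition, the apply step configured in the screenshots above looks roughly like this (a sketch; the service connection name is the one used in the build pipeline):</p>
<pre><code class="lang-yaml">- task: TerraformTaskV4@4
  displayName: Terraform Apply
  inputs:
    provider: 'azurerm'
    command: 'apply'
    commandOptions: '--auto-approve'
    environmentServiceNameAzureRM: 'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'
</code></pre>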
<h3 id="heading-outcome-1"><strong>Outcome:</strong></h3>
<p>The release pipeline ensures a controlled and automated deployment process. By fetching the artifact from the build pipeline and running the <code>terraform apply</code> post-approval, we maintain a high level of governance and security over infrastructure changes. This setup allows for a smooth and reliable transition from planning to execution, adhering to best practices in continuous deployment and infrastructure as code (IaC).</p>
<h3 id="heading-destroy-stage">Destroy Stage</h3>
<p>After the infrastructure has been successfully deployed, there may be scenarios where we need to clean up the resources or recreate them from scratch. To facilitate this, we’re incorporating a <code>Terraform Destroy</code> stage in the release pipeline. This stage ensures that any unnecessary or outdated infrastructure can be safely and efficiently decommissioned.</p>
<h3 id="heading-purpose-of-the-destroy-stage"><strong>Purpose of the Destroy Stage:</strong></h3>
<ul>
<li><p><strong>Resource Cleanup:</strong> The <code>destroy</code> stage is crucial for cleaning up resources that are no longer needed, helping to minimize costs and maintain a clean environment.</p>
</li>
<li><p><strong>Infrastructure Rebuild:</strong> In cases where you need to recreate the infrastructure, the destroy stage allows for a complete teardown before the infrastructure is rebuilt, ensuring no residual configurations or resources are left behind.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724088471593/47e4da01-e8a2-4abe-a40f-d2e88d48687e.png" alt class="image--center mx-auto" /></p>
<p>All the stages remain the same as in the deployment stage, but the apply task is replaced by a destroy task, as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724089944710/d559936c-206d-467a-8f53-a463ebcee25c.png" alt class="image--center mx-auto" /></p>
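<p>As a sketch, the destroy task mirrors the apply task with only the command changed (the other inputs are assumed to be identical):</p>
<pre><code class="lang-yaml">- task: TerraformTaskV4@4
  displayName: Terraform Destroy
  inputs:
    provider: 'azurerm'
    command: 'destroy'
    commandOptions: '--auto-approve'
    environmentServiceNameAzureRM: 'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'
</code></pre>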
<h3 id="heading-pre-destroy-approval">Pre-Destroy Approval</h3>
<p>An approval stage has been added before destroying the infrastructure so that the approver can review which resources will be impacted as part of this destruction.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724088869754/e3b829b1-0cbf-4f9f-a731-8a748040e5a4.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-end-to-end-execution">End-to-End Execution</h1>
<p>I have triggered an end-to-end run that builds the latest artifact, publishes it to the release pipeline, and then applies the infrastructure based on the generated plan artifact. Once approved, the same infrastructure can also be destroyed.</p>
<ul>
<li>After triggering, the plan stage completed.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724090201230/d9469f04-91bf-43ce-b50c-5cc072bde755.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Post completion, a release has been triggered and is waiting for approval.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724090287171/9f87bef0-dde4-47aa-86c9-c61f4763a041.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Post approval, the deployment started.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724090353352/706c46da-4916-4712-af94-6ea5b697373b.png" alt class="image--center mx-auto" /></p>
<ul>
<li>With the Terraform apply task complete, the infrastructure has now been deployed.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724091548528/0b368ba3-90ca-4149-8a96-06fe8b8ed253.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724091593740/983f56ae-dfc0-4f8a-850f-899b63ccace1.png" alt class="image--center mx-auto" /></p>
<ul>
<li>You can see that 5 resources have been added, which are displayed in the Azure Portal.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724091815405/4c829a2d-09e7-4c1d-b47d-1e0154e19562.png" alt class="image--center mx-auto" /></p>
<ul>
<li>After completion, it is now awaiting approval for destruction.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724091694078/18600961-9c07-4131-a5d3-b939000ab7d2.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Once destruction is approved, the pipeline proceeds with it.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724091758178/146be488-46b3-4329-9818-b46742a55677.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Below, you can see that the destruction has started.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724091890330/16e409bc-db48-4eee-b202-edb3dc7ca586.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724092140584/15663049-8b9b-4353-bf8c-ed8baabf37a7.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>And finally, after 3.37 minutes, the destruction was completed successfully. This marks the successful completion of our fully automated, end-to-end infrastructure management process, integrated seamlessly with Azure DevOps.</p>
<p>Throughout this journey, we’ve demonstrated how to build, publish, and apply infrastructure artifacts using Terraform within an Azure DevOps pipeline. From initializing Terraform in various environments to handling complex tasks such as artifact archiving, approval workflows, and automated cleanup, we've covered every step necessary to manage infrastructure efficiently and reliably. This process not only ensures that infrastructure changes are executed in a controlled and repeatable manner but also empowers teams to maintain agility and scalability in their cloud environments.</p>
<p>The ability to automate everything from provisioning to destruction, all within a unified pipeline, underscores the power of combining Terraform's infrastructure as code capabilities with the robust CI/CD features of Azure DevOps. This integration enables us to manage our cloud resources with precision, ensuring that deployments are consistent, auditable, and aligned with best practices.</p>
<h1 id="heading-repository">Repository</h1>
<p><strong>Explore the code</strong> and configurations used in this setup on GitHub: <a target="_blank" href="https://github.com/ritesh-kumar-nayak/azuredevops-integration-terraform">Azure DevOps and Terraform Integration</a></p>
]]></content:encoded></item><item><title><![CDATA[Azure DevOps-Release Pipeline | Project]]></title><description><![CDATA[Azure DevOps offers a complete set of tools for managing the entire software development lifecycle. Two key parts of this lifecycle in Azure DevOps are Build Pipelines and Release Pipelines. We have already discussed the Build Pipeline here. In this ...]]></description><link>https://www.devopswithritesh.in/azure-devops-release-pipeline</link><guid isPermaLink="true">https://www.devopswithritesh.in/azure-devops-release-pipeline</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[Devops]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Azure]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[Azure Pipelines]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Wed, 07 Aug 2024 18:32:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1722743642010/53001c56-a410-4103-a7b7-bbcf30ab9cba.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Azure DevOps offers a complete set of tools for managing the entire software development lifecycle. Two key parts of this lifecycle in Azure DevOps are Build Pipelines and Release Pipelines. We have already discussed the <a target="_blank" href="https://www.devopswithritesh.in/azure-devops-with-project-build-pipeline">Build Pipeline here</a>. In this article, we'll be discussing the Azure-DevOps Release Pipeline.</p>
<h1 id="heading-content">Content</h1>
<ul>
<li><p>Difference between Build and Release pipelines</p>
</li>
<li><p>Automating Deployments Using Multi-Stage Release Pipeline</p>
</li>
<li><p>Creating a Release Pipeline</p>
</li>
<li><p>Continuous Deployment Triggers</p>
</li>
<li><p>Pre-Deployment Conditions</p>
</li>
<li><p>Deployment Slots</p>
</li>
<li><p>Integrate Deployment Slots with Stages</p>
</li>
<li><p>Pre-Deployment Approval</p>
</li>
<li><p>Reconfigure Build Pipeline with the Release Pipeline</p>
</li>
<li><p>Blue-Green Deployment Using Swap in Azure App Service</p>
</li>
</ul>
<h1 id="heading-build-pipeline-vs-release-pipeline">Build Pipeline vs Release Pipeline</h1>
<p>In Azure DevOps, the Release Pipeline is seldom utilized due to its lack of flexibility in creating pipelines as code. The Release Pipeline primarily relies on the <strong>classic editor</strong> for pipeline creation, which does not align with the modern practices of Infrastructure as Code (IaC). Consequently, most organizations prefer to manage both Continuous Integration (CI) and Continuous Deployment (CD) using the Build Pipeline, leveraging YAML to define and version their pipelines. This approach provides greater control, consistency, and the ability to integrate with source control systems, enhancing the overall DevOps workflow.</p>
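<p>For comparison, the YAML approach mentioned above keeps both CI and CD in a single versioned file. A minimal sketch (the stage, job, and environment names here are illustrative, not from a specific project):</p>
<pre><code class="lang-yaml"># Sketch: a multi-stage YAML pipeline covering both CI (Build) and CD (Deploy),
# versioned alongside the application code instead of built in the classic editor.
trigger:
  - main

stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - script: echo "compile, test, and publish artifacts here"

  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: DeployWeb
        environment: 'production'   # environments support approvals and checks
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy the published artifact here"
</code></pre>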
<h3 id="heading-build-pipeline">Build Pipeline</h3>
<ol>
<li><p><strong>Purpose</strong>: The primary purpose of a build pipeline is to compile source code, run tests, and produce build artifacts. This is the "build" part of Continuous Integration (CI).</p>
</li>
<li><p><strong>Process</strong>:</p>
<ul>
<li><p><strong>Compilation</strong>: Converts source code into executable code.</p>
</li>
<li><p><strong>Testing</strong>: Runs unit tests to ensure the code is functional and meets quality standards.</p>
</li>
<li><p><strong>Artifact Creation</strong>: Generates and builds artifacts (e.g., binaries, libraries, packages) that can be used in subsequent stages.</p>
</li>
</ul>
</li>
<li><p><strong>Triggers</strong>: Build pipelines are typically triggered by code changes (commits or pull requests) in the source repository.</p>
</li>
<li><p><strong>Output</strong>: The primary output is a set of build artifacts that are stored in a specified location (e.g., Azure Artifacts, a file share).</p>
</li>
</ol>
<h3 id="heading-release-pipeline">Release Pipeline</h3>
<ol>
<li><p><strong>Purpose</strong>: The main objective of a release pipeline is to deploy the build artifacts to various environments (e.g., development, staging, production). This is the "release" part of Continuous Deployment (CD).</p>
</li>
<li><p><strong>Process</strong>:</p>
<ul>
<li><p><strong>Artifact Retrieval</strong>: Retrieves build artifacts from the build pipeline or artifact repository.</p>
</li>
<li><p><strong>Deployment</strong>: Deploys the artifacts to different environments. This may include running scripts, configuring infrastructure, and installing software.</p>
</li>
<li><p><strong>Testing</strong>: This may include additional tests such as integration, performance, or user acceptance tests.</p>
</li>
</ul>
</li>
<li><p><strong>Triggers</strong>: Release pipelines can be triggered manually, on a schedule, or automatically based on the completion of a build pipeline or other criteria.</p>
</li>
<li><p><strong>Output</strong>: The primary output is a deployed application or service in the target environment.</p>
</li>
</ol>
<h3 id="heading-key-differences">Key Differences</h3>
<ol>
<li><p><strong>Focus</strong>:</p>
<ul>
<li><p><strong>Build Pipeline</strong>: Focuses on building and validating code.</p>
</li>
<li><p><strong>Release Pipeline</strong>: Focuses on deploying and verifying applications.</p>
</li>
</ul>
</li>
<li><p><strong>Artifacts</strong>:</p>
<ul>
<li><p><strong>Build Pipeline</strong>: Produces build artifacts.</p>
</li>
<li><p><strong>Release Pipeline</strong>: Consumes build artifacts and deploys them.</p>
</li>
</ul>
</li>
<li><p><strong>Environments</strong>:</p>
<ul>
<li><p><strong>Build Pipeline</strong>: Typically runs in a single, controlled environment (e.g., build server).</p>
</li>
<li><p><strong>Release Pipeline</strong>: Can deploy to multiple environments (e.g., development, staging, production).</p>
</li>
</ul>
</li>
<li><p><strong>Stages</strong>:</p>
<ul>
<li><p><strong>Build Pipeline</strong>: Generally has fewer stages (e.g., compile, test).</p>
</li>
<li><p><strong>Release Pipeline</strong>: This can have multiple stages corresponding to different deployment environments.</p>
</li>
</ul>
</li>
<li><p><strong>Automation</strong>:</p>
<ul>
<li><p><strong>Build Pipeline</strong>: Often fully automated and runs frequently with code changes.</p>
</li>
<li><p><strong>Release Pipeline</strong>: This can be automated but may require manual approval steps, especially for production deployments.</p>
</li>
</ul>
</li>
<li><p><strong>Tools</strong>:</p>
<ul>
<li><p><strong>Build Pipeline</strong>: Uses tools and tasks related to building and testing code.</p>
</li>
<li><p><strong>Release Pipeline</strong>: Uses tools and tasks related to deployment and configuration management.</p>
</li>
</ul>
</li>
</ol>
<h1 id="heading-creating-a-release-pipeline">Creating a Release Pipeline</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723035241929/addc5a7f-2c69-4a3b-88a7-727574092f0f.png" alt class="image--center mx-auto" /></p>
<p>Go to your project inside the organization, and under the <strong>Pipelines</strong> section click on <strong>Releases</strong>, then click on New Pipeline, which will bring you to the classic editor page shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723035905334/8820e07b-8ac0-4d77-9259-15c8b25778f1.png" alt class="image--center mx-auto" /></p>
<p>Just like the build pipeline, we can find a variety of templates that we can use directly. For this demo, I'll use the Azure App Service deployment template. In my current organization, however, we deploy to a Kubernetes cluster, and a ready-made Deploy to a Kubernetes Cluster template is available for that as well.</p>
<p>You can apply the template and then modify it as needed, as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723036186293/dd73fcd2-0a3e-4f72-853a-9950229007cc.png" alt class="image--center mx-auto" /></p>
<p>Once a stage is added, you can see the artifact section</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723036300424/acb2241b-0b31-495a-9b07-217f3b8ae705.png" alt class="image--center mx-auto" /></p>
<p>The artifact section allows us to add the build artifact from the upstream system, artifact repositories, or directly from the build pipeline. When you click on "Add artifact," it will give you several options to choose from where you want to pull the artifacts.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723036689572/07450b31-3358-4b64-88a4-697d38ccee3c.png" alt class="image--center mx-auto" /></p>
<p>Here, we'll proceed with <strong>Build</strong>. We have our build pipeline ready from the <a target="_blank" href="https://www.devopswithritesh.in/azure-devops-with-project-build-pipeline">previous demo</a>. Then, you can choose the project details, source, default version, etc., to configure. By default, we'll always take the latest build artifact.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723037443175/ab60b0d3-ba10-4d32-8b51-b7ed1bc982db.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-continuous-deployment-triggers"><strong>Continuous Deployment Triggers</strong></h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723037656937/23fd673e-115c-410c-8fdb-476ec97234c6.png" alt class="image--center mx-auto" /></p>
<p>Once the artifact is set, the highlighted lightning icon represents the continuous deployment trigger. Here, we can configure how we want the deployment pipeline to be triggered. By default, it is disabled.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723038306648/190c6eb9-fe46-44a0-b75d-4b7349b79dcb.png" alt class="image--center mx-auto" /></p>
<p>Once enabled, you can set the filters that determine when the deployment pipeline is triggered.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723038486148/7abd867b-ff98-4ecc-91fa-dbe178087bbf.png" alt class="image--center mx-auto" /></p>
<p>There are two options: <strong><em>Branch filter</em></strong> and <strong><em>Default branch filter</em></strong>. The <strong>branch filter</strong> allows you to select multiple branches, and when changes are made to those branches, the deployment will be triggered. You can <strong>Include and Exclude</strong> multiple branches.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723038752887/b83a88ce-ae37-4abb-a862-27f6fc90d456.png" alt class="image--center mx-auto" /></p>
<p>However, we'll choose the <strong>Build Pipeline's default branch</strong> for now.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723038874631/37f6937b-1bcb-4456-b764-6a497f8341cd.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-pull-request-trigger">Pull Request Trigger</h3>
<p>You can also choose the <strong>Pull Request Trigger</strong>, which allows a release to be triggered when a build from a pull request completes successfully.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723039195090/88252076-0bf6-4a63-9dd2-c409be320bc8.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-scheduled-trigger">Scheduled Trigger</h3>
<p>You can also configure a <strong>Scheduled</strong> trigger, which allows you to create a release at a particular time.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723039341566/1a90155e-1087-4e70-b5b4-dadceb7a50bf.png" alt class="image--center mx-auto" /></p>
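<p>In YAML pipelines, the analogous scheduled trigger is declared with a cron expression; a minimal sketch (the schedule shown is an example, and times are in UTC):</p>
<pre><code class="lang-yaml"># Sketch: scheduled trigger in a YAML pipeline.
schedules:
  - cron: '0 3 * * *'          # every day at 03:00 UTC
    displayName: 'Nightly release'
    branches:
      include:
        - main
    always: false              # skip the run if there are no new changes
</code></pre>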
<h1 id="heading-pre-deployment-conditions">Pre-Deployment Conditions</h1>
<p>In Azure DevOps, pre-deployment conditions are used to control when a deployment to a specific environment can proceed in a release pipeline. These conditions help ensure that deployments occur under the right circumstances and meet predefined criteria before moving to the next stage. Here are the main pre-deployment conditions you can configure:</p>
<h3 id="heading-types-of-pre-deployment-conditions">Types of Pre-Deployment Conditions</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723039945224/d1d6236e-21d1-4d72-8ef2-7d7f107764bf.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>Approvals</strong>:</p>
<ul>
<li><p><strong>Manual Approvals</strong>: Require one or more users to manually approve the deployment before it proceeds. This is useful for ensuring that a human reviews the deployment details and signs off before the deployment can continue.</p>
</li>
<li><p><strong>Automated Approvals</strong>: Use automated processes or systems to grant approval based on specific criteria or rules.</p>
</li>
</ul>
</li>
<li><p><strong>Gates</strong>:</p>
<ul>
<li><p><strong>Query-Based Gates</strong>: Evaluate queries against external systems or databases to ensure certain conditions are met (e.g., checking the status of work items or monitoring systems).</p>
</li>
<li><p><strong>Time-Based Gates</strong>: Delay the deployment for a specified period to ensure other processes or dependencies have had time to complete.</p>
</li>
</ul>
</li>
<li><p><strong>Checks</strong>:</p>
<ul>
<li><p><strong>Health Checks</strong>: Integrate with monitoring tools to check the health status of the environment before deployment.</p>
</li>
<li><p><strong>Policy Checks</strong>: Validate compliance with organizational policies or security standards before proceeding.</p>
</li>
</ul>
</li>
<li><p><strong>Artifacts</strong>:</p>
<ul>
<li>Ensure that the required build artifacts are available and meet specific criteria before deployment.</li>
</ul>
</li>
</ol>
<h3 id="heading-configuring-pre-deployment-conditions">Configuring Pre-Deployment Conditions</h3>
<p>To configure pre-deployment conditions in Azure DevOps, follow these steps:</p>
<p>Click on the lightning icon to the left of the stage. Once the preferred trigger is selected, you can configure the conditions based on multiple factors such as pre-deployment approvals, gates, and deployment queue settings.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723040215817/3613014c-2b92-48b9-9a67-1a980420dd37.png" alt class="image--center mx-auto" /></p>
<p>Here we have added only one condition, which says the deployment should be triggered only when the artifact is from the main branch. You can specify more conditions based on your preference.</p>
<h3 id="heading-example-use-cases">Example Use Cases</h3>
<ul>
<li><p><strong>Manual Approval for Production Deployments</strong>: Before deploying to production, require approval from a manager or a senior engineer to ensure all necessary reviews have been completed.</p>
</li>
<li><p><strong>Health Check Gate</strong>: Implement a health check that verifies the staging environment is stable and all services are running correctly before deploying a new release.</p>
</li>
<li><p><strong>Policy Check</strong>: Ensure that the deployment meets specific security policies, such as passing all security scans or ensuring that no critical vulnerabilities are present.</p>
</li>
</ul>
<h3 id="heading-benefits">Benefits</h3>
<ul>
<li><p><strong>Control</strong>: Pre-deployment conditions provide control over when and how deployments proceed, ensuring they meet predefined standards and criteria.</p>
</li>
<li><p><strong>Compliance</strong>: Helps maintain compliance with organizational policies, security standards, and industry regulations.</p>
</li>
<li><p><strong>Risk Mitigation</strong>: Reduces the risk of deployment failures or issues by ensuring that necessary checks and balances are in place before a deployment proceeds.</p>
</li>
</ul>
<p>By leveraging pre-deployment conditions, organizations can enhance their deployment processes, improve reliability, and ensure that releases meet all requirements before reaching production.</p>
<h1 id="heading-deployment-slots">Deployment Slots</h1>
<p>In Azure DevOps, deployment slots are a feature used primarily in conjunction with Azure App Services. Deployment slots allow you to create different environments (e.g., staging, production) for your web apps, API apps, and mobile app backends, facilitating safer and more controlled releases.</p>
<h2 id="heading-creating-a-deployment-slot">Creating a Deployment Slot</h2>
<p>To use deployment slots with Azure DevOps, you typically follow these steps:</p>
<ul>
<li><p>In the <strong>Azure portal</strong>, navigate to your App Service under the "Deployment" section, and select "Deployment slots".</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723042144059/c29dc90c-2c9c-40ee-aab1-29ddbf835c39.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>We have created a slot for staging, and a DNS has been generated where staging deployments can be accessed. As you can see below, 100% of the traffic is currently being sent to <strong>production</strong> instead of <strong>staging</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723042750779/c0ee93d3-ca70-4447-a5df-e5a78f17c048.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
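<p>If you prefer automation over the portal, the slot can also be created from a pipeline step using the Azure CLI; a sketch (the service connection, app, and resource group names below are placeholders):</p>
<pre><code class="lang-yaml"># Sketch: create the "staging" slot via an Azure CLI task in a YAML pipeline.
# The service connection, app, and resource group names are placeholders.
- task: AzureCLI@2
  displayName: 'Create staging deployment slot'
  inputs:
    azureSubscription: 'my-azure-service-connection'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az webapp deployment slot create \
        --name my-app-service \
        --resource-group my-resource-group \
        --slot staging
</code></pre>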
<h1 id="heading-integrate-deployment-slots-with-stages">Integrate Deployment Slots with Stages</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723041144228/db7a3ec4-dad9-489b-9ece-48a750d79078.png" alt class="image--center mx-auto" /></p>
<p>This is similar to the classic build pipeline, where you can add multiple stages and jobs using a template. Currently, we have only one stage, for deploying to the Dev environment. Clicking on the stage takes you to the job configuration page.</p>
<h2 id="heading-agent-configuration-step">Agent Configuration Step</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723041515812/ec17a30a-10e4-4780-ae06-d8d4fb193270.png" alt class="image--center mx-auto" /></p>
<p>The first step here is configuring the agent on which this job will be executed. We have set the agent pool to <strong>Default</strong> because my self-hosted agent has been added to the default pool. You can choose from the multiple options available in the <strong>agent pool</strong>.</p>
<h2 id="heading-deploy-to-app-service">Deploy to App Service</h2>
<p>Once the agent is set, we can add the step for deploying the application to Azure App Service using the template as shown below</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723043756685/c2efb5a5-b981-4493-bbb7-cfa08b265713.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723044051848/a7def6ae-70ef-4daa-a6b9-2f8cc20bf1ec.png" alt class="image--center mx-auto" /></p>
<p>Once you select the checkbox for <strong>"Deploy to Slot or App Service Environment,"</strong> you can choose the Resource Group and the desired Slot from the dropdown list. Since I have already created a slot named <strong>staging</strong>, I selected that one.</p>
<p>Finally, save your release pipeline; the Dev environment is now ready with a stage named <strong>Dev Deployment</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723044510381/95d64c96-2c58-4b72-bb95-293d07e1b050.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-production-stage">Production Stage</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723044826807/c41e9dff-a5f5-4527-a60b-94b0bb696e3a.png" alt class="image--center mx-auto" /></p>
<p>You can click on the highlighted <strong>Add</strong> or <strong>Clone</strong> options to create a separate stage for Production, and then make the necessary changes as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723044965120/194b4630-7918-4f6d-b12a-eba2ccfba9d3.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-configuring-prod-deployment">Configuring Prod Deployment</h1>
<h2 id="heading-pre-deployment-condition-in-prod">Pre-Deployment Condition in Prod</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723045082212/2e5aeaa2-25db-42a0-9cb7-8a06d0015733.png" alt class="image--center mx-auto" /></p>
<p>By default, the conditions are set to <strong>After Stage</strong>, which means the production deployment stage will only be executed after the <strong>Dev Deployment</strong> is completed.</p>
<h2 id="heading-pre-deployment-approval">Pre-Deployment Approval</h2>
<p>Pre-deployment approval in Azure DevOps is a feature that allows you to require one or more people to review and approve a deployment to a specific environment, such as production, before it proceeds. This is especially important for production deployments, as it ensures that all necessary checks are in place to prevent potential issues.</p>
<p><strong>Enable Pre-Deployment Approvals</strong>:</p>
<ul>
<li><p>Toggle the "Pre-deployment approvals" switch to enable it.</p>
</li>
<li><p>Add one or more approvers by typing their names or selecting them from the list. You can specify individual users, groups, or service accounts.</p>
</li>
<li><p>Optionally, configure the approval settings such as approval timeout and comments required.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723045418588/1512b70f-5fd5-4c31-bde9-c37d9acc4c78.png" alt class="image--center mx-auto" /></p>
<p>Approval policies in Azure DevOps are essential for ensuring that deployments to critical environments are thoroughly reviewed and meet organizational standards before proceeding. These policies help maintain control, compliance, and accountability within the deployment process.</p>
<h2 id="heading-configuring-stage">Configuring Stage</h2>
<p>We need to configure the production deployment to target the production slot instead of the staging slot. To achieve that, the Prod Deployment stage configuration will look like the one below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723045919659/c3915e8d-4159-4a29-88a5-fc14442d8eee.png" alt class="image--center mx-auto" /></p>
<p>In the Azure portal, we do not create a specific slot for production; it should be deployed directly to the main slot with the default DNS. Therefore, we simply uncheck the <strong>Deploy to Slot or App Service Environment</strong> option, allowing the deployment to happen directly in production, as highlighted below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723046056508/955803bb-13ba-4088-a7bf-d1a5d21c66ba.png" alt class="image--center mx-auto" /></p>
<p>And now save the pipeline.</p>
<h1 id="heading-reconfigure-build-pipeline-with-the-release-pipeline">Reconfigure Build Pipeline with the Release Pipeline</h1>
<p>Earlier, our Build Pipeline code handled both the build and release processes, as shown below:</p>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">trigger:</span>
 <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
<span class="hljs-attr">pool:</span> 
  <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>

<span class="hljs-attr">stages:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Build</span>
    <span class="hljs-attr">jobs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Build</span>
        <span class="hljs-attr">pool:</span>
         <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>
        <span class="hljs-attr">steps:</span>
           <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
             <span class="hljs-attr">displayName:</span> <span class="hljs-string">NPM</span> <span class="hljs-string">Install</span>
             <span class="hljs-attr">inputs:</span>
              <span class="hljs-attr">command:</span> <span class="hljs-string">'install'</span>
              <span class="hljs-attr">verbose:</span> <span class="hljs-literal">true</span>
           <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
             <span class="hljs-attr">inputs:</span>
               <span class="hljs-attr">command:</span> <span class="hljs-string">'custom'</span>
               <span class="hljs-attr">customCommand:</span> <span class="hljs-string">'run build'</span>
           <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">PublishBuildArtifacts@1</span>
             <span class="hljs-attr">inputs:</span>
              <span class="hljs-attr">PathtoPublish:</span> <span class="hljs-string">build</span>
              <span class="hljs-attr">ArtifactName:</span> <span class="hljs-string">'drop'</span>

  <span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Deploy</span>
    <span class="hljs-attr">jobs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Deploy</span>
        <span class="hljs-attr">pool:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>
        <span class="hljs-attr">steps:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DownloadBuildArtifacts@1</span>
            <span class="hljs-attr">inputs:</span>
              <span class="hljs-attr">buildType:</span> <span class="hljs-string">'current'</span>
              <span class="hljs-attr">downloadType:</span> <span class="hljs-string">'single'</span>
              <span class="hljs-attr">artifactName:</span> <span class="hljs-string">'drop'</span>
              <span class="hljs-attr">downloadPath:</span> <span class="hljs-string">'$(System.ArtifactsDirectory)'</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureRmWebAppDeployment@4</span>
            <span class="hljs-attr">inputs:</span>
              <span class="hljs-attr">ConnectionType:</span> <span class="hljs-string">'AzureRM'</span>
              <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'</span>
              <span class="hljs-attr">appType:</span> <span class="hljs-string">'webAppLinux'</span>
              <span class="hljs-attr">WebAppName:</span> <span class="hljs-string">'youtube-devopswithritesh'</span>
              <span class="hljs-attr">packageForLinux:</span> <span class="hljs-string">'$(System.ArtifactsDirectory)/drop'</span>
              <span class="hljs-attr">RuntimeStack:</span> <span class="hljs-string">'STATICSITE|1.0'</span>
</code></pre>
<p>In the Deploy stage, the build artifact was downloaded and deployed to Azure App Service. This task will now be handled by the Release Pipeline.</p>
<p>Now, that stage can be removed and the pipeline reconfigured as shown below. The build pipeline completes once the artifact is published.</p>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">trigger:</span>
 <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
<span class="hljs-attr">pool:</span> 
  <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>

<span class="hljs-attr">stages:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Build</span>
    <span class="hljs-attr">jobs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Build</span>
        <span class="hljs-attr">pool:</span>
         <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>
        <span class="hljs-attr">steps:</span>
           <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
             <span class="hljs-attr">displayName:</span> <span class="hljs-string">NPM</span> <span class="hljs-string">Install</span>
             <span class="hljs-attr">inputs:</span>
              <span class="hljs-attr">command:</span> <span class="hljs-string">'install'</span>
              <span class="hljs-attr">verbose:</span> <span class="hljs-literal">true</span>
           <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
             <span class="hljs-attr">inputs:</span>
               <span class="hljs-attr">command:</span> <span class="hljs-string">'custom'</span>
               <span class="hljs-attr">customCommand:</span> <span class="hljs-string">'run build'</span>
           <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">PublishBuildArtifacts@1</span>
             <span class="hljs-attr">inputs:</span>
              <span class="hljs-attr">PathtoPublish:</span> <span class="hljs-string">build</span>
              <span class="hljs-attr">ArtifactName:</span> <span class="hljs-string">'drop'</span>
</code></pre>
<p>Once the pipeline code was saved and pushed, the build pipeline was triggered automatically and completed as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052012540/f0f2f8de-8134-4d07-a0f7-918be2659bed.png" alt class="image--center mx-auto" /></p>
<p>You can see that the build completed, producing one artifact.</p>
<h1 id="heading-deployment-via-release-pipeline">Deployment via Release Pipeline</h1>
<p>Right after the build pipeline succeeded, the release pipeline was triggered and fetched the published artifact.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052351914/134801e6-703b-4469-8676-e5ea264c6082.png" alt class="image--center mx-auto" /></p>
<p>As shown above, the deployment to the Dev environment began, and it succeeded as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052587275/6e15898e-86ce-4fcc-b317-c4bd5378868a.png" alt class="image--center mx-auto" /></p>
<p>It was deployed to the Azure App Service <strong><em>Staging</em></strong> <em>deployment slot</em>, as configured earlier.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052696625/b18167bd-1bcf-4bcd-b6b4-efb4043dc477.png" alt class="image--center mx-auto" /></p>
<p>We can now access the application in the Staging slot, as shown and highlighted below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723052894734/36d5b935-cc40-4134-bf0b-2017599d6986.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-deployment-approval-for-production">Deployment Approval for Production</h2>
<p>After the Staging deployment is successful, the deployment to the Production environment enters a pending state, awaiting my approval.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723053052738/d0995620-74e4-4e61-88bb-e663fc1b5f26.png" alt class="image--center mx-auto" /></p>
<p>An email notification was also triggered for the pending approval.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723053314683/c5528814-69a2-4a43-a9a1-d2a37b906ac8.png" alt class="image--center mx-auto" /></p>
<p>Once you click Approve, you will be asked for a comment, and you can then accept or reject the deployment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723053386765/b7c5d708-c968-4953-9494-f1e745bdb749.png" alt class="image--center mx-auto" /></p>
<p>After approval, the deployment started.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723053435706/158ed1ad-9f96-4cb3-8b7e-c64919081ebc.png" alt class="image--center mx-auto" /></p>
<p>The deployment to Production then completed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723053642130/584fc9c3-55b1-42b7-b8fd-15f61ae2054a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723053559934/03b74de0-a136-41be-bbf3-c933f385ccb3.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723053618182/c8c99829-943c-465b-bdf5-0f20f4abb824.png" alt class="image--center mx-auto" /></p>
<p>This time, instead of the Staging slot, the application was deployed to the default production DNS.</p>
<h1 id="heading-blue-green-deployment-using-swap-in-azure-app-service">Blue-Green Deployment Using Swap in Azure App Service</h1>
<p>Blue-green deployment is a technique for minimizing downtime and reducing risk by running two identical production environments, referred to as "Blue" and "Green." Azure App Service makes it easy to implement blue-green deployments using its deployment slots feature.</p>
<p>Since we have already set up two deployment slots named <strong>Staging</strong> and <strong>Production</strong>, we can now swap them as shown below.</p>
<ul>
<li><p>Once you are confident that the new version is ready for production, navigate to the "Deployment slots" section of your Azure App Service.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723054416167/764f7db8-f4cb-4f07-9eed-baf46cea658f.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click on the "Swap" button to start the swap operation between the staging slot and the production slot.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723054480734/d5eafa5c-ff69-4b97-ad0b-3f346c097239.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>The swap process exchanges the content and applicable configuration between the staging and production slots (settings marked as slot-specific stay with their slot), making the new version live without any downtime.</p>
</li>
<li><p>Monitor the production environment closely after the swap to ensure everything is functioning correctly.</p>
</li>
</ul>
</li>
</ul>
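<p>For automation, the same swap can also be performed from the command line with the Azure CLI. A minimal sketch, assuming the resource group is named <code>my-rg</code> (an illustrative name; the app name matches the one used earlier):</p>
<pre><code class="lang-bash"># Swap the staging slot into production without downtime
# (resource group name is an assumption for illustration)
az webapp deployment slot swap \
  --resource-group my-rg \
  --name youtube-devopswithritesh \
  --slot staging \
  --target-slot production
</code></pre>
<p>If the new version misbehaves after the swap, running the same command again swaps the slots back, which is the quick rollback path that makes blue-green deployments attractive.</p>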
]]></content:encoded></item><item><title><![CDATA[Azure DevOps with Project-Build Pipeline]]></title><description><![CDATA[Azure DevOps is a suite of development tools Microsoft provides to support the complete software development lifecycle. It encompasses various tools for planning, developing, delivering, and maintaining software projects, enabling teams to collaborat...]]></description><link>https://www.devopswithritesh.in/azure-devops-with-project-build-pipeline</link><guid isPermaLink="true">https://www.devopswithritesh.in/azure-devops-with-project-build-pipeline</guid><category><![CDATA[Devops]]></category><category><![CDATA[Azure]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Pipeline]]></category><category><![CDATA[azure certified]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Mon, 29 Jul 2024 17:52:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1722275331714/e156166e-7f20-49e9-b148-8487b80286da.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Azure DevOps is a suite of development tools Microsoft provides to support the complete software development lifecycle. It encompasses various tools for planning, developing, delivering, and maintaining software projects, enabling teams to collaborate more effectively and deliver high-quality software faster.</p>
<h1 id="heading-contents">Contents</h1>
<ol>
<li><p>Why Azure DevOps</p>
</li>
<li><p>Services Provided by Azure DevOps</p>
</li>
<li><p>Getting Started with Azure DevOps</p>
</li>
<li><p>Azure Repos</p>
</li>
<li><p>Azure DevOps Build Pipeline</p>
</li>
<li><p>Classic Editor Pipeline</p>
</li>
<li><p>YAML Pipeline</p>
</li>
<li><p>Self-Hosted Agent</p>
</li>
</ol>
<h1 id="heading-why-azure-devops">Why Azure DevOps?</h1>
<ul>
<li><p><strong>All in One:</strong> Azure DevOps provides a complete suite of tools for the entire software development lifecycle, including source control, CI/CD pipelines, project management, testing, and artifact management.</p>
</li>
<li><p><strong>Seamless Integration:</strong> All the tools are seamlessly integrated, reducing the need to switch between different tools and ensuring a smooth workflow.</p>
</li>
<li><p><strong>Third-Party Integration:</strong> With a vast extensions marketplace, Azure DevOps can integrate with many third-party tools and services, enhancing its functionality and adaptability.</p>
</li>
<li><p><strong>All-in-one Place:</strong> A single platform for development, project management, and operations enhances team collaboration and reduces communication gaps.</p>
</li>
<li><p><strong>Multi-Cloud Deployments:</strong> While optimized for Azure, it supports deployments to other cloud providers like AWS and Google Cloud.</p>
</li>
</ul>
<h1 id="heading-services-provided-by-azure-devops">Services Provided by Azure DevOps</h1>
<ul>
<li><p><strong>Azure Repos</strong>: Git repositories for source control and version management.</p>
</li>
<li><p><strong>Azure Pipelines</strong>: CI/CD pipelines for automated build, test, and deployment.</p>
</li>
<li><p><strong>Azure Boards</strong>: Agile tools for planning, tracking, and project management.</p>
</li>
<li><p><strong>Azure Artifacts</strong>: Package management for Maven, npm, NuGet, and more.</p>
</li>
<li><p><strong>Azure Test Plans</strong>: Tools for manual and automated testing.</p>
</li>
</ul>
<h1 id="heading-setting-up-azure-devops">Setting up Azure DevOps</h1>
<p>Before setting up Azure DevOps, you need a Microsoft account; this article assumes you already have one. Head over to <a target="_blank" href="https://dev.azure.com/">Azure DevOps</a> and sign in with your Microsoft account.</p>
<h2 id="heading-creating-an-organization-and-project">Creating an Organization and Project</h2>
<p>After signing in, you need to create an <strong>Organization</strong>. An organization is the top-level container that serves as a namespace for managing and organizing your DevOps resources. It also facilitates collaboration by allowing multiple teams to work on different projects within the same organizational framework.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722181628752/da05bf1e-08be-45db-94bf-12ba64793ad1.png" alt class="image--center mx-auto" /></p>
<p>Once the captcha is completed, you can name your organization and create it. This lands you on the project creation page, where you can name your project and get started:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722182334498/703925a2-0c29-4c10-becf-3c3bc0fd44e1.png" alt class="image--center mx-auto" /></p>
<p>The project has now been created under the organization called Hashnode Demo.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722182438261/b42f9247-d606-4a2e-b83c-4e658b1ba555.png" alt class="image--center mx-auto" /></p>
<p>Within an organization, you can create <strong>multiple projects</strong>. Each project is a container for source code, builds, releases, test plans, and other resources.</p>
<h1 id="heading-azure-repos">Azure Repos</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722188372469/490ed990-531f-4194-9cda-f85e53dc3935.png" alt /></p>
<p>Azure Repos is a set of version control tools provided by Microsoft as part of the Azure DevOps suite. It supports both <strong>Git repositories</strong> and <strong>Team Foundation Version Control (TFVC)</strong> for managing your code base, and it is designed to support teams and enterprise applications of any size. <strong>TFVC</strong> is now effectively obsolete, and <strong>Git-based</strong> repositories are widely used across the industry.</p>
<h3 id="heading-git-repositories">Git Repositories</h3>
<ul>
<li><p>Fully distributed version control system.</p>
</li>
<li><p>Supports branching, merging, pull requests, and more.</p>
</li>
<li><p>Ideal for teams using modern version control workflows.</p>
</li>
</ul>
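<p>As an illustration of that workflow, a typical feature-branch cycle against an Azure Repos remote might look like this (the branch name and commit message are placeholders):</p>
<pre><code class="lang-bash"># Create a feature branch, commit, and push it to Azure Repos
git checkout -b feature/login
git add .
git commit -m "Add login page"
git push -u origin feature/login
# Then open a pull request in Azure Repos to merge into main
</code></pre>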
<h3 id="heading-tfvcteam-foundation-version-control">TFVC(Team Foundation Version Control)</h3>
<ul>
<li><p>Centralized version control system.</p>
</li>
<li><p>Suitable for teams that prefer a more traditional version control approach.</p>
</li>
<li><p>Supports large codebases and binary files.</p>
</li>
</ul>
<h2 id="heading-features-of-azure-repos">Features of Azure Repos</h2>
<ol>
<li><p><strong>Pull Requests</strong>:</p>
<ul>
<li><p>Facilitates code reviews by enabling team members to comment on and review code changes.</p>
</li>
<li><p>Allows for automated builds and tests to be triggered as part of the pull request process.</p>
</li>
</ul>
</li>
<li><p><strong>Branch Policies</strong>:</p>
<ul>
<li><p>Enforce policies on branches to ensure code quality and compliance.</p>
</li>
<li><p>Require pull request reviews, successful builds, and code coverage before merging.</p>
</li>
</ul>
</li>
<li><p><strong>Code Search</strong>:</p>
<ul>
<li><p>Powerful search capabilities to find code across repositories.</p>
</li>
<li><p>Helps in quickly locating definitions, references, and changes.</p>
</li>
</ul>
</li>
<li><p><strong>Web-Based Code Editing</strong>:</p>
<ul>
<li><p>Edit code directly in the browser without needing a local development environment.</p>
</li>
<li><p>Useful for quick changes and fixes.</p>
</li>
</ul>
</li>
<li><p><strong>Integration with CI/CD Pipelines</strong>:</p>
<ul>
<li><p>Seamlessly integrates with Azure Pipelines for continuous integration and continuous deployment.</p>
</li>
<li><p>Automate builds, tests, and deployments based on repository changes.</p>
</li>
</ul>
</li>
<li><p><strong>Support for Git Hooks</strong>:</p>
<ul>
<li><p>Implement custom scripts to run at different points in the Git workflow.</p>
</li>
<li><p>Automate and enforce development workflows and practices.</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-setting-up-azure-repos">Setting up Azure Repos</h2>
<p>Setting up Azure Repos involves creating a new project, initializing a repository, and configuring your development environment to work with it. You can also integrate existing repositories from GitHub or similar platforms. In this article, I will demonstrate two ways:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722189012519/c9cf0a89-3213-4cb5-95d1-14b19faa9c3e.png" alt class="image--center mx-auto" /></p>
<ol>
<li><h3 id="heading-clone-to-your-computer-new-repository">Clone to your Computer( New Repository )</h3>
</li>
</ol>
<p>Cloning a new repository to your computer from Azure Repos is a simple process. It allows you to create an empty repository, which you can start using immediately without the need for <strong>explicit initialization</strong>. This process is similar to cloning a repository from GitHub, and it provides a straightforward way to set up your local development environment.</p>
<p>Simply copy the given URL and run the below command</p>
<p><code>git clone</code><a target="_blank" href="https://Org-Hashnode@dev.azure.com/Org-Hashnode/Hashnode%20Demo/_git/Hashnode%20Demo"><code>https://Org-Hashnode@dev.azure.com/Org-Hashnode/Hashnode%20Demo/_git/Hashnode%20Demo</code></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722189427783/954a1a1b-de12-46d4-80fd-2881ad5dca53.png" alt class="image--center mx-auto" /></p>
<p>Once you run the command, it will prompt you to authorize via your Microsoft account.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722189532271/685f7ac0-3935-4c7f-98e7-d84a2d2d17eb.png" alt /></p>
<ol start="2">
<li><h3 id="heading-import-a-repository-existing-repository"><strong>Import a repository ( Existing Repository )</strong></h3>
</li>
</ol>
<p>Importing your existing repository from a different platform, such as GitHub, to Azure Repos is a seamless process. This is particularly useful when you want to <strong>migrate</strong> a large codebase and continue development within Azure DevOps.</p>
<p>Click on <strong>Import</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722190043950/b91f5e8c-4fca-4cc0-9921-c517266b4fbc.png" alt class="image--center mx-auto" /></p>
<p>This will further prompt you to choose the repository type out of Git or TFVC and paste your existing repository clone URL</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722190199527/481fcbc8-aa5c-4f60-b5e5-c35e5b368f2c.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722190305664/434286e6-754b-4d6d-9579-8e3d6558c4b1.png" alt /></p>
<p>Now you can copy your existing repository HTTPS clone URL and paste it in the Clone URL box:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722190423947/5ccecaab-6f58-4334-8d83-ed57674de6df.png" alt /></p>
<p>For demonstration purposes, we're using a Node.js-based YouTube clone application forked from <a target="_blank" href="https://github.com/adrianhajdin/project_youtube_clone">Adrianhajdin</a>'s repo.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722190882087/b2abadda-ad29-492e-8b98-74dc6c48286f.png" alt /></p>
<p>Now, click on <strong>Import</strong>. This will migrate your repository to Azure Repos, eliminating the dependency on GitHub. By importing your repository, you ensure that all your code, branches, and commit history are now managed within Azure DevOps, providing a seamless and integrated development environment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722191074149/75e42fd1-5ca3-4be0-813b-7dedac83ef2b.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-azure-pipelines">Azure Pipelines</h1>
<p><strong>Azure Pipelines</strong> is a powerful and versatile service provided by Microsoft Azure DevOps. It enables you to build, test, and deploy your code automatically and continuously, ensuring high-quality software delivery. Azure Pipelines supports a wide range of languages, frameworks, and platforms, making it a comprehensive solution for continuous integration (CI) and continuous deployment (CD). Azure Pipelines provides the flexibility, scalability, and reliability needed to support modern DevOps practices.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722191512476/96fd8efb-de50-4ae8-9215-c5e27f37eb3a.png" alt /></p>
<p>Azure Pipelines can be created using different methods, each catering to various preferences and project requirements. There are two primary ways to create pipelines in Azure DevOps:</p>
<ul>
<li><strong>Classic Editor (Graphical Interface)</strong></li>
</ul>
<ul>
<li><strong>Pipeline as Code (YAML Pipelines)</strong></li>
</ul>
<p>In this article, we'll be covering both ways of creating pipelines.</p>
<h2 id="heading-classic-editor-graphical-interface">Classic Editor (Graphical Interface)</h2>
<p>The Classic Editor in Azure DevOps Pipelines provides a graphical user interface (GUI) for creating and managing build and release pipelines. This method is particularly useful for users who prefer a visual approach or are new to pipeline configuration. The Classic Editor offers an intuitive way to define the steps and stages of your CI/CD process without writing YAML code.</p>
<h3 id="heading-key-features-of-the-classic-editor">Key Features of the Classic Editor</h3>
<ul>
<li><p><strong>Pre-Built Tasks</strong>:</p>
<p>  A library of pre-built tasks and templates to quickly configure common build and deployment scenarios.</p>
</li>
<li><p><strong>Pipeline Stages and Jobs</strong>:</p>
<p>  Visual representation of stages and jobs, making it easier to understand and manage complex workflows.</p>
</li>
<li><p><strong>Variable Management</strong>:</p>
<p>  Easy management of pipeline variables and secrets through the GUI.</p>
</li>
<li><p><strong>Trigger Configuration</strong>:</p>
<p>  Simple setup for continuous integration (CI) and continuous delivery (CD) triggers.</p>
</li>
<li><p><strong>Integrations</strong>:</p>
<p>  Seamless integration with various source control systems, build agents, and deployment environments.</p>
</li>
</ul>
<h3 id="heading-create-pipeline-with-classic-editor">Create Pipeline with Classic Editor</h3>
<p>As we have already configured Azure Repos, it's now time to build and deploy the application using the Classic Editor, which will help us understand the workflow better.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722192845189/1d60f2bc-283a-49b8-9e7e-99434f754a56.png" alt class="image--center mx-auto" /></p>
<p><strong>NB:</strong> By default, the creation of classic build and release pipelines is disabled in Azure DevOps. To enable the Classic Editor option at the project level, you need to make changes at the organization level as shown below</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722193625615/31a57b18-2428-4a4b-8055-8b0a1db9cae4.png" alt class="image--center mx-auto" /></p>
<p>Make sure both of these options are toggled off to enable the Classic Editor at the project level. This allows the use of the Classic Editor for both build and release pipelines, ensuring you have access to its graphical interface for creating and managing your pipelines.</p>
<p>You can now see that the <strong>Use the classic editor</strong> option is enabled at the project level below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722193968405/3410e47b-2034-4abf-88fe-89ec78f7a07b.png" alt class="image--center mx-auto" /></p>
<p>Clicking that hyperlink lands you on a page where you choose the source of your code base.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722194302436/9f7c22dc-dfcd-495e-8880-0bbabcc0a522.png" alt class="image--center mx-auto" /></p>
<p>We have chosen <strong>Azure Repos Git</strong>, as we have already imported our code to Azure Repos. You can also choose other options such as GitHub or Bitbucket Cloud, as displayed; the underlying functionality is the same for all of the source options. Click Continue.</p>
<p>Now you can select a template or proceed with an Empty job.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722194382188/7db4fe2a-5adb-4224-93f8-ea696d693320.png" alt class="image--center mx-auto" /></p>
<p>We'll proceed with the <strong>Empty job</strong>, as we are going to configure our own custom steps. Once clicked, you will be able to add <strong>Steps</strong> to your <strong>Job</strong> as shown below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722194960727/bb0675dc-573f-4828-a225-543fb468da9c.png" alt class="image--center mx-auto" /></p>
<p>In this demonstration, the pipeline job is named <strong>"Build and Deploy App"</strong> and encompasses multiple steps. For this basic pipeline example, we handle both the build and release processes within a single pipeline.</p>
<h3 id="heading-configuring-the-agent">Configuring the Agent</h3>
<p>The first step in creating any pipeline is configuring the agent where the pipeline jobs will be executed. An <strong>agent</strong> is essentially a server that performs the tasks required to build the application and generate the necessary artifacts.</p>
<p><strong>Agent Definition</strong>: The agent is a crucial component of the pipeline infrastructure. It executes the tasks defined in the pipeline, such as building the code, running tests, and deploying artifacts.</p>
<p><strong>Agent Pools</strong>: You can use either Microsoft-hosted agents or configure your own self-hosted agents. Microsoft-hosted agents are managed by Azure DevOps and come pre-installed with commonly used tools. Self-hosted agents are machines you set up and configure yourself, offering greater control and customization.</p>
<p><strong>Selecting an Agent</strong>: During pipeline creation, you'll specify which agent or agent pool will be used to run the jobs. This selection determines the environment in which your pipeline tasks will be executed.</p>
<p><strong>NB:</strong> As part of this demonstration, we'll be using self-hosted agents. Creating and configuring self-hosted agents will be demonstrated in later parts of this article.</p>
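<p>For reference, registering a self-hosted Linux agent generally follows this pattern. The agent version, organization URL, and personal access token (PAT) below are placeholders; download the current agent package from your organization's Agent pools page:</p>
<pre><code class="lang-bash"># Extract the downloaded agent package into a working directory
mkdir myagent &amp;&amp; cd myagent
tar zxvf ~/Downloads/vsts-agent-linux-x64-3.x.x.tar.gz

# Register the agent with your organization and place it in the Default pool
./config.sh --url https://dev.azure.com/Org-Hashnode \
            --auth pat --token YOUR_PAT --pool Default

# Start the agent interactively (or install it as a service via ./svc.sh)
./run.sh
</code></pre>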
<h3 id="heading-tasks-of-the-pipelines">Tasks of the Pipelines</h3>
<p>To define tasks in the Classic Editor, you can use existing templates: search for them in the search bar and choose the required template, which will be added to the pipeline on the left, as shown in the following steps.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722195753824/eff61f11-00c0-4c9f-8a3d-2526e23ed576.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>npm install</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722195641196/920bc24e-6215-4b31-8ca7-2a00c7adf59d.png" alt class="image--center mx-auto" /></p>
<p> Once the template is chosen, it is added to the left, and in the right section you can fill in the necessary fields for that task, such as the command, working folder, and variables, as shown above.</p>
</li>
<li><p><strong>npm build</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722196002848/65661cc5-ecfa-49e2-9fcc-3e0d51b224ba.png" alt class="image--center mx-auto" /></p>
<p> The subsequent task is the <strong>build</strong> task, where the application is built by the command specified as <strong>run build</strong>. These are Node.js commands (scripts defined in the project's package.json) and are not specific to Azure DevOps.</p>
</li>
<li><p><strong>Publish Artifacts</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722196142850/fb69ccf6-ddbe-4d99-939d-6b62f2d0b399.png" alt class="image--center mx-auto" /></p>
<p> Once the application code is built, it generates the deployable artifacts that need to be published to the hosting server.</p>
</li>
<li><p><strong>Deploy to App Service</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722196292388/82ae126f-1c98-40bc-b45b-84c2c6327c83.png" alt class="image--center mx-auto" /></p>
<p> As our artifacts are ready, now using the <strong>Azure App Service Deploy</strong> template, we'll be deploying the artifacts/deployable application to Azure App Service.</p>
</li>
</ol>
<h3 id="heading-run-classic-editor-pipeline">Run Classic Editor Pipeline</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722196583168/e0b9ddfe-4bc9-4f3c-84a1-0584cfbb9fac.png" alt class="image--center mx-auto" /></p>
<p>Now click <strong>Queue</strong> or <strong>Save &amp; queue</strong> to trigger the pipeline, which will then ask you for the <strong>Agent pool</strong> and <strong>Branch/tag</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722196815464/f118551d-5722-411f-8240-3e915bf386df.png" alt /></p>
<p>As we are using <strong>self-hosted</strong> agents placed in the default agent pool, <strong>Default</strong> has been chosen.</p>
<p>Once <strong>Run</strong> is clicked, the pipeline execution is queued.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722197009389/cd1834b8-e8e8-4f68-8fc3-804cf7da87c3.png" alt class="image--center mx-auto" /></p>
<p>The execution then begins as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722197292264/46321a9e-ca7b-4b61-b1a3-95282d256396.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-classic-editor-pipeline-completion">Classic Editor Pipeline Completion</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722197429975/90e6e855-c566-45b5-82e6-c518b1b7b269.png" alt class="image--center mx-auto" /></p>
<p>As shown above, once the pipeline execution completed, the application was deployed to <strong>Azure App Service</strong>, and we can access it via the red-bordered URL shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722197624002/de027fcb-6771-4f3b-8b61-5371c34ce8a6.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-pipeline-as-code-yaml-pipelines"><strong>Pipeline as Code (YAML Pipelines)</strong></h2>
<p>A YAML pipeline in Azure DevOps is defined using a YAML (YAML Ain't Markup Language) file. This file contains the pipeline configuration as code, making it easy to version, share, and reuse. YAML pipelines provide a way to automate the build, test, and deployment processes in a structured, text-based format.</p>
<h2 id="heading-why-yaml-over-the-classic-editor">Why YAML Over the Classic Editor?</h2>
<p><strong>Version Control</strong>:</p>
<ul>
<li>YAML pipeline definitions are stored in the same repository as your code. This means the pipeline configuration can be versioned, tracked, and reviewed along with the application code.</li>
</ul>
<p><strong>Flexibility and Customization</strong>:</p>
<ul>
<li>YAML offers greater flexibility for complex workflows, allowing for more granular control over the build and release processes.</li>
</ul>
<p><strong>Infrastructure as Code</strong>:</p>
<ul>
<li>Embraces the "Infrastructure as Code" (IaC) principle, making managing and automating infrastructure alongside application code easier.</li>
</ul>
<p><strong>Collaboration</strong>:</p>
<ul>
<li>Developers can collaborate on pipeline configurations using pull requests and code reviews, improving the quality and maintainability of the CI/CD processes.</li>
</ul>
<h2 id="heading-setting-up-pipeline-as-codeyaml">Setting up Pipeline As Code(YAML)</h2>
<p>Click <strong>Create Pipeline</strong> in the project's Pipelines section, which lands you on the page below, asking you to choose where your pipeline code resides.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722262710661/2a14e18d-4a9b-41b8-a689-fbbc48592fd1.png" alt class="image--center mx-auto" /></p>
<p>As we are using Azure Repos for this demonstration, the same needs to be selected.</p>
<p>Once the source is selected, you will be asked to configure the pipeline using pipeline code. Here we have multiple template options that can assist in creating the pipeline, and you can also import pipeline code if it already exists.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722262846955/942d2d6d-4a21-4ecd-9372-fb158aa75dff.png" alt class="image--center mx-auto" /></p>
<p>In our case, we'll choose the <strong>Starter pipeline</strong>, since we are creating a pipeline from scratch; the starter pipeline provides a basic skeleton to get started with, as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722263089815/ac7237f0-f235-4200-8cfe-fcdc07a66427.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-pipeline-as-code-architecture">Pipeline as Code Architecture</h2>
<p><img src="https://learn.microsoft.com/en-us/azure/devops/pipelines/get-started/media/key-concepts-overview.svg?view=azure-devops" alt="key concepts graphic" /></p>
<p>Its architecture is hierarchical, consisting of triggers, stages, jobs, steps, and tasks, each playing a distinct role in the CI/CD process.</p>
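<p>A minimal skeleton illustrating this hierarchy (the stage, job, and step names are illustrative):</p>
<pre><code class="lang-yaml"># Hierarchy: trigger, then stages containing jobs containing steps
trigger:
  - main

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        pool:
          name: Default
        steps:
          - script: echo "Building the application"
            displayName: Build step
</code></pre>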
<h3 id="heading-key-components-of-a-pipeline">Key Components of a Pipeline</h3>
<ol>
<li><p><strong>Trigger</strong></p>
<p> Triggers define when the pipeline should be run automatically. They can be configured based on various events, such as code commits, pull requests, or on a schedule.</p>
<p> <strong>Types</strong>:</p>
<ul>
<li><p><strong>CI (Continuous Integration) Triggers</strong>: Automatically run the pipeline when code is committed to a branch.</p>
</li>
<li><p><strong>PR (Pull Request) Triggers</strong>: Run the pipeline when a pull request is created or updated.</p>
</li>
<li><p><strong>Scheduled Triggers</strong>: Run the pipeline at specified times.</p>
</li>
<li><p><strong>Resource Triggers</strong>: Run the pipeline based on changes in external resources like container images.</p>
</li>
</ul>
</li>
</ol>
<p>    Example:</p>
<pre><code class="lang-yaml">    <span class="hljs-string">---</span>
    <span class="hljs-attr">trigger:</span>
     <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
    <span class="hljs-attr">pool:</span> 
      <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>
</code></pre>
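<p>The other trigger types follow the same declarative style. A minimal sketch (the branch names and cron expression below are illustrative):</p>
<pre><code class="lang-yaml"># Run for pull requests targeting main
pr:
  - main

# Run every weekday at 06:00 UTC
schedules:
  - cron: "0 6 * * 1-5"
    displayName: Weekday morning build
    branches:
      include:
        - main
</code></pre>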
<ol start="2">
<li><p><strong>Stage</strong></p>
<p> Stages are the major phases of the pipeline, <strong>grouping jobs</strong> that logically belong together. Stages can run sequentially or in parallel. <strong>Features</strong>:</p>
<ul>
<li><p><strong>Isolation</strong>: Each stage can be executed in a different environment.</p>
</li>
<li><p><strong>Dependencies</strong>: Stages can have dependencies, ensuring they run in a specific order.</p>
</li>
</ul>
</li>
</ol>
<pre><code class="lang-yaml">    <span class="hljs-attr">stages:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Build</span>
        <span class="hljs-attr">jobs:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Build</span>
            <span class="hljs-attr">pool:</span>
             <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>
            <span class="hljs-attr">steps:</span>
               <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
                 <span class="hljs-attr">displayName:</span> <span class="hljs-string">NPM</span> <span class="hljs-string">Install</span>
                 <span class="hljs-attr">inputs:</span>
                  <span class="hljs-attr">command:</span> <span class="hljs-string">'install'</span>
                  <span class="hljs-attr">verbose:</span> <span class="hljs-literal">true</span>
               <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
                 <span class="hljs-attr">inputs:</span>
                   <span class="hljs-attr">command:</span> <span class="hljs-string">'custom'</span>
                   <span class="hljs-attr">customCommand:</span> <span class="hljs-string">'run build'</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Publish_Artifact</span>
            <span class="hljs-attr">displayName:</span> <span class="hljs-string">Publish</span> <span class="hljs-string">Artifact</span>
            <span class="hljs-attr">pool:</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>
            <span class="hljs-attr">steps:</span>
              <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">PublishBuildArtifacts@1</span>
                <span class="hljs-attr">inputs:</span>
                  <span class="hljs-attr">PathtoPublish:</span> <span class="hljs-string">'build'</span>
                  <span class="hljs-attr">ArtifactName:</span> <span class="hljs-string">'drop'</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Deploy</span>
        <span class="hljs-attr">jobs:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Deploy</span>
            <span class="hljs-attr">pool:</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>
            <span class="hljs-attr">steps:</span>
              <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureRmWebAppDeployment@4</span>
                <span class="hljs-attr">inputs:</span>
                  <span class="hljs-attr">ConnectionType:</span> <span class="hljs-string">'AzureRM'</span>
                  <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'</span>
                  <span class="hljs-attr">appType:</span> <span class="hljs-string">'webAppLinux'</span>
                  <span class="hljs-attr">WebAppName:</span> <span class="hljs-string">'youtube-devopswithritesh'</span>
                  <span class="hljs-attr">packageForLinux:</span> <span class="hljs-string">$(System.DefaultWorkingDirectory)/build</span>
                  <span class="hljs-attr">RuntimeStack:</span> <span class="hljs-string">'STATICSITE|1.0'</span>
</code></pre>
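<p>Stages run sequentially in the order they are declared; dependencies and conditions can also be made explicit. A minimal sketch (the stage names are illustrative):</p>
<pre><code class="lang-yaml">stages:
  - stage: Build
    jobs: [ ... ]
  - stage: Deploy
    dependsOn: Build          # run only after Build
    condition: succeeded()    # and only if Build succeeded
    jobs: [ ... ]
</code></pre>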
<ol start="3">
<li><p><strong>Job</strong></p>
<p> Jobs are units of work that run on an agent. Each job consists of a series of steps and can run independently or in sequence with other jobs.</p>
<p> <strong>Features</strong>:</p>
<ul>
<li><p><strong>Parallelism</strong>: Multiple jobs can run in parallel.</p>
</li>
<li><p><strong>Agent Specification</strong>: Jobs specify the agent or pool on which they should run.</p>
</li>
</ul>
</li>
</ol>
<p>    Example:</p>
<pre><code class="lang-yaml">    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Build</span>
      <span class="hljs-attr">pool:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">NPM</span> <span class="hljs-string">Install</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">command:</span> <span class="hljs-string">'install'</span>
            <span class="hljs-attr">verbose:</span> <span class="hljs-literal">true</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">command:</span> <span class="hljs-string">'custom'</span>
            <span class="hljs-attr">customCommand:</span> <span class="hljs-string">'run build'</span>
</code></pre>
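<p>Because jobs are independent units, two jobs in the same stage with no dependency between them can run in parallel (given enough agents). A matrix strategy fans one job definition out into several parallel runs; a minimal sketch (the Node versions are illustrative):</p>
<pre><code class="lang-yaml">- job: Test
  strategy:
    matrix:
      node_18:
        nodeVersion: '18.x'
      node_20:
        nodeVersion: '20.x'
  steps:
    - task: NodeTool@0
      inputs:
        versionSpec: $(nodeVersion)
    - script: npm test
</code></pre>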
<ol start="4">
<li><p><strong>Step</strong></p>
<p> Steps are the individual actions performed in a job. Each step represents a single task, such as running a script or executing a command.</p>
<p> <strong>Features</strong>:</p>
<ul>
<li><p><strong>Sequential Execution</strong>: Steps within a job run sequentially.</p>
</li>
<li><p><strong>Conditionals</strong>: Steps can have conditions that determine their execution.</p>
</li>
</ul>
</li>
</ol>
<p>    Example:</p>
<pre><code class="lang-yaml">    <span class="hljs-attr">steps:</span>
               <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
                 <span class="hljs-attr">displayName:</span> <span class="hljs-string">NPM</span> <span class="hljs-string">Install</span>
                 <span class="hljs-attr">inputs:</span>
                  <span class="hljs-attr">command:</span> <span class="hljs-string">'install'</span>
                  <span class="hljs-attr">verbose:</span> <span class="hljs-literal">true</span>
               <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
                 <span class="hljs-attr">inputs:</span>
                   <span class="hljs-attr">command:</span> <span class="hljs-string">'custom'</span>
                   <span class="hljs-attr">customCommand:</span> <span class="hljs-string">'run build'</span>
</code></pre>
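<p>A step-level condition looks like this — here a cleanup step runs even when an earlier step fails (the script contents are illustrative):</p>
<pre><code class="lang-yaml">steps:
  - script: npm test
    displayName: Run tests
  - script: echo "cleaning up workspace"
    displayName: Cleanup
    condition: always()   # runs whether the previous steps succeeded or failed
</code></pre>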
<ol start="5">
<li><p><strong>Task</strong></p>
<p> Tasks are predefined actions or operations that can be executed within a step. Azure DevOps provides a library of built-in tasks, and you can also define custom tasks.</p>
<p> <strong>Types</strong>:</p>
<ul>
<li><p><strong>Built-in Tasks</strong>: Provided by Azure DevOps for common operations like building code, running tests, and deploying applications.</p>
</li>
<li><p><strong>Custom Tasks</strong>: Custom scripts or actions defined by the user.</p>
</li>
</ul>
</li>
</ol>
<p>    Example:</p>
<pre><code class="lang-yaml">    <span class="hljs-attr">steps:</span>
               <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
                 <span class="hljs-attr">displayName:</span> <span class="hljs-string">NPM</span> <span class="hljs-string">Install</span>
                 <span class="hljs-attr">inputs:</span>
                  <span class="hljs-attr">command:</span> <span class="hljs-string">'install'</span>
                  <span class="hljs-attr">verbose:</span> <span class="hljs-literal">true</span>
               <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">Npm@1</span>
                 <span class="hljs-attr">inputs:</span>
                   <span class="hljs-attr">command:</span> <span class="hljs-string">'custom'</span>
                   <span class="hljs-attr">customCommand:</span> <span class="hljs-string">'run build'</span>
</code></pre>
<p>While writing pipeline code, you can also click <strong>Show assistant</strong>, which assists you with pre-built templates for each task.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722266889374/5ccf4c1d-942d-4110-9d89-0e7ebbb21cdc.png" alt class="image--center mx-auto" /></p>
<p>You can also customize the template by providing your own values.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722267023872/811ea72a-eebb-4c17-98d6-f12556c6cf55.png" alt /></p>
<h2 id="heading-pipeline-execution">Pipeline Execution</h2>
<p>Once the pipeline is ready, you can run it by clicking on the <strong>Run</strong> button. If you are running the pipeline for the first time, you might need to authorize the pipeline to access the agent pool, especially if you are using <strong>self-hosted</strong> agents. This authorization ensures that the pipeline has the necessary permissions to utilize the resources provided by the agent pool.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722267849724/fd616dd2-5f7e-4a3a-8ba5-99ce6407615d.png" alt class="image--center mx-auto" /></p>
<p>Now the execution has started, and we have two stages: <strong>Build</strong> and <strong>Deploy</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722272394943/6c02843d-ff1c-44ab-8096-f14b86164b31.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722272622417/d3f85aaf-7278-4c48-bc88-d1f3f43a75fc.png" alt class="image--center mx-auto" /></p>
<p>In the above picture, you can see that the <strong>Build</strong> stage has been completed. However, it is not proceeding to the <strong>Deploy</strong> stage, causing the <strong>Deploy</strong> stage to remain in a pending state indefinitely. This issue occurs because artifacts generated in the <strong>Build</strong> stage are not automatically passed to the <strong>Deploy</strong> stage.</p>
<p>To resolve this, you need to ensure that the artifacts produced in the <strong>Build</strong> stage are available to the <strong>Deploy</strong> stage. This can be achieved by downloading the artifacts from the <strong>Build</strong> stage in the <strong>Deploy</strong> stage.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722273259403/df67ae40-bb55-4cd5-9d10-b64373e98d18.png" alt class="image--center mx-auto" /></p>
<p>Now you can see that another task named <strong>DownloadBuildArtifacts</strong> has been added to the pipeline, which we have implemented with the help of the built-in assistance. The <strong>DownloadBuildArtifacts</strong> task is a template provided by Azure DevOps that simplifies the process of transferring artifacts between stages.</p>
<p>Using the <strong>DownloadBuildArtifacts</strong> task ensures that the artifacts generated in the <strong>Build</strong> stage are available in the <strong>Deploy</strong> stage, facilitating a smooth transition between stages.</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Deploy</span>
    <span class="hljs-attr">jobs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Deploy</span>
        <span class="hljs-attr">pool:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Default</span>
        <span class="hljs-attr">steps:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DownloadBuildArtifacts@1</span>
            <span class="hljs-attr">inputs:</span>
              <span class="hljs-attr">buildType:</span> <span class="hljs-string">'current'</span>
              <span class="hljs-attr">downloadType:</span> <span class="hljs-string">'single'</span>
              <span class="hljs-attr">artifactName:</span> <span class="hljs-string">'drop'</span>
              <span class="hljs-attr">downloadPath:</span> <span class="hljs-string">'$(System.ArtifactsDirectory)'</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureRmWebAppDeployment@4</span>
            <span class="hljs-attr">inputs:</span>
              <span class="hljs-attr">ConnectionType:</span> <span class="hljs-string">'AzureRM'</span>
              <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'Pay-As-You-Go(4accce4f-9342-4b5d-bf6b-5456d8fa879d)'</span>
              <span class="hljs-attr">appType:</span> <span class="hljs-string">'webAppLinux'</span>
              <span class="hljs-attr">WebAppName:</span> <span class="hljs-string">'youtube-devopswithritesh'</span>
              <span class="hljs-attr">packageForLinux:</span> <span class="hljs-string">'$(System.ArtifactsDirectory)/drop'</span>
              <span class="hljs-attr">RuntimeStack:</span> <span class="hljs-string">'STATICSITE|1.0'</span>
</code></pre>
<p>Now, after adding the <strong>DownloadBuildArtifacts</strong> task, the <strong>Deploy</strong> stage can access the artifacts generated in the <strong>Build</strong> stage. As a result, the deployment has been successfully completed.</p>
<p>By using the <strong>DownloadBuildArtifacts</strong> task, we ensured that the artifacts produced in the <strong>Build</strong> stage were properly transferred to the <strong>Deploy</strong> stage, enabling the deployment process to proceed without any interruptions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722273786669/4f46bb3f-9403-4c29-9ea4-84368d7322ad.png" alt class="image--center mx-auto" /></p>
<p>Finally, the application has been deployed and we can access it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722273914708/d046f79c-4562-475c-832d-8d800e3170bf.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-self-hosted-agent">Self-Hosted Agent</h1>
<p>A self-hosted agent is a machine that you manage to run your Azure DevOps pipeline jobs. Unlike Microsoft-hosted agents, which are managed by Azure DevOps, self-hosted agents provide greater control over the environment, tools, and resources used during the build and deployment process.</p>
<h2 id="heading-configuring-self-hosted-agent">Configuring Self-Hosted Agent</h2>
<p><strong>Prepare Your Machine</strong>:</p>
<ul>
<li><p>Ensure your machine meets the prerequisites. It should have an operating system supported by Azure DevOps (Windows, Linux, macOS).</p>
</li>
<li><p>Install the dependencies (e.g., .NET Core, Node.js, Docker) that your project needs. As our application is based on Node.js, all the Node dependencies have been installed on my machine.</p>
</li>
<li><p>Here I am using an EC2 instance hosted in the AWS cloud.</p>
</li>
</ul>
<p><strong>Create an Agent Pool</strong>:</p>
<ul>
<li><p>Navigate to your Azure DevOps organization.</p>
</li>
<li><p>Click on "Organization settings" (gear icon) in the lower-left corner.</p>
</li>
<li><p>Select "Agent pools" under "Pipelines".</p>
</li>
<li><p>Click "Add pool" to create a new agent pool and give it a name.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722274778198/d2034807-6beb-41d4-bc76-13c3371984ee.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Here I have added my machine to the <strong>Default</strong> pool</p>
<p>  <strong>Download and Configure the Agent</strong>:</p>
<ul>
<li><p>Within the agent pool, click on the "New agent" button.</p>
</li>
<li><p>Choose the operating system of your machine and download the agent package.</p>
</li>
<li><p>Extract the downloaded package to a directory on your machine.</p>
</li>
<li><p>Open a command prompt or terminal and navigate to the directory where you extracted the agent.</p>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<ul>
<li><p><strong>Run the Configuration Script</strong>:</p>
<ul>
<li><p>Execute the <code>config.sh</code> script (for Linux/macOS) or <code>config.cmd</code> script (for Windows) and follow the prompts:</p>
<pre><code class="lang-bash">  ./config.sh
</code></pre>
</li>
<li><p>Provide the server URL (Azure DevOps organization URL).</p>
</li>
<li><p>Enter a Personal Access Token (PAT) with sufficient permissions to register the agent.</p>
</li>
<li><p>Follow the prompts to configure the agent, including setting the agent pool name and agent name.</p>
</li>
</ul>
</li>
</ul>
<p>    <strong>Run the Agent</strong>:</p>
<ul>
<li><p>After configuration, start the agent using the provided command:</p>
<pre><code class="lang-bash">  ./run.sh
</code></pre>
</li>
<li><p>For Windows, use <code>config.cmd</code> to configure and <code>run.cmd</code> to start the agent.</p>
</li>
</ul>
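<p>Put together, the download-and-configure steps above look roughly like this on a Linux machine. This is a sketch: the agent version and download URL are placeholders — copy the exact link shown in the <strong>New agent</strong> dialog for your organization.</p>
<pre><code class="lang-bash">mkdir myagent
cd myagent
# URL and version are illustrative - use the link from the "New agent" dialog
curl -O https://vstsagentpackage.azureedge.net/agent/3.232.0/vsts-agent-linux-x64-3.232.0.tar.gz
tar zxvf vsts-agent-linux-x64-3.232.0.tar.gz
./config.sh   # prompts for the organization URL, PAT, pool name, and agent name
./run.sh      # starts the agent interactively
</code></pre>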
<p><strong>Verify the Agent</strong>:</p>
<ul>
<li><p>Go back to Azure DevOps and verify that the new agent appears in the agent pool and is online.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722275160757/2019e4fb-a0e6-4773-af06-fc6ceb08dfed.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Cloud State Management by Terraform on AWS]]></title><description><![CDATA[Cloud state management with Terraform refers to the practice of using Terraform, an open-source infrastructure as code (IaC) tool, to create, update, and maintain the infrastructure resources and configurations in a cloud environment. Terraform enabl...]]></description><link>https://www.devopswithritesh.in/cloud-state-management-by-terraform-on-aws</link><guid isPermaLink="true">https://www.devopswithritesh.in/cloud-state-management-by-terraform-on-aws</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[#Terraform #AWS #InfrastructureAsCode #Provisioning #Automation #CloudComputing]]></category><category><![CDATA[terraform-state]]></category><category><![CDATA[cloudautomation]]></category><dc:creator><![CDATA[Ritesh Kumar Nayak]]></dc:creator><pubDate>Sat, 14 Oct 2023 10:44:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1697280112224/b50506cc-2654-4183-b20d-e855ca4256b1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Cloud state management with Terraform refers to the practice of using Terraform, an open-source infrastructure as code (IaC) tool, to create, update, and maintain the infrastructure resources and configurations in a cloud environment. Terraform enables you to define your cloud infrastructure as code in a declarative manner, making it easier to manage and version control your cloud resources.</p>
<h1 id="heading-overview">Overview</h1>
<ul>
<li><p>Terraform setup with backend</p>
</li>
<li><p>Setting up <strong>secure and highly available</strong> VPC.</p>
</li>
<li><p>Provision Beanstalk environment</p>
</li>
<li><p><strong>Provision backend services such as:</strong></p>
<p>  <em>-&gt; RDS</em></p>
<p>  -&gt; <em>Elasticache</em></p>
<p>  -&gt; <em>ActiveMQ</em></p>
</li>
<li><p>Security Group</p>
</li>
<li><p>Keypairs</p>
</li>
<li><p>Bastion Host</p>
</li>
</ul>
<p>So, it's not just about the cloud or infrastructure automation, it's about maintaining the state of the infrastructure in a file.</p>
<h1 id="heading-vpc-architecture">VPC Architecture</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695793545284/30e735a3-051a-4eca-84a5-1af76966464f.png" alt class="image--center mx-auto" /></p>
<p>Here we'll configure Terraform and set up the complete infrastructure stack.</p>
<p>First, we'll create a backend with an S3 bucket to store the state files. State files are not git-ops friendly hence to make them centrally available and accessible across the team, we'll put all the state files in the S3 bucket.</p>
<p>We'll also create a VPC in which the public and private subnets will be distributed across multiple availability zones.</p>
<p>An Internet Gateway will be created, with route tables for the public subnets so that they are publicly accessible from the internet.</p>
<p>We'll also keep our backend services in <strong>private subnets</strong> and create a <strong>NAT Gateway</strong>, referenced from the private subnets' <strong>route table</strong>, so that they can make outbound connections to the internet without being directly exposed.</p>
<p>A <strong>Bastion Host</strong> will be created in one of the public subnets that will help us access the private systems present in the private subnets.</p>
<p>Now with all these, we'll achieve a secure and highly available VPC which will be a baseline for our application stack below.</p>
<h1 id="heading-application-stack">Application Stack</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695795972267/e650e39b-986b-4042-99d4-e275931f5f5d.png" alt class="image--center mx-auto" /></p>
<p>Now we'll create the stack over the VPC. Here, we are going to create another S3 bucket for terraform state files.</p>
<p>Terraform is going to set up RDS, Elasticache, and Amazon MQ in the private subnets.</p>
<p>Beanstalk load balancers will be placed in <strong>the public subnets</strong>, and the application instances will be placed in private subnets.</p>
<p>To access the infrastructure we'll also take care of the <strong>Security Groups, rules and login keys.</strong></p>
<h1 id="heading-key-pair-for-aws">Key-pair for AWS</h1>
<p>In Terraform, the <code>aws_key_pair</code> resource is used to manage key pairs in AWS. Key pairs are used for securely logging into Amazon EC2 instances. When you launch an EC2 instance, you can specify the name of the key pair to use for SSH access (Linux instances) or RDP access (Windows instances). Key pairs consist of a public key that AWS stores, and a private key file that you store.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"aws_key_pair"</span> <span class="hljs-string">"Vprofile-key"</span> {

  <span class="hljs-string">key_name</span>   <span class="hljs-string">=</span> <span class="hljs-string">"VproProfile-Key-Terra"</span>   <span class="hljs-comment"># this will be the name of the key </span>
  <span class="hljs-string">public_key</span> <span class="hljs-string">=</span> <span class="hljs-string">file(var.PUBLIC_KEY_PATH)</span> <span class="hljs-comment">#key has been created using ssh-keygen and path has been stored in variable file</span>

}
</code></pre>
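<p>Once declared, the key pair can be referenced by name from other resources — for example, when launching the bastion host. A minimal sketch reusing variables from <code>vars.tf</code> (the instance arguments are illustrative):</p>
<pre><code class="lang-yaml">resource "aws_instance" "bastion" {
  ami           = var.AMIs[var.AWS_REGION]          # pick the AMI for the active region
  instance_type = "t2.micro"
  key_name      = aws_key_pair.Vprofile-key.key_name # the key pair defined above
}
</code></pre>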
<h1 id="heading-variable-for-terraform-varstf">Variable for Terraform( <code>vars.tf</code> )</h1>
<p>In Terraform, variable files serve the purpose of organizing and managing the input values used in your Terraform configurations. These input values can vary from environment to environment or from deployment to deployment. Variable files provide a way to keep your main Terraform configuration clean and reusable by separating the input values into separate files.</p>
<pre><code class="lang-yaml"><span class="hljs-string">variable</span> <span class="hljs-string">"AWS_REGION"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"us-east-1"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"AMIs"</span> {
  <span class="hljs-string">type</span> <span class="hljs-string">=</span> <span class="hljs-string">map(any)</span>
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> {
    <span class="hljs-string">us-east-1</span> <span class="hljs-string">=</span> <span class="hljs-string">"ami-053b0d53c279acc90"</span>
    <span class="hljs-string">us-east-2</span> <span class="hljs-string">=</span> <span class="hljs-string">"ami-024e6efaf93d85776"</span>
  }

}

<span class="hljs-string">variable</span> <span class="hljs-string">"PRIVATE_KEY_PATH"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"vprofile-key"</span>

}
<span class="hljs-string">variable</span> <span class="hljs-string">"PUBLIC_KEY_PATH"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"vprofile-key.pub"</span>
}
<span class="hljs-string">variable</span> <span class="hljs-string">"USERNAME"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"ubuntu"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"MY_IP"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"106.221.149.15/32"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"RMQ_USER"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"rabbit"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"RMQ_PASSWORD"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"Pass@780956283"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"DB_USER"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"admin"</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"DB_PASSWORD"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"admin123"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"DB_NAME"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"accounts"</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"INSTANCE_COUNT"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"1"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"VPC_NAME"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"Vprofile-VPC"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"ZONE-1"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"us-east-1a"</span>
}
<span class="hljs-string">variable</span> <span class="hljs-string">"ZONE-2"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"us-east-1b"</span>
}
<span class="hljs-string">variable</span> <span class="hljs-string">"ZONE-3"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"us-east-1c"</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"VPC_CIDER"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"172.21.0.0/16"</span>


}

<span class="hljs-string">variable</span> <span class="hljs-string">"PubSub1_CIDER"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"172.21.1.0/24"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"PubSub2_CIDER"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"172.21.7.0/24"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"PubSub3_CIDER"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"172.21.3.0/24"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"PrivSub1_CIDER"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"172.21.4.0/24"</span>

}

<span class="hljs-string">variable</span> <span class="hljs-string">"PrivSub2_CIDER"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"172.21.5.0/24"</span>
}

<span class="hljs-string">variable</span> <span class="hljs-string">"PrivSub3_CIDER"</span> {
  <span class="hljs-string">default</span> <span class="hljs-string">=</span> <span class="hljs-string">"172.21.6.0/24"</span>

}
</code></pre>
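<p>These defaults can be overridden without editing <code>vars.tf</code> — either via a <code>terraform.tfvars</code> file (loaded automatically) or on the command line. A sketch with illustrative values:</p>
<pre><code class="lang-yaml"># terraform.tfvars
AWS_REGION  = "us-east-2"
DB_PASSWORD = "a-stronger-secret"
</code></pre>
<p>or equivalently <code>terraform apply -var="AWS_REGION=us-east-2"</code>. Keeping secrets such as <code>DB_PASSWORD</code> out of version-controlled defaults is generally preferable.</p>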
<h1 id="heading-backend-backendtf">Backend( <code>backend.tf</code> )</h1>
<p>In Terraform, the <code>backend.tf</code> file is used to configure the backend where Terraform stores its state files. The state file is a crucial component in Terraform, as it keeps track of the real-world resources Terraform manages. By default, Terraform stores the state file locally in the same directory as your configuration files (<code>terraform.tfstate</code>). However, in a production environment or when collaborating with a team, using a remote backend is recommended. This is where <code>backend.tf</code> comes into play.</p>
<p><code>backend.tf</code> allows you to specify where the Terraform state file should be stored. Common backends include Amazon S3, Azure Blob Storage, Google Cloud Storage, and HashiCorp Consul. Here we are using <strong>S3</strong> as our backend.</p>
<pre><code class="lang-yaml"><span class="hljs-string">terraform</span> {
  <span class="hljs-string">backend</span> <span class="hljs-string">"s3"</span> {
    <span class="hljs-string">bucket</span> <span class="hljs-string">=</span> <span class="hljs-string">"terraform--vpro-27sept"</span>
    <span class="hljs-string">key</span>    <span class="hljs-string">=</span> <span class="hljs-string">"terraform/backend"</span>
    <span class="hljs-string">region</span> <span class="hljs-string">=</span> <span class="hljs-string">"us-east-1"</span>
  }
}
</code></pre>
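<p>Note that a backend change only takes effect after re-initializing the working directory:</p>
<pre><code class="lang-bash">terraform init                 # configures the S3 backend
terraform init -migrate-state  # if moving existing local state into S3
</code></pre>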
<h2 id="heading-terraform-module-vs-terraform-resource">Terraform Module vs Terraform Resource</h2>
<p>In Terraform, both <strong>resources</strong> and <strong>modules</strong> are essential constructs, but they serve different purposes in the infrastructure-as-code paradigm. Here are the key differences between resources and modules in Terraform:</p>
<h3 id="heading-resources"><strong>Resources:</strong></h3>
<ol>
<li><p><strong>Definition:</strong></p>
<ul>
<li><strong>Resource:</strong> A resource in Terraform represents a real-world infrastructure object, such as an AWS EC2 instance, a VPC, or a DNS record. Resources are the fundamental building blocks of Terraform configurations. Each resource block describes one or more infrastructure objects.</li>
</ul>
</li>
<li><p><strong>Use Case:</strong></p>
<ul>
<li><strong>Resource:</strong> Resources are used to create, update, and delete infrastructure components. When you define a resource, Terraform manages the lifecycle of that resource, ensuring it exists, and is configured correctly.</li>
</ul>
</li>
<li><p><strong>Example:</strong></p>
<pre><code class="lang-yaml"> <span class="hljs-string">resource</span> <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"example"</span> {
   <span class="hljs-string">ami</span>           <span class="hljs-string">=</span> <span class="hljs-string">"ami-0c55b159cbfafe1f0"</span>
   <span class="hljs-string">instance_type</span> <span class="hljs-string">=</span> <span class="hljs-string">"t2.micro"</span>
 }
</code></pre>
</li>
<li><p><strong>Reusability:</strong></p>
<ul>
<li>Resources are not inherently reusable on their own. However, you can create modules to encapsulate and reuse resource configurations.</li>
</ul>
</li>
</ol>
<h3 id="heading-modules"><strong>Modules:</strong></h3>
<ol>
<li><p><strong>Definition:</strong></p>
<ul>
<li><strong>Module:</strong> A module in Terraform is a self-contained collection of resources and other configurations. Modules allow you to group resources together, encapsulate logic, and promote reusability. Modules can be reused across different Terraform configurations.</li>
</ul>
</li>
<li><p><strong>Use Case:</strong></p>
<ul>
<li><strong>Module:</strong> Modules are used to organize and abstract Terraform configurations. You can create modules for specific tasks, such as creating a VPC with related resources, and reuse these modules across different projects or environments.</li>
</ul>
</li>
<li><p><strong>Example:</strong></p>
<pre><code class="lang-yaml"> <span class="hljs-string">module</span> <span class="hljs-string">"vpc"</span> {
   <span class="hljs-string">source</span> <span class="hljs-string">=</span> <span class="hljs-string">"./modules/vpc"</span>

   <span class="hljs-string">vpc_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"my-vpc"</span>
   <span class="hljs-string">subnet_cidr_blocks</span> <span class="hljs-string">=</span> [<span class="hljs-string">"10.0.1.0/24"</span>, <span class="hljs-string">"10.0.2.0/24"</span>]
 }
</code></pre>
</li>
<li><p><strong>Reusability:</strong></p>
<ul>
<li>Modules promote reusability by allowing you to package and share configurations. You can use modules to create reusable, shareable components that can be employed across various projects.</li>
</ul>
</li>
</ol>
<p>In practice, you often use modules to structure your Terraform code, making it more manageable and facilitating collaboration. Modules help in organizing your resources and configurations in a structured and reusable manner, leading to more maintainable and scalable infrastructure-as-code projects.</p>
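<p>As a minimal sketch of how the two constructs fit together (the file layout and names here are illustrative, not from this project), a local module is just a directory of <code>.tf</code> files that declares variables and outputs, and the caller wires values in and reads values out:</p>
<pre><code class="lang-yaml"># modules/vpc/main.tf -- the module encapsulates resources
variable "vpc_name" { type = string }

resource "aws_vpc" "this" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = var.vpc_name }
}

output "vpc_id" {
  value = aws_vpc.this.id # exposed to callers as module.&lt;name&gt;.vpc_id
}

# root main.tf -- the caller reuses the module
module "vpc" {
  source   = "./modules/vpc"
  vpc_name = "my-vpc"
}
</code></pre>
<p>This is the same pattern we rely on next: the registry VPC module exposes outputs such as <code>module.vpc.vpc_id</code> and <code>module.vpc.private_subnets</code>, which our own resources then reference.</p>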
<h1 id="heading-creating-aws-vpc-vpctf">Creating AWS VPC ( <code>vpc.tf</code> )</h1>
<p>Now that we have created vars.tf and backend.tf, it's time to create our resources using those variables. The first piece of infrastructure we'll create is the VPC, along with the subnets inside it that provide network isolation for our resources.</p>
<p>Here we have implemented the <strong>vpc module</strong> of Terraform from the <a target="_blank" href="https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest/examples/complete#module_vpc">Terraform registry</a>.</p>
<pre><code class="lang-yaml"><span class="hljs-string">module</span> <span class="hljs-string">"vpc"</span> {
  <span class="hljs-string">source</span> <span class="hljs-string">=</span> <span class="hljs-string">"terraform-aws-modules/vpc/aws"</span>

  <span class="hljs-string">name</span>                 <span class="hljs-string">=</span> <span class="hljs-string">var.VPC_NAME</span> <span class="hljs-comment"># please refer to the Terraform Registry example at https://github.com/terraform-aws-modules/terraform-aws-vpc/blob/v5.1.2/examples/complete/main.tf</span>
  <span class="hljs-string">cidr</span>                 <span class="hljs-string">=</span> <span class="hljs-string">var.VPC_CIDER</span>
  <span class="hljs-string">azs</span>                  <span class="hljs-string">=</span> [<span class="hljs-string">var.ZONE-1</span>, <span class="hljs-string">var.ZONE-2</span>, <span class="hljs-string">var.ZONE-3</span>]
  <span class="hljs-string">public_subnets</span>       <span class="hljs-string">=</span> [<span class="hljs-string">var.PubSub1_CIDER</span>, <span class="hljs-string">var.PubSub2_CIDER</span>, <span class="hljs-string">var.PubSub3_CIDER</span>]
  <span class="hljs-string">private_subnets</span>      <span class="hljs-string">=</span> [<span class="hljs-string">var.PrivSub1_CIDER</span>, <span class="hljs-string">var.PrivSub2_CIDER</span>, <span class="hljs-string">var.PrivSub3_CIDER</span>]
  <span class="hljs-string">enable_nat_gateway</span>   <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
  <span class="hljs-string">single_nat_gateway</span>   <span class="hljs-string">=</span> <span class="hljs-literal">true</span> <span class="hljs-comment"># with multiple private subnets Terraform would otherwise create one NAT gateway per AZ, which is expensive, so we share a single one.</span>
  <span class="hljs-string">enable_dns_support</span>   <span class="hljs-string">=</span> <span class="hljs-literal">true</span>
  <span class="hljs-string">enable_dns_hostnames</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>

  <span class="hljs-string">tags</span> <span class="hljs-string">=</span> {
    <span class="hljs-string">terraform</span>   <span class="hljs-string">=</span> <span class="hljs-string">"True"</span>
    <span class="hljs-string">Environment</span> <span class="hljs-string">=</span> <span class="hljs-string">"Prod"</span>
  }

  <span class="hljs-string">vpc_tags</span> <span class="hljs-string">=</span> {
    <span class="hljs-string">Name</span> <span class="hljs-string">=</span> <span class="hljs-string">var.VPC_NAME</span>
  }


}
</code></pre>
<h1 id="heading-creating-security-groups">Creating Security Groups</h1>
<p>Here we are creating the security groups for everything we are going to provision: the Beanstalk load balancer, the bastion host, the instances created by Beanstalk, and the backend services such as RDS, ActiveMQ, and ElastiCache.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"vpro-beanstalk-elb-sg"</span> {
  <span class="hljs-string">name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"vpro-beanstalk-elb-sg"</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Security group for Beanstalk Elastic Load Balancer"</span>
  <span class="hljs-comment">#security group has to be a part of VPC so VPC_ID is mandatory</span>
  <span class="hljs-string">vpc_id</span> <span class="hljs-string">=</span> <span class="hljs-string">module.vpc.vpc_id</span>

  <span class="hljs-string">egress</span> { <span class="hljs-comment"># Outbound Rule</span>
    <span class="hljs-string">from_port</span>   <span class="hljs-string">=</span> <span class="hljs-number">0</span>
    <span class="hljs-string">protocol</span>    <span class="hljs-string">=</span> <span class="hljs-string">"-1"</span> <span class="hljs-comment"># -1 means all the protocol</span>
    <span class="hljs-string">to_port</span>     <span class="hljs-string">=</span> <span class="hljs-number">0</span>
    <span class="hljs-string">cidr_blocks</span> <span class="hljs-string">=</span> [<span class="hljs-string">"0.0.0.0/0"</span>] <span class="hljs-comment"># allow outbound to go anywhere</span>
  }

  <span class="hljs-string">ingress</span> {
    <span class="hljs-string">from_port</span>   <span class="hljs-string">=</span> <span class="hljs-number">80</span>    <span class="hljs-comment"># Allowing access from port 80</span>
    <span class="hljs-string">protocol</span>    <span class="hljs-string">=</span> <span class="hljs-string">"tcp"</span> <span class="hljs-comment"># here the protocol is only tcp</span>
    <span class="hljs-string">to_port</span>     <span class="hljs-string">=</span> <span class="hljs-number">80</span>
    <span class="hljs-string">cidr_blocks</span> <span class="hljs-string">=</span> [<span class="hljs-string">"0.0.0.0/0"</span>]
  }


}

<span class="hljs-string">resource</span> <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"vpro-bastionHost-sg"</span> {
  <span class="hljs-string">name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"vpro-bastionHost-sg"</span>
  <span class="hljs-string">vpc_id</span>      <span class="hljs-string">=</span> <span class="hljs-string">module.vpc.vpc_id</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Security group for Bastion host"</span>

  <span class="hljs-string">egress</span> {
    <span class="hljs-string">from_port</span>   <span class="hljs-string">=</span> <span class="hljs-number">0</span>
    <span class="hljs-string">protocol</span>    <span class="hljs-string">=</span> <span class="hljs-string">"-1"</span>
    <span class="hljs-string">to_port</span>     <span class="hljs-string">=</span> <span class="hljs-number">0</span>
    <span class="hljs-string">cidr_blocks</span> <span class="hljs-string">=</span> [<span class="hljs-string">"0.0.0.0/0"</span>]
  }

  <span class="hljs-string">ingress</span> {
    <span class="hljs-string">from_port</span>   <span class="hljs-string">=</span> <span class="hljs-number">22</span>
    <span class="hljs-string">protocol</span>    <span class="hljs-string">=</span> <span class="hljs-string">"tcp"</span>
    <span class="hljs-string">to_port</span>     <span class="hljs-string">=</span> <span class="hljs-number">22</span>
    <span class="hljs-string">cidr_blocks</span> <span class="hljs-string">=</span> [<span class="hljs-string">var.MY_IP</span>]
  }


}

<span class="hljs-comment"># Now we'll create the Security Group for EC2 instance in our Beanstalk Environment</span>
<span class="hljs-comment"># This security group will be attached to the EC2 instances created by Beanstalk</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"vpro-prod-sg"</span> {
  <span class="hljs-string">name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"vpro-prod-sg"</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Security group for Beanstalk instances"</span>
  <span class="hljs-string">vpc_id</span>      <span class="hljs-string">=</span> <span class="hljs-string">module.vpc.vpc_id</span>

  <span class="hljs-string">egress</span> { <span class="hljs-comment"># Outbound Rule</span>
    <span class="hljs-string">from_port</span>   <span class="hljs-string">=</span> <span class="hljs-number">0</span>
    <span class="hljs-string">to_port</span>     <span class="hljs-string">=</span> <span class="hljs-number">0</span>
    <span class="hljs-string">protocol</span>    <span class="hljs-string">=</span> <span class="hljs-string">"-1"</span>
    <span class="hljs-string">cidr_blocks</span> <span class="hljs-string">=</span> [<span class="hljs-string">"0.0.0.0/0"</span>]
  }
  <span class="hljs-comment"># here we are allowing only the bastion Host to access the instances using port 22(SSH) so, we are allowing the traffic from bastionHost security group here</span>

  <span class="hljs-string">ingress</span> {
    <span class="hljs-string">from_port</span>       <span class="hljs-string">=</span> <span class="hljs-number">22</span>
    <span class="hljs-string">to_port</span>         <span class="hljs-string">=</span> <span class="hljs-number">22</span>
    <span class="hljs-string">protocol</span>        <span class="hljs-string">=</span> <span class="hljs-string">"tcp"</span>
    <span class="hljs-string">security_groups</span> <span class="hljs-string">=</span> [<span class="hljs-string">aws_security_group.vpro-bastionHost-sg.id</span>] <span class="hljs-comment"># Only the bastion host can reach the Beanstalk EC2 instances on port 22, and since the bastion host itself is only reachable from MY_IP, SSH access stays tightly controlled</span>
  }
}

<span class="hljs-comment"># Now we'll create the security group for our backend services such as RDS, Elasticache, ActiveMQ </span>

<span class="hljs-string">resource</span> <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"vpro-backend-sg"</span> {

  <span class="hljs-string">name</span>        <span class="hljs-string">=</span> <span class="hljs-string">"vpro-backend-sg"</span>
  <span class="hljs-string">description</span> <span class="hljs-string">=</span> <span class="hljs-string">"Security group for backend services such as RDS, ActiveMQ and ElastiCache"</span>
  <span class="hljs-string">vpc_id</span>      <span class="hljs-string">=</span> <span class="hljs-string">module.vpc.vpc_id</span>

  <span class="hljs-string">egress</span> {
    <span class="hljs-string">from_port</span>   <span class="hljs-string">=</span> <span class="hljs-number">0</span>
    <span class="hljs-string">to_port</span>     <span class="hljs-string">=</span> <span class="hljs-number">0</span>
    <span class="hljs-string">protocol</span>    <span class="hljs-string">=</span> <span class="hljs-string">"-1"</span>
    <span class="hljs-string">cidr_blocks</span> <span class="hljs-string">=</span> [<span class="hljs-string">"0.0.0.0/0"</span>]
  }

  <span class="hljs-comment"># Here we are allowing access on all protocols and ports, but only from the Beanstalk instances' security group.</span>
  <span class="hljs-string">ingress</span> {
    <span class="hljs-string">from_port</span>       <span class="hljs-string">=</span> <span class="hljs-number">0</span>
    <span class="hljs-string">to_port</span>         <span class="hljs-string">=</span> <span class="hljs-number">0</span>
    <span class="hljs-string">protocol</span>        <span class="hljs-string">=</span> <span class="hljs-string">"-1"</span>
    <span class="hljs-string">security_groups</span> <span class="hljs-string">=</span> [<span class="hljs-string">aws_security_group.vpro-prod-sg.id</span>] <span class="hljs-comment"># Beanstalk instances where our application will run can access the backends</span>
  }

  <span class="hljs-string">ingress</span> {
    <span class="hljs-string">from_port</span>       <span class="hljs-string">=</span> <span class="hljs-number">3306</span>
    <span class="hljs-string">to_port</span>         <span class="hljs-string">=</span> <span class="hljs-number">3306</span>
    <span class="hljs-string">protocol</span>        <span class="hljs-string">=</span> <span class="hljs-string">"tcp"</span>
    <span class="hljs-string">security_groups</span> <span class="hljs-string">=</span> [<span class="hljs-string">aws_security_group.vpro-bastionHost-sg.id</span>] <span class="hljs-comment"># allow the bastion host to reach MySQL on port 3306</span>
  }

}

<span class="hljs-comment"># Now that the backend security group has been created, the backend services should be able to interact with each other.</span>
<span class="hljs-comment"># To make them interact with each other, we have to allow the "vpro-backend-sg" to access itself (vpro-backend-sg)</span>
<span class="hljs-comment"># To make it happen we'll use "aws_security_group_rule" resource</span>

<span class="hljs-string">resource</span> <span class="hljs-string">"aws_security_group_rule"</span> <span class="hljs-string">"security-group-allow-itself"</span> {

  <span class="hljs-string">type</span>                     <span class="hljs-string">=</span> <span class="hljs-string">"ingress"</span> <span class="hljs-comment"># Updating the inbound rule so type is ingress</span>
  <span class="hljs-string">from_port</span>                <span class="hljs-string">=</span> <span class="hljs-number">0</span>
  <span class="hljs-string">to_port</span>                  <span class="hljs-string">=</span> <span class="hljs-number">65535</span> <span class="hljs-comment"># To all the ports</span>
  <span class="hljs-string">protocol</span>                 <span class="hljs-string">=</span> <span class="hljs-string">"tcp"</span>
  <span class="hljs-string">security_group_id</span>        <span class="hljs-string">=</span> <span class="hljs-string">aws_security_group.vpro-backend-sg.id</span> <span class="hljs-comment"># id of the security group that you want to update</span>
  <span class="hljs-string">source_security_group_id</span> <span class="hljs-string">=</span> <span class="hljs-string">aws_security_group.vpro-backend-sg.id</span> <span class="hljs-comment"># From which SG id you want to allow the access</span>
  <span class="hljs-comment"># Here we want to allow the backend SG to access the backend SG itself, hence security_group_id &amp; source_security_group_id are the same</span>
}
</code></pre>
<p>Now, our security groups are created and ready to be attached to their respective instances.</p>
<h1 id="heading-backend-services">Backend Services</h1>
<h2 id="heading-db-subnet-group"><em>DB Subnet Group</em></h2>
<p>First of all, we have to create a DB subnet group. A <strong>DB Subnet Group</strong> is a collection of subnets that you can choose to use when you create a DB instance in a Virtual Private Cloud (VPC).</p>
<p>When you launch a database instance in a VPC, you need to <strong><em>specify the subnets where the DB instance will be placed</em></strong>. A DB Subnet Group allows you to specify which subnets within your VPC the database can use. This provides you with control over the network configuration of your DB instances.</p>
<p>It is a way to define and manage the subnets within your VPC where your DB instances will be deployed. It provides the necessary network configuration to ensure the availability, security, and isolation of your database instances in a VPC environment.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"aws_db_subnet_group"</span> <span class="hljs-string">"vpro-db-subnet-group"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"main"</span>
    <span class="hljs-string">subnet_ids</span> <span class="hljs-string">=</span> [<span class="hljs-string">module.vpc.private_subnets</span>[<span class="hljs-number">0</span>], <span class="hljs-string">module.vpc.private_subnets</span>[<span class="hljs-number">1</span>], <span class="hljs-string">module.vpc.private_subnets</span>[<span class="hljs-number">2</span>]]
    <span class="hljs-comment"># RDS will be placed in this subnet group, which is a collection of the 3 private subnet IDs</span>

    <span class="hljs-string">tags</span> <span class="hljs-string">=</span> {
        <span class="hljs-string">Name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Subnet Group for RDS"</span>
    }

}
</code></pre>
<h2 id="heading-elasticache-subnet-group">ElastiCache <em>Subnet Group</em></h2>
<p>Amazon ElastiCache Subnet Groups are used to specify the subnets in your Amazon Virtual Private Cloud (Amazon VPC) where you want to create your Amazon ElastiCache clusters. Similar to other AWS services like RDS (Relational Database Service), Amazon ElastiCache operates within the confines of specific subnets defined by a Subnet Group.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"aws_elasticache_subnet_group"</span> <span class="hljs-string">"vpro-elasticache-subnet-group"</span> {
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Vpro-ecache-subnetgroup"</span>
    <span class="hljs-string">subnet_ids</span> <span class="hljs-string">=</span> [<span class="hljs-string">module.vpc.private_subnets</span>[<span class="hljs-number">0</span>], <span class="hljs-string">module.vpc.private_subnets</span>[<span class="hljs-number">1</span>], <span class="hljs-string">module.vpc.private_subnets</span>[<span class="hljs-number">2</span>]]
    <span class="hljs-string">tags</span> <span class="hljs-string">=</span> {
      <span class="hljs-string">Name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Subnet Group for Elasticache"</span>
    }
}
</code></pre>
<h2 id="heading-rds-db-instance">RDS DB Instance</h2>
<p>Now, it's time to define and provision a new Amazon RDS database instance. Amazon RDS (Relational Database Service) is a managed database service provided by AWS that supports multiple database engines such as MySQL, PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"aws_db_instance"</span> <span class="hljs-string">"vpro-rds"</span> {

    <span class="hljs-string">allocated_storage</span> <span class="hljs-string">=</span> <span class="hljs-number">20</span>
    <span class="hljs-string">storage_type</span> <span class="hljs-string">=</span> <span class="hljs-string">"gp2"</span>
    <span class="hljs-string">engine</span> <span class="hljs-string">=</span> <span class="hljs-string">"mysql"</span>
    <span class="hljs-string">engine_version</span> <span class="hljs-string">=</span> <span class="hljs-string">"5.6.34"</span>
    <span class="hljs-string">instance_class</span> <span class="hljs-string">=</span> <span class="hljs-string">"db.t2.micro"</span>
    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">var.DB_NAME</span>
    <span class="hljs-string">username</span> <span class="hljs-string">=</span> <span class="hljs-string">var.DB_USER</span>
    <span class="hljs-string">password</span> <span class="hljs-string">=</span> <span class="hljs-string">var.DB_PASSWORD</span>
    <span class="hljs-string">parameter_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"default.mysql5.6"</span>
    <span class="hljs-string">multi_az</span> <span class="hljs-string">=</span> <span class="hljs-literal">false</span>                          <span class="hljs-comment"># set this to true for high availability</span>
    <span class="hljs-string">publicly_accessible</span> <span class="hljs-string">=</span> <span class="hljs-literal">false</span>               <span class="hljs-comment"># We do not want it publicly accessible; it will only be reached from within the VPC by the Beanstalk instances</span>
    <span class="hljs-string">skip_final_snapshot</span> <span class="hljs-string">=</span> <span class="hljs-literal">true</span>                <span class="hljs-comment"># "true" skips the final snapshot when the instance is destroyed, which saves cost in a demo; for production-grade infra set this to "false" so the database can be recovered after deletion</span>
    <span class="hljs-string">db_subnet_group_name</span> <span class="hljs-string">=</span> <span class="hljs-string">aws_db_subnet_group.vpro-db-subnet-group.name</span>
    <span class="hljs-string">vpc_security_group_ids</span> <span class="hljs-string">=</span> [<span class="hljs-string">aws_security_group.vpro-backend-sg.id</span>]

}
</code></pre>
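<p>Once the instance exists, other parts of the configuration (or a teammate running <code>terraform output</code>) will need its connection endpoint. A small optional addition (the output name here is my own, not from the project) could expose it:</p>
<pre><code class="lang-yaml">output "rds_endpoint" {
  value = aws_db_instance.vpro-rds.endpoint # hostname:port of the MySQL instance
}
</code></pre>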
<h1 id="heading-elastic-beanstalk">Elastic Beanstalk</h1>
<p>AWS Elastic Beanstalk automatically provisions and manages the underlying Amazon EC2 instances for your application. When you deploy an application to Elastic Beanstalk, it handles the details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. You simply need to upload your application code and Elastic Beanstalk takes care of the deployment, scaling, and maintenance of the infrastructure for you.</p>
<p>Now, the final step is setting up Beanstalk. To do so, we are going to create two pieces of configuration:</p>
<ul>
<li><p>One is application.tf, which declares the Beanstalk application</p>
</li>
<li><p>Inside that application, we then create the environment</p>
</li>
</ul>
<h2 id="heading-applicationtf"><code>application.tf</code></h2>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"aws_elastic_beanstalk_application"</span> <span class="hljs-string">"vpro-prod"</span> {

    <span class="hljs-string">name</span> <span class="hljs-string">=</span> <span class="hljs-string">"Vprofile-prod"</span>

}
</code></pre>
<p><code>resource "aws_elastic_beanstalk_application"</code>: This line declares an Elastic Beanstalk application resource in Terraform. It tells Terraform that you want to manage an Elastic Beanstalk application. When you apply this Terraform configuration using the <code>terraform apply</code> command, it will create an Elastic Beanstalk application named "Vprofile-prod" in your AWS account.</p>
<h2 id="heading-beanstalk-environment-setup">Beanstalk Environment Setup</h2>
<p>Now, we are going to set up the Elastic Beanstalk environment and configure all the settings it needs to launch our EC2 instances.</p>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"aws_elastic_beanstalk_environment"</span> <span class="hljs-string">"Vpro-bean-prod"</span> {
  <span class="hljs-string">name</span>                <span class="hljs-string">=</span> <span class="hljs-string">"Vpro-bean-prod"</span>
  <span class="hljs-string">application</span>         <span class="hljs-string">=</span> <span class="hljs-string">aws_elastic_beanstalk_application.vpro-prod.name</span>
  <span class="hljs-string">solution_stack_name</span> <span class="hljs-string">=</span> <span class="hljs-string">"64bit Amazon Linux 2 v4.3.12 running Tomcat 8.5 Corretto 11"</span> <span class="hljs-comment"># There are multiple solution stacks such as Tomcat, Docker, Go, Node.js, etc., listed in the documentation. Our preferred stack is Tomcat</span>
  <span class="hljs-string">cname_prefix</span>        <span class="hljs-string">=</span> <span class="hljs-string">"Vpro-bean-prod-domain"</span>                                       <span class="hljs-comment"># this will be the url</span>

  <span class="hljs-comment"># Elastic Beanstalk has a lot of settings for best use, now we'll define all the settings</span>
  <span class="hljs-string">setting</span> { <span class="hljs-comment"># setting { ... }: This block is configuring a setting for the Elastic Beanstalk environment.</span>

    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"VPCId"</span>
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:ec2:vpc"</span> <span class="hljs-comment"># We are putting the Beanstalk inside the VPC </span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">module.vpc.vpc_id</span>

  }

  <span class="hljs-comment"># Creating Elastic Beanstalk Role  https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-autoscalinglaunchconfiguration</span>
  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:autoscaling:launchconfiguration"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"IamInstanceProfile"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"aws-elasticbeanstalk-ec2-role"</span>
  }

  <span class="hljs-comment"># Associating public IP address https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-ec2vpc</span>
  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:ec2:vpc"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"AssociatePublicIpAddress"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"false"</span>
  }

  <span class="hljs-comment"># Now we are putting the EC2 instances in private subnets, but our load balancers will be in the public subnets</span>
  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:ec2:vpc"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"Subnets"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">join(",", module.vpc.private_subnets)</span> <span class="hljs-comment"># join is a function that joins a list into a single string</span>
    <span class="hljs-comment"># Here we are joining the subnet IDs into one comma-separated string</span>
    <span class="hljs-comment"># There are a lot of built-in Terraform functions available.</span>
  }

  <span class="hljs-comment"># The load balancer, in contrast, goes in the public subnets</span>
  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:ec2:vpc"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"ELBSubnets"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">join(",", module.vpc.public_subnets)</span> <span class="hljs-comment"># this entire expression evaluates to a single string</span>
  }

  <span class="hljs-comment"># Defining the instance type that will be launched by the Auto Scaling group</span>
  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:autoscaling:launchconfiguration"</span> <span class="hljs-comment"># this creates the launch configuration that defines the template of the EC2 instances</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"InstanceType"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"t2.micro"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:autoscaling:launchconfiguration"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"EC2KeyName"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">aws_key_pair.Vprofile-key.key_name</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:autoscaling:asg"</span> <span class="hljs-comment"># this configures the Auto Scaling group</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"Availability Zones"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"Any 3"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:autoscaling:asg"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"MinSize"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"1"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:autoscaling:asg"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"MaxSize"</span> <span class="hljs-comment"># defining the maximum number of EC2 instances that can be launched</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"8"</span>
  }

  <span class="hljs-comment"># Setting Environment variables</span>
  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:elasticbeanstalk:application:environment"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"environment"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"Prod"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:elasticbeanstalk:application:environment"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"LOGGING_APPENDER"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"GRAYLOG"</span>
  }

  <span class="hljs-comment"># Setting for monitoring the health of EC2 instance</span>
  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"elasticbeanstalk:healthreporting:system"</span> <span class="hljs-comment"># https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elasticbeanstalkhealthreporting</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"SystemType"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"enhanced"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:autoscaling:updatepolicy:rollingupdate"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"RollingUpdateEnabled"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"true"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:autoscaling:updatepolicy:rollingupdate"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"RollingUpdateType"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"Health"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:autoscaling:updatepolicy:rollingupdate"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"MaxBatchSize"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"1"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:elb:loadbalancer"</span> <span class="hljs-comment"># https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elbloadbalancer</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"CrossZone"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"true"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:elasticbeanstalk:environment:process:default"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"StickinessEnabled"</span> <span class="hljs-comment">#This option is only applicable to environments with an application load balancer.</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"true"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:elasticbeanstalk:command"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"BatchSizeType"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"Fixed"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:elasticbeanstalk:command"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"BatchSize"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"1"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:elasticbeanstalk:command"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"DeploymentPolicy"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">"Rolling"</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:autoscaling:launchconfiguration"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"SecurityGroups"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">aws_security_group.vpro-prod-sg.id</span>
  }

  <span class="hljs-string">setting</span> {
    <span class="hljs-string">namespace</span> <span class="hljs-string">=</span> <span class="hljs-string">"aws:elbv2:loadbalancer"</span>
    <span class="hljs-string">name</span>      <span class="hljs-string">=</span> <span class="hljs-string">"SecurityGroups"</span>
    <span class="hljs-string">value</span>     <span class="hljs-string">=</span> <span class="hljs-string">aws_security_group.vpro-beanstalk-elb-sg.id</span>
  }

  <span class="hljs-comment"># We are done with the settings, but Beanstalk depends on the security groups it attaches, so those security groups must be created first.</span>
  <span class="hljs-comment"># All resources listed in depends_on are created before this resource is provisioned.</span>

  <span class="hljs-string">depends_on</span> <span class="hljs-string">=</span> [<span class="hljs-string">aws_security_group.vpro-beanstalk-elb-sg</span>, <span class="hljs-string">aws_security_group.vpro-prod-sg</span>]

}
</code></pre>
<p>Here we have done all the configurations required to launch EC2 instances through Elastic Beanstalk.</p>
<h1 id="heading-bastion-host-setup-amp-db-initialization">Bastion Host Setup &amp; DB Initialization</h1>
<p>Only two things remain: first, initializing the database, for which we have SQL queries that must be run against the RDS instance sitting in a <strong>private subnet</strong>; and second, setting up a Bastion host, which is nothing but a simple EC2 instance in a public subnet.</p>
<p>Because the RDS instance is in a private subnet, we cannot run the SQL queries against it directly, and this is where the <strong>Bastion Host or Jump server</strong> comes in.</p>
<p>Once the bastion host is up, we can run the SQL file against the RDS instance from there.</p>
<h2 id="heading-bastion-host">Bastion Host</h2>
<pre><code class="lang-yaml"><span class="hljs-string">resource</span> <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"vpro-bastion"</span> {

    <span class="hljs-string">ami</span> <span class="hljs-string">=</span> <span class="hljs-string">lookup(var.AMIs</span>, <span class="hljs-string">var.AWS_REGION)</span>  <span class="hljs-comment"># the lookup() function searches the map variable "AMIs" for the key matching the region name</span>
                                            <span class="hljs-comment"># var.AWS_REGION returns the region name, and the matching AMI is selected accordingly</span>
    <span class="hljs-string">instance_type</span> <span class="hljs-string">=</span> <span class="hljs-string">"t2.micro"</span>
    <span class="hljs-string">key_name</span> <span class="hljs-string">=</span> <span class="hljs-string">aws_key_pair.Vprofile-key.key_name</span>

    <span class="hljs-string">subnet_id</span> <span class="hljs-string">=</span> <span class="hljs-string">module.vpc.public_subnets</span>[<span class="hljs-number">0</span>]        <span class="hljs-comment"># Bastion host will be created in the "first public subnet of VPC"</span>
    <span class="hljs-string">count</span> <span class="hljs-string">=</span> <span class="hljs-string">var.INSTANCE_COUNT</span>
    <span class="hljs-string">vpc_security_group_ids</span> <span class="hljs-string">=</span> [<span class="hljs-string">aws_security_group.vpro-bastionHost-sg.id</span>]
    <span class="hljs-string">tags</span> <span class="hljs-string">=</span> {
        <span class="hljs-string">Name</span> <span class="hljs-string">=</span> <span class="hljs-string">"vpro-bastion-host"</span> <span class="hljs-comment"># AWS uses the "Name" tag (capital N) for the instance display name</span>
    }

    <span class="hljs-comment"># The file provisioner is used to send the template file, or any other type of file, to the server (here, the bastion host)</span>

    <span class="hljs-string">provisioner</span> <span class="hljs-string">"file"</span> {
      <span class="hljs-string">content</span> <span class="hljs-string">=</span> <span class="hljs-string">templatefile("db-deploy.tmpl"</span>, { <span class="hljs-string">rds-endpoint</span> <span class="hljs-string">=</span> <span class="hljs-string">aws_db_instance.vpro-rds.address</span>, <span class="hljs-string">dbuser</span> <span class="hljs-string">=</span> <span class="hljs-string">var.DB_USER</span>, <span class="hljs-string">dbpass</span> <span class="hljs-string">=</span> <span class="hljs-string">var.DB_PASSWORD</span> }<span class="hljs-string">)</span>
      <span class="hljs-comment"># db-deploy.tmpl (created earlier) contains the shell script; using the file provisioner and the templatefile() function we send it to the bastion host</span>
      <span class="hljs-comment"># templatefile() is used because this is not a plain text file; it is a template that needs variables substituted at render time</span>


      <span class="hljs-string">destination</span> <span class="hljs-string">=</span> <span class="hljs-string">"/tmp/vprofile-dbdeploy.sh"</span>  <span class="hljs-comment"># the rendered db-deploy template is saved as vprofile-dbdeploy.sh in the /tmp directory</span>

    }

    <span class="hljs-comment"># The remote-exec provisioner executes commands remotely on the server (here, the bastion host)</span>

    <span class="hljs-string">provisioner</span> <span class="hljs-string">"remote-exec"</span> {
        <span class="hljs-string">inline</span> <span class="hljs-string">=</span> [ 
            <span class="hljs-string">"chmod +x /tmp/vprofile-dbdeploy.sh"</span>,       <span class="hljs-comment"># Giving the executable permission</span>
            <span class="hljs-string">"sudo /tmp/vprofile-dbdeploy.sh"</span>            <span class="hljs-comment"># this will execute the db-deploy template</span>
         ]

    }

    <span class="hljs-comment"># Telling Terraform which server to perform the above actions on</span>

    <span class="hljs-string">connection</span> {
      <span class="hljs-string">user</span> <span class="hljs-string">=</span> <span class="hljs-string">var.USERNAME</span>
      <span class="hljs-string">private_key</span> <span class="hljs-string">=</span> <span class="hljs-string">file(var.PRIVATE_KEY_PATH)</span>
      <span class="hljs-string">host</span> <span class="hljs-string">=</span> <span class="hljs-string">self.public_ip</span> <span class="hljs-comment"># self refers to the resource being created (here, the bastion host)</span>
    }

    <span class="hljs-comment"># The RDS instance must be ready before the SQL schema runs, so depends_on creates a dependency on aws_db_instance.vpro-rds</span>

    <span class="hljs-string">depends_on</span> <span class="hljs-string">=</span> [ <span class="hljs-string">aws_db_instance.vpro-rds</span> ]

}
</code></pre>
<p>With this, we can create the bastion host. However, we still have to provision the RDS instance, which already exists but is empty, via this bastion host.</p>
<p>The bastion host itself has no information about RDS, so we'll write a shell script that receives the RDS endpoint, username, and password.</p>
<p>Terraform already maintains the state of the infrastructure; we just need to extract the RDS endpoint from it.</p>
<p>To pass these values we need a <strong>template</strong>. Terraform provides a <code>templatefile(path, vars)</code> function that renders a template file with the given variables; we'll use it to send the rendered file to the Bastion Host.</p>
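<p>As a rough analogy (this is illustrative Python, not Terraform, and the endpoint, user, and password values below are made-up placeholders), <code>templatefile()</code> behaves like <code>${name}</code>-style string substitution:</p>

```python
# Illustrative analogy for Terraform's templatefile(): replace ${name}
# placeholders in a template string with supplied values.
template = "mysql -h ${rds-endpoint} -u ${dbuser} --password=${dbpass} accounts"

# Placeholder values for illustration only
variables = {
    "rds-endpoint": "vpro-rds.example.us-east-1.rds.amazonaws.com",
    "dbuser": "admin",
    "dbpass": "secret",
}

rendered = template
for name, value in variables.items():
    rendered = rendered.replace("${" + name + "}", value)

print(rendered)
# mysql -h vpro-rds.example.us-east-1.rds.amazonaws.com -u admin --password=secret accounts
```

Terraform does this substitution itself when the file provisioner runs, so the bastion host receives a fully rendered script.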
<h2 id="heading-db-deploytmpl"><code>db-deploy.tmpl</code></h2>
<p>Here is the shell script that clones the project repository and imports its SQL backup file into the RDS instance via the RDS endpoint:</p>
<pre><code class="lang-yaml"><span class="hljs-string">sudo</span> <span class="hljs-string">apt</span> <span class="hljs-string">update</span>
<span class="hljs-string">sudo</span> <span class="hljs-string">apt</span> <span class="hljs-string">install</span> <span class="hljs-string">git</span> <span class="hljs-string">mysql-client</span> <span class="hljs-string">-y</span>
<span class="hljs-string">git</span> <span class="hljs-string">clone</span> <span class="hljs-string">-b</span> <span class="hljs-string">vp-rem</span> <span class="hljs-string">https://github.com/ritesh-kumar-nayak/vprofile-project-forked.git</span>   <span class="hljs-comment"># Cloning the source code to the home directory of ubuntu user</span>
<span class="hljs-string">mysql</span> <span class="hljs-string">-h</span> <span class="hljs-string">${rds-endpoint}</span> <span class="hljs-string">-u</span> <span class="hljs-string">${dbuser}</span> <span class="hljs-string">--password=${dbpass}</span> <span class="hljs-string">accounts</span> <span class="hljs-string">&lt;</span> <span class="hljs-string">/home/ubuntu/vprofile-project-forked/src/main/resources/db_backup.sql</span>   <span class="hljs-comment"># this will import the backup.sql file from the project source code</span>
</code></pre>
<p>This shell script is delivered and executed using the <code>file</code> and <code>remote-exec</code> provisioners.</p>
<h1 id="heading-cidr-block">CIDR Block</h1>
<p>CIDR, which stands for Classless Inter-Domain Routing, is a standard syntax for specifying IP addresses and their associated routing prefix. CIDR notation allows network administrators to specify IP address ranges more flexibly than the older system of traditional IP address classes (Class A, Class B, and Class C networks).</p>
<p>In CIDR notation:</p>
<ul>
<li><p>An IP address is represented as a series of four groups of numbers, each separated by a period (e.g., 192.168.1.1).</p>
</li>
<li><p>A routing prefix is specified by appending a forward slash ("/") and a number indicating how many leading bits of the address are fixed as the network portion, leaving the remaining bits for host addresses.</p>
</li>
</ul>
<p>For example, in the CIDR block <code>192.168.1.0/24</code>:</p>
<ul>
<li><p><code>192.168.1.0</code> is the base IP address.</p>
</li>
<li><p><code>/24</code> indicates that the first 24 bits are fixed as the network portion of the address, leaving 8 bits for device addresses (2^8 = 256 addresses).</p>
</li>
</ul>
<p>CIDR notation allows for more efficient use of IP addresses and enables the creation of subnets within larger networks. Here are a few common CIDR block examples:</p>
<ul>
<li><p><code>/32</code>: Single IP address (e.g., <code>192.168.1.1/32</code>).</p>
</li>
<li><p><code>/24</code>: A typical subnet in a local network, allowing for 256 addresses (e.g., <code>192.168.1.0/24</code>).</p>
</li>
<li><p><code>/16</code>: A larger network, allowing for 65,536 addresses (e.g., <code>192.168.0.0/16</code>).</p>
</li>
<li><p><code>/8</code>: An even larger network, allowing for over 16 million addresses (e.g., <code>10.0.0.0/8</code>).</p>
</li>
</ul>
<p>When creating subnets or defining IP address ranges, you typically use CIDR notation to specify the desired range. For example, in AWS when creating a VPC, you would define the CIDR block for the entire VPC (e.g., <code>10.0.0.0/16</code>) and then create subnets within that VPC using CIDR notation (e.g., <code>10.0.1.0/24</code>).</p>
<p>Remember that CIDR blocks must be chosen carefully to avoid overlaps with existing networks. Planning your IP address space using CIDR notation is crucial to ensure efficient use of addresses and prevent conflicts in your network infrastructure.</p>
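<p>The address arithmetic above can be checked with Python's standard <code>ipaddress</code> module; a minimal sketch using the example blocks from this section:</p>

```python
import ipaddress

# The example VPC block and a subnet inside it
vpc = ipaddress.ip_network("10.0.0.0/16")
subnet = ipaddress.ip_network("10.0.1.0/24")

print(vpc.num_addresses)      # 65536 addresses in a /16 (2**16)
print(subnet.num_addresses)   # 256 addresses in a /24 (2**8)
print(subnet.subnet_of(vpc))  # True: 10.0.1.0/24 fits inside 10.0.0.0/16

# A /32 is a single host address
print(ipaddress.ip_network("192.168.1.1/32").num_addresses)  # 1

# Overlap checks help avoid the conflicts mentioned above
print(vpc.overlaps(ipaddress.ip_network("10.0.0.0/8")))  # True
```

Running a check like this before applying a Terraform plan is a quick way to confirm that subnet CIDRs actually fall inside the VPC block and do not overlap each other.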
<h1 id="heading-conclusion">Conclusion</h1>
<p>With this, we can establish the complete infrastructure for a 3-tier application and deploy the artifact to Elastic Beanstalk.</p>
]]></content:encoded></item></channel></rss>