Import PowerShell object data into SQL

Along the journey with PowerShell, you've undoubtedly had a few issues manipulating data, all of which lives in memory in the current session.  But how can we store the output so we can visualize it?  Easy: stuff it into SQL.

#SQL Server information
$SQL_Server = Read-Host "SQL Server?"
$SQL_Database = Read-Host "SQL Database?"
$SQL_Table = Read-Host "SQL Table?"

#Get a count of all sessions per host pool
Connect-AzAccount
$total = 0
$allPools = Get-AzWvdHostPool -ResourceGroupName cloud-azure-na-avd-shared-rg | Select-Object Name
foreach ($aPool in $allPools) {
    try {
        $count = (Get-AzWvdSessionHost -ResourceGroupName cloud-azure-na-avd-shared-rg -HostPoolName $aPool.Name).Count
        Write-Host $aPool.Name, $count
        $total += $count

        #Insert the data, using the table name we prompted for above
        $insert_data = "INSERT INTO $SQL_Table ([ColumnName1], [ColumnName2], [ColumnName3]) VALUES ('$(Get-Date -Format 'yyyy/MM/dd HH:mm')','$($aPool.Name)','$([int]$count)');"
        Invoke-Sqlcmd -ServerInstance $SQL_Server -Database $SQL_Database -Query $insert_data
    }
    catch {
        Write-Host "Insert failed for $($aPool.Name)"
        throw
    }
}
Write-Host "Total Count: $total"

Discover if VMs have backups configured

Okay, so you can go to each VM blade if you want, or you can cross-reference against what already exists in the Recovery Services Vault, but what if you have multiple tenants? It can get ugly, fast. So, here, let me FTFY with a script. It spins through all VMs, discovers whether a backup is configured, grabs the info about the last backup, and adds that to the report. For any VMs without a backup, you'll have the option to configure one for all of them, or for a single VM, depending on your choice.

param
( 
    [parameter(Mandatory=$true)]
    [string] $subscriptionId
)
Connect-AzAccount
# Set Azure context
$context = Set-AzContext -SubscriptionId $subscriptionId
#Collecting Azure virtual machines Information
Write-Host "Collecting Azure virtual machine Information" -BackgroundColor DarkBlue
$vms = Get-AzVM
#Collecting All Azure backup recovery vaults Information
Write-Host "Collecting all Backup Recovery Vault information" -BackgroundColor DarkBlue
$backupVaults = Get-AzRecoveryServicesVault
$list = [System.Collections.ArrayList]::new()
$vmBackupReport = [System.Collections.ArrayList]::new()
foreach ($vm in $vms) {
    $recoveryVaultInfo = Get-AzRecoveryServicesBackupStatus -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Type 'AzureVM'
    if ($recoveryVaultInfo.BackedUp -eq $true) {
        Write-Host "$($vm.Name) - BackedUp : Yes" -BackgroundColor DarkGreen
        #Backup Recovery Vault information
        $vmBackupVault = $backupVaults | Where-Object {$_.ID -eq $recoveryVaultInfo.VaultId}
        #Backup Recovery Vault policy information
        $container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM -VaultId $vmBackupVault.ID -FriendlyName $vm.Name
        $backupItem = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM -VaultId $vmBackupVault.ID
    }
    else {
        Write-Host "$($vm.Name) - BackedUp : No" -BackgroundColor DarkRed
        [void]$list.Add([PSCustomObject]@{
            VM_Name = $vm.Name
            VM_ResourceGroupName = $vm.ResourceGroupName
        })
        $vmBackupVault = $null
        $container = $null
        $backupItem = $null
    }
    [void]$vmBackupReport.Add([PSCustomObject]@{
        VM_Name = $vm.Name
        VM_Location = $vm.Location
        VM_ResourceGroupName = $vm.ResourceGroupName
        VM_BackedUp = $recoveryVaultInfo.BackedUp
        VM_RecoveryVaultName = $vmBackupVault.Name
        VM_RecoveryVaultPolicy = $backupItem.ProtectionPolicyName
        VM_BackupHealthStatus = $backupItem.HealthStatus
        VM_BackupProtectionStatus = $backupItem.ProtectionStatus
        VM_LastBackupStatus = $backupItem.LastBackupStatus
        VM_LastBackupTime = $backupItem.LastBackupTime
        VM_BackupDeleteState = $backupItem.DeleteState
        VM_BackupLatestRecoveryPoint = $backupItem.LatestRecoveryPoint
        VM_Id = $vm.Id
        RecoveryVault_ResourceGroupName = $vmBackupVault.ResourceGroupName
        RecoveryVault_Location = $vmBackupVault.Location
        RecoveryVault_SubscriptionId = $vmBackupVault.SubscriptionId
    })
}
Do{
$choices = @(
	("&E - Exit"),
	("&1 - Export vmBackupReport to CSV"),
	("&2 - View and Assign BU Policy to all VMs"),
	("&3 - View and Assign BU Policy to a single VM")
)
$choicedesc = New-Object System.Collections.ObjectModel.Collection[System.Management.Automation.Host.ChoiceDescription]
for($i=0; $i -lt $choices.Length; $i++){
	$choicedesc.Add((New-Object System.Management.Automation.Host.ChoiceDescription $choices[$i])) }
[int]$defchoice = 0
$action = $host.UI.PromptForChoice("VM Backup Report", "Choose an action", $choicedesc, $defchoice)
Switch ($action)
{
 0 {
		Write-Output "Exited Function."
        Exit
	}
 1 {
		$vmBackupReport | Export-Csv -Path .\vmbackupstatus.csv -NoTypeInformation
        Write-Host "Exported to .\vmbackupstatus.csv!" -ForegroundColor Magenta -BackgroundColor Black
        Exit
	}
 2 {
        $list | Out-String
        if ($list.Count -eq 0)
        {
            Write-Host "Filtered VM List is empty" -ForegroundColor Yellow -BackgroundColor Black
            Write-Host "There are no VM's that need Backup Policy Assigned..." -ForegroundColor Yellow -BackgroundColor Black
            Write-Host ""
        }
        else
        {
            Get-AzRecoveryServicesVault -Name $backupVaults[0].Name | Set-AzRecoveryServicesVaultContext
            $Pol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
            $Pol
            Write-Host "Assigning Backup Policy to all VMs" -BackgroundColor DarkBlue
            foreach ($vm in $list){
                $config = Enable-AzRecoveryServicesBackupProtection -Policy $Pol -Name "$($vm.VM_Name)" -ResourceGroupName "$($vm.VM_ResourceGroupName)" | Select-Object -Property "WorkloadName" 
                Write-Host "$($config.WorkloadName) has backup policy $($pol.Name) assigned!" -BackgroundColor -DarkGreen
            }
            Write-Host "Done assigning BU Policy to Resources!" -ForegroundColor Yellow -BackgroundColor Black
            Write-Host ""
        }
 	}
 3 {
        $list | Out-String
        $name = Read-Host -Prompt "Name of VM to be backed up"
        # Ensure the user entry matches a machine name in the filtered list. If it doesn't, meaning the VM already has a backup, why are you doing that with this tool?
        $target = $list | Where-Object { $_.VM_Name -eq $name }
        if ($target) {
            Get-AzRecoveryServicesVault -Name $backupVaults[0].Name | Set-AzRecoveryServicesVaultContext
            $Pol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
            $Pol
            $config = Enable-AzRecoveryServicesBackupProtection -Policy $Pol -Name "$($target.VM_Name)" -ResourceGroupName "$($target.VM_ResourceGroupName)" | Select-Object -Property "WorkloadName"
            Write-Host "$($config.WorkloadName) has backup policy $($Pol.Name) assigned!" -BackgroundColor DarkGreen
        }
        else {
            Write-Host "Entry does not match any Names in Filtered VM List" -ForegroundColor Yellow -BackgroundColor Black
        }
        Write-Host ""
    }
}
$repeat = Read-Host "Repeat? (Y/N)"
}
While ($repeat -eq "Y")
Write-Host "EXITING... " -ForegroundColor Yellow -BackgroundColor Black
Write-Host ""
Disconnect-AzAccount > $null
Write-Host "ACCOUNT HAS BEEN DISCONNECTED" -ForegroundColor Yellow -BackgroundColor Black
#end
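
To run it, pass the subscription ID when calling the script. A minimal usage sketch, assuming you saved it as Get-VmBackupReport.ps1 (the GUID below is a placeholder):

.\Get-VmBackupReport.ps1 -subscriptionId "00000000-0000-0000-0000-000000000000"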

AVD Alerts in Terraform

Following up on my post here: https://seehad.tech/2021/08/26/add-robust-monitoring-of-azure-virtual-desktop-using-azure-monitor-alerts/, I've put these alerts into a Terraform module.  You can find the module here: https://github.com/chad-neal/avdtf-with-modules.


module rg {
  source = "../RG"
}
resource "azurerm_monitor_action_group" "email" {
  name                = "Email Desk"
  resource_group_name = module.rg.rg_name
  short_name          = "Email"
  email_receiver {
    name          = "Email"
    email_address = "Azure_Alerts@emaildomain.com"
    use_common_alert_schema = true
  }
}
resource "azurerm_monitor_activity_log_alert" "avd-service-health" {
  name                = "${var.client_name} - AVD Service Health"
  resource_group_name = module.rg.rg_name
  scopes              = [module.rg.rg_id]
  description         = "This alert will monitor AVD Service Health."
  criteria {
    category = "ServiceHealth"
    service_health {
      events = [
        "Incident",
        "ActionRequired",
        "Security"
      ]
      locations = [
        "East US",
        "East US 2",
        "Global",
        "South Central US",
        "West US",
        "West US 2"
      ]
      services = ["Windows Virtual Desktop"]
    }
  }
  action {
    action_group_id = azurerm_monitor_action_group.email.id
  }
}
resource "azurerm_monitor_scheduled_query_rules_alert" "avd-no-resources" {
  name                = "${var.client_name} - AVD 'No available resources'"
  location            = module.rg.rg_location
  resource_group_name = module.rg.rg_name
  data_source_id      = var.workspace_id
  description         = "This alert will monitor AVD for error 'No Available Resources'."
  action {
    action_group      = azurerm_monitor_action_group.email.id
  }
  enabled             = true
  severity            = 1
  frequency           = 15
  time_window         = 15 # must be >= frequency
  query               = <<-QUERY
  WVDErrors
  | where CodeSymbolic == "ConnectionFailedNoHealthyRdshAvailable" and Message contains "Could not find any SessionHost available in specified pool"
QUERY
  trigger {
    operator          = "GreaterThan"
    threshold         = 20
  }
}
resource "azurerm_monitor_scheduled_query_rules_alert" "avd-host-mem-below-gb" {
  name                = "${var.client_name} - AVD Available Host Memory below 1GB"
  location            = module.rg.rg_location
  resource_group_name = module.rg.rg_name
  data_source_id      = var.workspace_id
  description         = "This alert will be triggered when Available Host Memory is less than 1GB."
  action {
    action_group      = azurerm_monitor_action_group.email.id
  }
  enabled             = true
  severity            = 2
  frequency           = 15
  time_window         = 15 # must be >= frequency
  query               = <<-QUERY
  Perf
  | where ObjectName == "Memory"
  | where CounterName == "Available Mbytes"
  | where CounterValue <= 1024
QUERY
  trigger {
    operator          = "GreaterThanOrEqual"
    threshold         = 1
  }
}
resource "azurerm_monitor_scheduled_query_rules_alert" "avd-failed-connections" {
  name                = "${var.client_name} - AVD Failed Connections"
  location            = module.rg.rg_location
  resource_group_name = module.rg.rg_name
  data_source_id      = var.workspace_id
  description         = "This alert will be triggered when there's more than 10 failed AVD connections in 15 minutes."
  action {
    action_group      = azurerm_monitor_action_group.email.id
  }
  enabled             = true
  severity            = 2
  frequency           = 5
  time_window         = 15
  query               = <<-QUERY
WVDConnections
  | where State =~ "Started" and Type =~ "WVDConnections"
  | extend Multi=split(_ResourceId, "/") | extend CState=iff(SessionHostOSVersion=="<>","Failure","Success")
  | where CState =~ "Failure"
  | order by TimeGenerated desc
  | where State =~ "Started" | extend Multi=split(_ResourceId, "/")
  | project ResourceAlias, ResourceGroup=Multi[4], HostPool=Multi[8], SessionHostName, UserName, CState=iff(SessionHostOSVersion=="<>","Failure","Success"), CorrelationId, TimeGenerated
  | join kind= leftouter (WVDErrors) on CorrelationId
  | extend DurationFromLogon=datetime_diff("Second",TimeGenerated1,TimeGenerated)
  | project TimeStamp=TimeGenerated, DurationFromLogon, UserName, ResourceAlias, SessionHost=SessionHostName, Source, CodeSymbolic, ErrorMessage=Message, ErrorCode=Code, ErrorSource=Source, ServiceError, CorrelationId
  | order by TimeStamp desc
QUERY
  trigger {
    operator          = "GreaterThanOrEqual"
    threshold         = 10
  }
}
resource "azurerm_monitor_scheduled_query_rules_alert" "avd-fslogix-errors" {
  name                = "${var.client_name} - AVD FSLogix Errors"
  location            = module.rg.rg_location
  resource_group_name = module.rg.rg_name
  data_source_id      = var.workspace_id
  description         = "This alert will be triggered when there's more than 1 FSLogix Errors in 5 minutes."
  action {
    action_group      = azurerm_monitor_action_group.email.id
  }
  enabled             = true
  severity            = 2
  frequency           = 5
  time_window         = 5
  query               = <<-QUERY
  Event 
  | where EventID == "26" and isnotnull(Message) 
  | where Message != "" 
  | where UserName != "NT AUTHORITY\\SYSTEM" 
  | order by TimeGenerated desc
QUERY
  trigger {
    operator          = "GreaterThanOrEqual"
    threshold         = 1
  }
}
resource "azurerm_monitor_scheduled_query_rules_alert" "avd-out-of-memory" {
  name                = "${var.client_name} - AVD Host Out of Memory Errors"
  location            = module.rg.rg_location
  resource_group_name = module.rg.rg_name
  data_source_id      = var.workspace_id
  description         = "This alert will be triggered when there's more than 20 Out of Memory Errors in 30 minutes."
  action {
    action_group      = azurerm_monitor_action_group.email.id
  }
  enabled             = true
  severity            = 1
  frequency           = 5
  time_window         = 30
  query               = <<-QUERY
  WVDErrors
  | where CodeSymbolic == "OutOfMemory" and Message contains "The user was disconnected because the session host memory was exhausted."
QUERY
  trigger {
    operator          = "GreaterThanOrEqual"
    threshold         = 20
  }
}
resource "azurerm_monitor_scheduled_query_rules_alert" "avd-high-cpu" {
  name                = "${var.client_name} - AVD Host % Proc Time Greater Than 99"
  location            = module.rg.rg_location
  resource_group_name = module.rg.rg_name
  data_source_id      = var.workspace_id
  description         = "This alert will be triggered when there's more than 50 High CPU alerts in 10 minutes."
  action {
    action_group      = azurerm_monitor_action_group.email.id
  }
  enabled             = true
  severity            = 1
  frequency           = 5
  time_window         = 10
  query               = <<-QUERY
  Perf   
  | where CounterName == "% Processor Time"
  | where InstanceName == "_Total"
  | where CounterValue >= 99
QUERY
  trigger {
    operator          = "GreaterThanOrEqual"
    threshold         = 50
  }
}
resource "azurerm_monitor_metric_alert" "avd-pct-proc-pagefile" {
  name                = "${var.client_name} - AVD Pct Processor committed bytes utilization"
  resource_group_name = module.rg.rg_name
  scopes              = [var.workspace_id]
  description         = "Action will be triggered when Average % of Committed Bytes in Use is greater than 80."
  enabled             = false
  frequency           = "PT5M"
  window_size         = "PT5M"
  severity            = 2
  criteria {
    metric_namespace = "Microsoft.OperationalInsights/workspaces"
    metric_name      = "Average_% Committed Bytes In Use"
    aggregation      = "Maximum"
    operator         = "GreaterThanOrEqual"
    threshold        = 80
    dimension {
      name     = "ApiName"
      operator = "Include"
      values   = ["*"]
    }
  }
  action {
    action_group_id = azurerm_monitor_action_group.email.id
  }
}
resource "azurerm_monitor_metric_alert" "avd-sa-capacity" {
  name                  = "${var.client_name} - AVD Storage Account Capacity Alert"
  resource_group_name   = module.rg.rg_name
  scopes                = [var.storageacct_id]
  description           = "Action will be triggered when Storage Account Capacity is close to full."
  enabled               = true
  frequency             = "PT5M"
  window_size           = "PT1H"
  severity              = 1
  target_resource_type  = "Microsoft.Storage/storageAccounts/fileServices"
  target_resource_location = var.storageacct_region
  criteria {
    metric_namespace = "microsoft.storage/storageaccounts/fileservices"
    metric_name      = "FileCapacity"
    aggregation      = "Average"
    operator         = "GreaterThanOrEqual"
    threshold        = var.storageacct_threshold_bytes
    dimension {
      name     = "FileShare"
      operator = "Include"
      values   = ["fshare"]
    }
  }
  action {
    action_group_id = azurerm_monitor_action_group.email.id
  }
}

Windows Virtual Desktop (WVD) 2019 Assessment questions

As part of achieving the Microsoft WVD/AVD Advanced Specialty partner level, it is required that your organization pass the 2019 WVD Assessment.

Find corrupt VHDX profiles in Azure Files

Edit (05/24/2022): We finally discovered the cause of all of our hosts' daily stress and alerts, and solved this problem in the process.  We switched from Depth-first to Breadth-first load balancing.  This stopped piling users onto one server before moving to another, so processing and RAM usage rose steadily instead of spiking, reducing all kinds of issues.


If you haven't had the pleasure of dealing with this, be thankful. There is some set of parameters which, when met, corrupts a user's mounted VHDX, causing them to receive messages from the OS that their disk needs to be repaired, and/or they're logged in with a temp profile. Once this is detected and the user logs off and back on again, FSLogix creates a new VHDX in the same directory and renames the original VHDX, prepending "CORRUPT" to the filename. If you can't tell, this is bad... mmmkay? If you don't have OneDrive or some other backup enabled, the user just lost everything saved to their profile.


I have gone round and round with Microsoft Support about this problem. The conclusion of which is that when an AVD host is heavily utilized to the point of throwing error messages related to CPU or RAM usage/exhaustion, this CAN cause corruption. What the actual set of parameters is that causes this corruption is unknown or not fully understood. Microsoft's recommendation is to add more host resources so it doesn't get to the point of CPU/RAM exhaustion. Fine, fair point, but still, c'mon guys... You own AVD/FSLogix, which means this renaming logic is coded somewhere, and you don't know either?! Doubtful.


Anyways, corrupted profiles are one problem, but what about all these orphaned disks lying around, taking up space, that literally can't be fixed, can't be mounted somewhere else, can't be used at all? In some of our deployments, this equaled about 2-3TB of used space. At about $120/100GB/month of provisioned space, this could not be overlooked. So, I took my other script from here: https://seehad.tech/2021/08/24/searching_azure_file_share_to_match_string/ and modified it to search for CORRUPT profiles.
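
The gist of the modified script, as a minimal sketch rather than the full version: list each profile directory in the share and flag anything whose filename starts with "CORRUPT". The storage account, key, and share names below are placeholders.

#Connect to the share (placeholder account/share names; $key holds the storage account key)
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey $key
#Each user profile lives in its own directory, so list each directory's contents
Get-AzStorageFile -ShareName "fshare" -Context $ctx |
    Where-Object { $_.GetType().Name -eq "AzureStorageFileDirectory" } |
    Get-AzStorageFile |
    Where-Object { $_.Name -like "CORRUPT*" } |
    Select-Object Name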

Use PowerShell to interact with REST APIs

APIs are quickly becoming foundational for every SaaS product out there. They provide a gateway for interacting with a product without having to go through the exercise of a full integration. You can use all kinds of methods and code languages to interact with APIs. This is just how PowerShell does it.


param(
    [Parameter(Mandatory=$true)]
    [string] $accountEndpoint,

    [Parameter(Mandatory=$true)]
    [string] $client_id,

    [Parameter(Mandatory=$true)]
    [string] $client_secret
)
$DateStamp = Get-Date -UFormat "%Y-%m-%d@%H-%M-%S"

$token = Invoke-RestMethod -Method Post -Uri "https://$($accountEndpoint)/auth/connect/token" `
    -Body @{
        grant_type="client_credentials";
        client_id=$client_id;
        client_secret=$client_secret;
        scope="api"
    }

Invoke-RestMethod -Method Get -Uri "https://api.cloudcheckr.com/api/best_practice.json/get_best_practices_v3?access_key=bearer $($token.access_token)&use_account=All%20Azure%20Accounts" | ConvertTo-Json | Out-File ".\data\azure_best_practice_checks_$($DateStamp).json"


Note: Invoke-RestMethod automatically converts JSON output into PowerShell objects, which is why I needed to convert it back before writing the file. Invoke-WebRequest can also be used and is better for dealing with HTML results.
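
For comparison, a quick sketch of the same call with Invoke-WebRequest, which hands back the raw response instead of parsed objects:

$response = Invoke-WebRequest -Method Get -Uri "https://api.cloudcheckr.com/api/best_practice.json/get_best_practices_v3?access_key=bearer $($token.access_token)&use_account=All%20Azure%20Accounts"
$response.StatusCode   #HTTP status code, e.g. 200
$response.Content      #raw JSON string, exactly as the API returned it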


This example gets the Best Practice Checks available from CloudCheckr. CloudCheckr is a tool that scans an Azure tenant, reads all kinds of information about it, and displays that information without you having to log in to the Azure Portal itself. It provides insight into, and checks to ensure, best practices are followed for things like Network Security Groups having all inbound ports enabled (which is dumb, don't do dumb shit). It also scans VMs' usage properties and offers suggestions for cost savings by reducing a VM's size or combining workloads from multiple "idle" VMs. There are other tools out there that do this, like Flexera and vCommander. These fall into the category of Cloud Management Platforms: a layer on top of cloud resources that orchestrates them and allows a company like a Managed Services Provider to give Customer business units access without having to onboard them directly into the native cloud environment.

Getting a job in Devops (or Tech) or a new one

So you want to get a job in Devops.  Or maybe the first step is asking what Devops even is...?


Devops is the new tech buzzword.  It's the combination of code, containerization, and infrastructure, and the pursuit of automating all of it.  Getting a job in Devops means you understand lots of aspects of modern computing, but it's far from an extension of your typical OS or desktop experience.


Typically, you start out using the console/GUI to do things.  In Windows Server, that's similar to a Windows desktop experience.  You have all the familiar things like the Start button and wizards for installing and configuring applications.  In Linux, this is more abstracted: you're using Bash, or maybe Gnome or something else, but primarily a terminal to configure and execute.  Abstract it further with virtualization, where maybe you're using VMware, Hyper-V, or public cloud infrastructure and mounting disks from a serial console.  However you use computing, you usually start out at a console.


Now do it using code.  PowerShell for Windows, Chef/Puppet/Ansible for Linux.  Use images in the cloud, CloudFormation in AWS, AzureRM/JSON in Azure.  Whatever you're doing by button-fucking, do it with code, and now you're on your way to being a Devops Engineer.  Pass some certification tests.  This doesn't mean you know everything.  In fact, it means you know the basics.  Certs represent a baseline of understanding, much like getting a degree in college.  They represent a commitment to a subject or subjects, but they are far from being as valuable as real-world experience.  They represent your desire to learn so much about a subject that a manufacturer of that product says, "Yup, this person knows some shit."


Get a job in tech.  You may have to start out doing shit you don't like, like Desktop Support.  I started out as a Helpdesk tech in a local municipality.  It sucked, but I showed initiative at work, and I sponged up as much information as possible.  Within a year, I was promoted to a Tier 2 position.  I kept learning more about enterprise networking and routing, virtualization in a datacenter environment using VMware, physical datacenter servers, physical SANs, and related media.  Within another year, I was promoted to a System Admin (Tier 3).  This is where I really cut my teeth with private cloud infrastructure.  I stood up and replaced servers, I was responsible for data backups, and I managed and maintained Active Directory, Exchange, File Server Resource Manager, and Systems Center.  I implemented new changes that brought value, like quotas on SMB shares, and disaster recovery built on high-availability and fault-tolerant architectures.  As soon as I vested and decided I was ready for a change, I found a job at a Managed Services Provider.


These are companies that sell IT as a service.  Here, I was exposed to hundreds of environments.  It teaches you how smart, capable people fuck things up. It teaches you about leveraging best practices. It teaches you why you never want to do things certain ways. And you'll spend countless late nights deciphering someone's bad decision. You'll probably have to take out some garbage for a while.


Then I shifted to Public Cloud. I got all my certifications: 9 in Azure, 2 in AWS, 1 in GCP. I spent a year and a half, heads down, studying, learning, passing tests. I proved to people that I knew the basics. Combine that with my actual experience and demonstrable skills in understanding how computing, networking, systems, and virtualization work, and I had 10x more hardened experience than anyone else. Right now, the market is flooded with two kinds of people: those who never touched traditional infrastructure, and those who think they know everything already. Being in the middle is a rare commodity. Most people coming into Devops come from a software engineering background, where they learned how to do some cloud because they needed it for software deployment reasons. There is a real gap for people who learned on traditional infrastructure, Windows, VMware or Hyper-V, and then moved into Cloud.


Along the way, I picked up PowerShell. I use it every day. I used it to manage Server, Exchange, and Active Directory; if I could do it in a GUI, I wanted to do it with PowerShell. I work every day to eliminate toil. I don't like button-fucking. If it takes 2-3x as long to do it with code, sure, it sucks now, but it's going to save me hours on the other end when I'm using it. I promise and I deliver.  The only thing consistent in IT is that everything is going to change.  So, roll with the punches.


I updated my LinkedIn.  I set the flag that says I'm "Open to work".  I put in "Cloud Engineer, Devops Engineer, Cloud Support" as the roles I wanted to hear about.  Recruiters started to hit me up.  Even if I didn't like the opportunity, I still responded and said no thank you.  This increased my "score" rank in recruiter searches.  No matter what, I don't like my time being wasted, and certain requirements, like a high enough salary, are deal-breakers.  Don't be afraid to advocate for yourself, to say no, and to tell recruiters what you need in order to make a move.  In my opinion, most have appreciated the transparency.


I'm never not interviewing. It's the way I always sound confident. When you've answered a similar question 50 times, you sound like you know the answer.  Finally, if you like this kind of stuff and want to take it to the next level, learn about CI/CD: dev cycles, methods, and pipelines. GitHub, GitLab, Bitbucket, Azure Devops. ITIL processes. Cloud Adoption Framework. Well-Architected Framework. Sign up for the Azure free tier and find a Terraform course that walks you through step by step. I use Whizlabs.com for all my cert training.


Learn Terraform and you'll become one of the highest-paid Engineers on the planet.  Everything you just went through, years of learning about virtualization, using and running Windows or Linux servers, understanding enterprise networking and routing, executing PowerShell or Bash, all becomes useful when you start to use Terraform.  Infrastructure as code is the foundational building block of Tier 0 Devops and automation.  Terraform isn't the only thing that does this.  CloudFormation, AzureRM templates/JSON, etc. are the same, but different.  Building resources with code means that whatever you do with it is repeatable, and the configuration is uniform and reliable.  It can be used for auditing, to understand what's changed, and to manage configuration drift across business units.  Customer A wants a thing, bam, execute some code, and save your business hundreds of hours of configuration and Mr. Legacy-Engineer-who-needs-to-retire's opinion about how a thing should work or be named.  It's no longer relevant.  There's a generally understood (best-practice) way to implement everything in public cloud, and you better believe that it fucking matters.  Provide instant value.  Value to your business: copy the code, sync it to a new repo, make some changes to the names, and bam, execute it for Customer B.  Instant value to the customer; time to market is negligible, less than one day.  Not sure you remember everything about how you turned on backups for Customer A?  Doesn't matter, it's in the code.


Your mileage may vary, but this formula is what worked for me.  Working somewhere for 20 years and getting a gold watch doesn't exist anymore.  The quickest way to being valued appropriately, and to doing something you want to do instead of something you have to do, is to find someone else who will value you or what you want to do.  Take what I've written all over this site (it's free of charge) and really understand how it works.  Build on it, execute it, fuck it up; at some point, you'll get it right.  Reflect on what you did to get it to work.  Ask me for help.  I will help the shit out of you 🙂


You're going to do awesome.  Connect with me on LinkedIn: https://me.seehad.tech.

Terraform with Modules example for basic AVD(or any Azure) Environment

Edit (03/09/2022):  To anyone who might have been trying to use this: after receiving some usage feedback, I've made a ton of changes to turn this into something that actually works.  Officially v1.0.


I completed some TF code that should make lives easier.  It builds all the basics necessary for an AVD environment, but it could really be used to build the basics of any Azure environment.  It's modularized, so the root main.tf file is used to provide different variables if needed, but there's a default provided for almost everything.


Find the public repo here: https://github.com/chad-neal/avdtf-with-modules.


To make use of this, clone the repo.  In the root main.tf you'll find variable declarations in module blocks.  These link to the same variables declared in each module's variables.tf.  Specify them in the root to change the defaults.  Or don't, if you're happy with what I've done.  Comment out, or delete, the module blocks that you don't need or want to use.  Running terraform apply looks at the root main.tf, then reads each child module's configuration to decide which resources to build.
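
The basic flow looks like this, assuming Terraform is installed and you're already authenticated to Azure:

git clone https://github.com/chad-neal/avdtf-with-modules.git
cd .\avdtf-with-modules
terraform init
terraform apply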


Anywhere you find a reference like module.rg.rg_name, for example, it links to the outputs.tf file stored with the module.  In this example, in ./Modules/RG/outputs.tf, there's an output named "rg_name".  Its value comes right from the main.tf of ./modules/rg, where I specify the configuration for the resource.  Keep in mind that you can concatenate names, use wildcards, and do all kinds of other things in outputs.tf to meet your needs.


Enjoy!