
Wednesday, 3 January 2024

11. MS-SQL Server failover cluster setup, step by step.

 


Setting up a Microsoft SQL Server failover cluster involves several steps. Below is a high-level, step-by-step guide to configuring a SQL Server failover cluster instance on Windows Server.

Prerequisites:

At least two servers running Windows Server with Failover Clustering feature installed.

Shared storage accessible by all cluster nodes.

SQL Server installation media or installation files.

Step 1: Prepare the Environment

Install Windows Failover Clustering:

Install the Failover Clustering feature on each server.

Configure network settings and ensure all nodes can communicate with each other.

Configure Shared Storage:

Set up shared storage that all cluster nodes can access. This is typically done using a SAN (Storage Area Network) or other shared storage solution.
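For example, the Step 1 tasks can be scripted with PowerShell. This is a minimal sketch; the node names, cluster name, and IP address below are placeholders for your own environment:

# Run on each node: install the Failover Clustering feature and management tools
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Run once, from any node: create the Windows failover cluster
# (node names, cluster name, and static IP are placeholders)
New-Cluster -Name "WINCLUSTER" -Node "NODE1", "NODE2" -StaticAddress "10.0.0.10"

# Add the shared disks to the cluster so SQL Server setup can use them
Get-ClusterAvailableDisk | Add-ClusterDisk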

Step 2: Install SQL Server Failover Cluster Instance

Install SQL Server:

Run the SQL Server setup on the first node.

Choose "New SQL Server failover cluster installation."

Follow the installation wizard, specifying instance name, SQL Server features, and configuration options.

Configure SQL Server Failover Cluster:

During installation, configure the SQL Server Failover Cluster instance.

Specify the SQL Server Network Name and IP Address.

Configure SQL Server instance settings such as authentication mode and SQL Server administrators.

Complete SQL Server Installation:

Complete the SQL Server installation on the first node.
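If you prefer automation, the same first-node installation can be driven from the SQL Server setup command line. This is a hedged sketch: every name, account, disk, and address below is a placeholder, and the exact parameters should be checked against the setup documentation for your SQL Server version.

setup.exe /QS /ACTION=InstallFailoverCluster /FEATURES=SQLENGINE ^
  /INSTANCENAME="MSSQLSERVER" ^
  /FAILOVERCLUSTERNETWORKNAME="SQLCLUST" ^
  /FAILOVERCLUSTERIPADDRESSES="IPv4;10.0.0.11;Cluster Network 1;255.255.255.0" ^
  /FAILOVERCLUSTERDISKS="Cluster Disk 1" ^
  /SQLSVCACCOUNT="DOMAIN\sqlsvc" /SQLSVCPASSWORD="<placeholder>" ^
  /AGTSVCACCOUNT="DOMAIN\sqlagt" /AGTSVCPASSWORD="<placeholder>" ^
  /SQLSYSADMINACCOUNTS="DOMAIN\DBA-Group" ^
  /IACCEPTSQLSERVERLICENSETERMS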

Step 3: Add Additional Nodes to the Cluster

Install SQL Server:

Run the SQL Server setup on the additional nodes.

Choose "Add node to a SQL Server failover cluster."

Join the Cluster:

Specify the SQL Server instance to add the node to.

Follow the installation wizard to join the node to the existing SQL Server failover cluster.

Complete Installation:

Complete the SQL Server installation on the additional nodes.
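The add-node step can be scripted the same way. Again a sketch with placeholder values; run it on each node you want to join:

setup.exe /QS /ACTION=AddNode /INSTANCENAME="MSSQLSERVER" ^
  /SQLSVCACCOUNT="DOMAIN\sqlsvc" /SQLSVCPASSWORD="<placeholder>" ^
  /AGTSVCACCOUNT="DOMAIN\sqlagt" /AGTSVCPASSWORD="<placeholder>" ^
  /IACCEPTSQLSERVERLICENSETERMS /CONFIRMIPDEPENDENCYCHANGE=0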

Step 4: Validate the Failover Cluster

Run Cluster Validation:

Open the Failover Cluster Manager on one of the nodes.

Run the "Validate a Configuration" wizard to ensure the cluster is configured correctly.

Resolve any Issues:

Address any issues reported by the cluster validation tool.
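The same validation can be run from PowerShell; a minimal sketch with placeholder node names:

# Validate the cluster configuration and review the generated HTML report for warnings and errors
Test-Cluster -Node "NODE1", "NODE2"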

Step 5: Test Failover

Failover Testing:

Use the Failover Cluster Manager to test failover by moving the SQL Server resources to another node.

Verify Functionality:

After a failover, verify that the SQL Server instance is accessible and operational.
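Failover can also be exercised from PowerShell. A sketch; the role name below is the typical default for a default instance and may differ in your cluster:

# List the clustered roles, then move the SQL Server role to another node
Get-ClusterGroup
Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node "NODE2"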

Keep in mind that this is a basic guide; refer to the official documentation for your specific versions of SQL Server and Windows Server for more detailed and up-to-date instructions. It is also crucial to thoroughly test your failover configuration in a controlled environment before deploying it to production.

 

Monday, 18 December 2023

10. Retrieve Azure VM details by using PowerShell.



# Connect to the Azure account
Connect-AzAccount

# Create the report array and the output file name
$report = @()
$reportName = "VM-Details.csv"

# Read the subscription IDs from a text file (one ID per line)
$SubscriptionIds = Get-Content -Path "c:\inputs\Subscriptions.txt"

foreach ($SubscriptionId in $SubscriptionIds)
{
    # Select the subscription
    Select-AzSubscription $SubscriptionId

    # Get all the VMs from the selected subscription
    $vms = Get-AzVM

    # Get all the public IP addresses
    $publicIps = Get-AzPublicIpAddress

    # Get all the network interfaces that are attached to a VM
    $nics = Get-AzNetworkInterface | Where-Object { $_.VirtualMachine -ne $null }

    foreach ($nic in $nics)
    {
        # Create the report row; the header covers a maximum of 5 data disks, but you can extend it based on your needs
        $ReportDetails = "" | Select-Object VmName, ResourceGroupName, Region, VmSize, VirtualNetwork, Subnet, PrivateIpAddress, OsType, PublicIPAddress, NicName, ApplicationSecurityGroup, OSDiskName, OSDiskCaching, OSDiskSize, DataDiskCount, DataDisk1Name, DataDisk1Size, DataDisk1Caching, DataDisk2Name, DataDisk2Size, DataDisk2Caching, DataDisk3Name, DataDisk3Size, DataDisk3Caching, DataDisk4Name, DataDisk4Size, DataDisk4Caching, DataDisk5Name, DataDisk5Size, DataDisk5Caching

        # Match the NIC to its VM by resource ID
        $vm = $vms | Where-Object -Property Id -eq $nic.VirtualMachine.Id

        # Match the NIC's IP configuration to a public IP address, if one is attached
        foreach ($publicIp in $publicIps)
        {
            if ($nic.IpConfigurations.Id -eq $publicIp.IpConfiguration.Id)
            {
                $ReportDetails.PublicIPAddress = $publicIp.IpAddress
            }
        }

        $ReportDetails.OsType = $vm.StorageProfile.OsDisk.OsType
        $ReportDetails.VmName = $vm.Name
        $ReportDetails.ResourceGroupName = $vm.ResourceGroupName
        $ReportDetails.Region = $vm.Location
        $ReportDetails.VmSize = $vm.HardwareProfile.VmSize
        $ReportDetails.VirtualNetwork = $nic.IpConfigurations.Subnet.Id.Split("/")[-3]
        $ReportDetails.Subnet = $nic.IpConfigurations.Subnet.Id.Split("/")[-1]
        $ReportDetails.PrivateIpAddress = $nic.IpConfigurations.PrivateIpAddress
        $ReportDetails.NicName = $nic.Name
        $ReportDetails.ApplicationSecurityGroup = $nic.IpConfigurations.ApplicationSecurityGroups.Id
        $ReportDetails.OSDiskName = $vm.StorageProfile.OsDisk.Name
        $ReportDetails.OSDiskSize = $vm.StorageProfile.OsDisk.DiskSizeGB
        $ReportDetails.OSDiskCaching = $vm.StorageProfile.OsDisk.Caching
        $ReportDetails.DataDiskCount = $vm.StorageProfile.DataDisks.Count

        # Walk the data disks in LUN order; use each disk object directly instead of
        # indexing the array by LUN, since LUN numbers are not guaranteed to be contiguous
        $diskIndex = 1
        foreach ($disk in ($vm.StorageProfile.DataDisks | Sort-Object Lun))
        {
            if ($diskIndex -le 5)
            {
                $ReportDetails."DataDisk${diskIndex}Name" = $disk.Name
                $ReportDetails."DataDisk${diskIndex}Size" = $disk.DiskSizeGB
                $ReportDetails."DataDisk${diskIndex}Caching" = $disk.Caching
            }
            $diskIndex++
        }

        $report += $ReportDetails
    }
}

$report | Format-Table VmName, ResourceGroupName, Region, VmSize, VirtualNetwork, Subnet, PrivateIpAddress, OsType, PublicIPAddress, NicName, ApplicationSecurityGroup, OSDiskName, OSDiskSize, DataDiskCount, DataDisk1Name, DataDisk1Size

# Change the output path to suit your environment
$report | Export-Csv -Path "c:\outputs\$reportName" -NoTypeInformation
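Before running the script, it can help to confirm the Az module is installed and that the output folder exists, since Export-Csv does not create missing directories. A small, optional pre-flight sketch using the same paths as above:

# Optional pre-flight: install the Az module if missing and create the output folder
if (-not (Get-Module -Name Az -ListAvailable)) {
    Install-Module -Name Az -Scope CurrentUser
}
New-Item -ItemType Directory -Path "c:\outputs" -Force | Out-Null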

Thursday, 14 December 2023

9. Create an Azure Migrate project using Windows PowerShell (Az module) with user inputs.

# Install and import the Az module (if not already installed)
if (-not (Get-Module -Name Az -ListAvailable)) {
    Install-Module -Name Az -Force -AllowClobber -Scope CurrentUser
}
Import-Module Az

# Use Windows PowerShell to run this script

# Prompt the user for the Azure Migrate project details
$azureMigrateProjectName = Read-Host -Prompt 'Enter Azure Migrate project name'
$subscriptionId = Read-Host -Prompt 'Enter subscription ID'
$region = Read-Host -Prompt 'Enter Azure region for the resource group'
$location = Read-Host -Prompt 'Enter location for the Azure Migrate project (a region supported by Azure Migrate)'
$resourceGroupName = Read-Host -Prompt 'Enter resource group name for the Azure Migrate project'

# Connect to Azure (interactive device login)
Connect-AzAccount -UseDeviceAuthentication

# Set the subscription context
Set-AzContext -Subscription $subscriptionId

# Create a new resource group
New-AzResourceGroup -Name $resourceGroupName -Location $region

# Create the Azure Migrate project
New-AzMigrateProject -Name $azureMigrateProjectName -ResourceGroupName $resourceGroupName -Location $location

Write-Host "Azure Migrate project '$azureMigrateProjectName' has been created successfully!"

# This script was tested with Windows PowerShell 5.1 on Windows 11
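To confirm the project exists afterwards, you can query it back with the Az.Migrate cmdlet (shipped with recent versions of the Az rollup module); a one-line sketch reusing the variables above:

Get-AzMigrateProject -Name $azureMigrateProjectName -ResourceGroupName $resourceGroupName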

 

Thursday, 7 December 2023

8. Create a Standard Load Balancer with Azure CLI


Start Cloud Shell

1.     Click the Cloud Shell icon (>_) in the upper right.

2.     Select Bash.

3.     Click Show advanced settings.

4.     In the Azure portal, click All resources to see which region your resources are located in.

5.     In the Cloud Shell section, change Cloud Shell region to the one your resources are in.

6.     For Storage account, select Use existing. If you have trouble, choose to create a new storage account with a unique name.

7.     For File share, select Create new and give it a name of "fileshare".

8.     Click Create storage.

Set Resource Variables in Cloud Shell

1.     In the Azure portal, click the listed resource group name.

2.     Copy it to your clipboard.

3.     In Cloud Shell, set the RG variable, replacing <RESOURCE_GROUP_NAME> with the name you just copied:

RG="<RESOURCE_GROUP_NAME>"

4.     Set the LOC variable, replacing <REGION> with the one your resources are in (e.g., eastus):

LOC="<REGION>"

5.     In the Azure portal, scroll in the resources list to find your network security group name (it will look something like nsg1-fhbmu) and copy its name.

6.     In Cloud Shell, set the NSG variable, replacing <NETWORK_SECURITY_GROUP_NAME> with the name of your network security group:

NSG="<NETWORK_SECURITY_GROUP_NAME>"

7.     In the Azure portal, scroll in the resources list to find your VNet (it will look something like lab-VNet1).

8.     Click it, and copy its name.

9.     In Cloud Shell, set the VNET variable, replacing <VNET_NAME> with the name of your VNet:

VNET="<VNET_NAME>"

10.  In the Azure portal, on the VNet page, click Subnets in the left-hand menu. We should see a subnet called default.

11.  In Cloud Shell, set the SNET variable:

SNET="default"

Create Network Interfaces for VMs

1.     Create the network interface for the first VM:

az network nic create \
  --resource-group $RG \
  --location $LOC \
  --name myNicVM1 \
  --vnet-name $VNET \
  --subnet $SNET \
  --network-security-group $NSG

It will take a minute or so for the command to complete.

2.     Create the network interface for the second VM:

az network nic create \
  --resource-group $RG \
  --location $LOC \
  --name myNicVM2 \
  --vnet-name $VNET \
  --subnet $SNET \
  --network-security-group $NSG

It will take a minute or so for the command to complete.

Create VMs

1.     In order to install packages on VMs during deployment, we'll create a cloud-init file:

code cloud-init.txt

2.     Enter the following into the file:

#cloud-config
package_upgrade: true
packages:
  - nginx
  - nodejs
  - npm
write_files:
  - owner: www-data:www-data
    path: /etc/nginx/sites-available/default
    content: |
      server {
        listen 80;
        location / {
          proxy_pass http://localhost:3000;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection keep-alive;
          proxy_set_header Host $host;
          proxy_cache_bypass $http_upgrade;
        }
      }
  - owner: azureuser:azureuser
    path: /home/azureuser/myapp/index.js
    content: |
      var express = require('express')
      var app = express()
      var os = require('os');
      app.get('/', function (req, res) {
        res.send('Hello World from host ' + os.hostname() + '!')
      })
      app.listen(3000, function () {
        console.log('Hello world app listening on port 3000!')
      })
runcmd:
  - service nginx restart
  - cd "/home/azureuser/myapp"
  - npm init
  - npm install express -y
  - nodejs index.js

3.     Click the three-dots icon in the upper right corner of the file, and select Save.

4.     Click the three-dots icon in the upper right corner of the file, and select Close Editor.

5.     Create the first VM:

az vm create \
  --resource-group $RG \
  --location $LOC \
  --name myVM1 \
  --nics myNicVM1 \
  --image UbuntuLTS \
  --generate-ssh-keys \
  --custom-data cloud-init.txt \
  --zone 1 \
  --no-wait

6.     Create the second VM:

az vm create \
  --resource-group $RG \
  --location $LOC \
  --name myVM2 \
  --nics myNicVM2 \
  --image UbuntuLTS \
  --generate-ssh-keys \
  --custom-data cloud-init.txt \
  --zone 2 \
  --no-wait

Create a Load Balancer

1.     Create the load balancer's public IP address:

az network public-ip create \
  --resource-group $RG \
  --location $LOC \
  --name myPublicIP \
  --sku Standard

2.     Create the load balancer:

az network lb create \
  --resource-group $RG \
  --location $LOC \
  --name myLoadBalancer \
  --sku Standard \
  --public-ip-address myPublicIP \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool

3.     Add a health probe to the load balancer:

az network lb probe create \
  --resource-group $RG \
  --lb-name myLoadBalancer \
  --name myHealthProbe \
  --protocol tcp \
  --port 80

4.     Create the load balancing rules:

az network lb rule create \
  --resource-group $RG \
  --lb-name myLoadBalancer \
  --name myHTTPRule \
  --protocol tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool \
  --probe-name myHealthProbe \
  --disable-outbound-snat true

5.     Add the first VM to the load balancer pool:

az network nic ip-config address-pool add \
  --address-pool myBackEndPool \
  --ip-config-name ipconfig1 \
  --nic-name myNicVM1 \
  --resource-group $RG \
  --lb-name myLoadBalancer

6.     Add the second VM to the load balancer pool:

az network nic ip-config address-pool add \
  --address-pool myBackEndPool \
  --ip-config-name ipconfig1 \
  --nic-name myNicVM2 \
  --resource-group $RG \
  --lb-name myLoadBalancer

7.     In the Azure portal, navigate to the VM provided with the lab.

8.     Click Networking in the left-hand menu.

9.     Click on the network interface.

10.  Copy its name.

11.  In Cloud Shell, set the NIC variable, replacing <NIC_NAME> with the name you just copied:

NIC="<NIC_NAME>"

12.  Add the lab-provided VM to the load balancer pool:

az network nic ip-config address-pool add \
  --address-pool myBackEndPool \
  --ip-config-name ipconfig1 \
  --nic-name $NIC \
  --resource-group $RG \
  --lb-name myLoadBalancer

13.  Get the public IP of the load balancer:

az network public-ip show \
  --resource-group $RG \
  --name myPublicIP \
  --query [ipAddress] \
  --output tsv

14.  Copy the IP address in the output.

15.  Enter the IP address in a new browser tab. We should see a "Hello World" message from one of the VMs.

16.  Refresh multiple times to see the hostnames of the other VMs. (You may need to close and reopen the browser to see the others.)
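You can also exercise the load balancer from Cloud Shell instead of the browser. A small Bash sketch that reuses the command from step 13; each request should return a "Hello World from host ..." response from one of the pool VMs:

LBIP=$(az network public-ip show \
  --resource-group $RG \
  --name myPublicIP \
  --query ipAddress \
  --output tsv)

# Send ten requests; the responding hostname should vary across the pool
for i in {1..10}; do curl -s "http://$LBIP"; echo; done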

 


12. Creating a Hub and Spoke Network with a DMZ and Allowing Access to Azure Arc and Other Microsoft URLs from the Azure Portal
