Changing the License Type for an existing Virtual Machine Scale Set to use the Azure Hybrid Use Benefit

If you are an enterprise customer who has existing Windows Server licenses that you want to use in Azure, you can take advantage of the Azure Hybrid Use Benefit to bring those licenses to the cloud. The steps are documented for a number of scenarios here: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/hybrid-use-benefit-licensing. However, what if you want to convert to AHUB once you've already deployed your VM Scale Sets? The methods described in the above article do not currently cover how to do this for an existing VMSS deployment.
Here is the PowerShell to make the change:
$rg = "TestVMSS-RG" # change to your resource group name
$VMScaleSetName = "PeteVMSS02" # change to your virtual machine scale set name
$vmss = Get-AzureRmVmss -ResourceGroupName $rg -VMScaleSetName $VMScaleSetName
$vmss.VirtualMachineProfile.LicenseType = "Windows_Server"
Update-AzureRmVmss -ResourceGroupName $rg -VMScaleSetName $VMScaleSetName -VirtualMachineScaleSet $vmss
Update-AzureRmVmssInstance -ResourceGroupName $rg -VMScaleSetName $VMScaleSetName -InstanceId "*"

Programmatically adding elements to an Azure ARM template using PowerShell

One of the great features of Azure ARM templates is template linking, which lets you split your deployment into smaller components. This promotes better reuse as well as less unwieldy template files. However, one limitation today is that it is not easily possible to do conditional logic within your ARM template (see this blog post for one possible method using arrays: https://jodygblog.wordpress.com/2016/05/02/conditional-parameters-for-arm-templates/). That technique does not work in all situations, though. For example, if you wanted a base Windows/Linux template and needed one version that utilizes an availability set and one that does not, you would need to create two versions of the base template. This is definitely not ideal because now you have to make changes to two base templates instead of one.
A solution to this is to use PowerShell's ability to load a JSON file, make changes as required in memory, and then emit the JSON file back to disk. You still have two base template files, but only one of them needs to be edited manually; the other is a generated file.
Using our example of two base Windows templates, with and without an availability set, this section demonstrates how the additional JSON can be added using PowerShell. We would first need variables holding the additional JSON for elements like parameters, resources, and properties:
param
(
    [string]$InputJsonFilePath,
    [string]$OutputJsonFilePath,
    [bool]$IsManagedAvailabilitySet # set to $true for a managed availability set, otherwise $false
)

$availabilitySetParam = @"
{
    "type": "string",
    "metadata":
    {
        "description": "Name of the availability set"
    }
}
"@

$faultDomainParam = @"
{
    "type": "int",
    "defaultValue": 3,
    "metadata": {
        "description": "Fault Domain Count"
    }
}
"@

$updateDomainParam = @"
{
    "type": "int",
    "defaultValue": 5,
    "metadata": {
        "description": "Update Domain Count"
    }
}
"@

$availabilitySetResourceJson = @"
{
    "type": "Microsoft.Compute/availabilitySets",
    "name": "[parameters('availabilitySetName')]",
    "apiVersion": "2016-04-30-preview",
    "location": "[resourceGroup().location]",
    "tags": {
        "displayName": "Availability Set"
    },
    "properties": {
        "platformFaultDomainCount": "[parameters('faultDomainCount')]",
        "platformUpdateDomainCount": "[parameters('updateDomainCount')]",
        "managed": false
    }
}
"@

$availabilitySetVMPropertiesJson = @"
{
    "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
}
"@

 
Next we would load our JSON file into memory and convert each of the JSON variables into JSON objects:
 
$JsonFile = Get-Content $InputJsonFilePath |Out-String
$JsonFile = ConvertFrom-Json $JsonFile

$availabilitySet = ConvertFrom-Json -InputObject $availabilitySetParam
$faultDomain = ConvertFrom-Json -InputObject $faultDomainParam
$updateDomain = ConvertFrom-Json -InputObject $updateDomainParam
$availabilitySetResource = ConvertFrom-Json -InputObject $availabilitySetResourceJson
$availabilitySetProperty = ConvertFrom-Json -InputObject $availabilitySetVMPropertiesJson

 
Now this is where the real magic happens. We use Add-Member to add the new parameters:
 
$JsonFile.parameters | Add-Member -Name "availabilitySetName" -MemberType NoteProperty -Value $availabilitySet
$JsonFile.parameters | Add-Member -Name "faultDomainCount" -MemberType NoteProperty -Value $faultDomain
$JsonFile.parameters | Add-Member -Name "updateDomainCount" -MemberType NoteProperty -Value $updateDomain

 
For elements in the JSON file that are arrays, we use '+' to add an element to the array:
$JsonFile.resources = $JsonFile.resources + $availabilitySetResource
 
We now add a "dependsOn" entry as well as an "availabilitySet" property to each virtual machine:
foreach ($vm in $JsonFile.Resources)
{
    if ($vm.type -eq 'Microsoft.Compute/virtualMachines')
    {
        $vm.dependsOn = $vm.dependsOn + "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
        $vm.properties | Add-Member -Name "availabilitySet" -MemberType NoteProperty -Value $availabilitySetProperty
    }

    if ($vm.type -eq 'Microsoft.Compute/availabilitySets')
    {
        if ($IsManagedAvailabilitySet)
        {
            $vm.properties.managed = $true
        }
        else
        {
            $vm.properties.managed = $false
        }
    }
}

Lastly we write the JSON file back to disk. Note that by default ConvertTo-Json has a depth of 2, which will likely not be sufficient. Also, it escapes certain characters by default.
ConvertTo-Json -Depth 10 -InputObject $JsonFile | Out-File $OutputJsonFilePath -Force
To remove the escape characters (which is purely for readability), you can reload the file and do a string replacement:

$jsonFile = Get-Content $OutputJsonFilePath | Out-String
$jsonFile = $jsonFile.Replace('\u0027', "'")
$jsonFile | Out-File -FilePath $OutputJsonFilePath -Force

 
Well, that's all there is to it. You could use this same technique in other ways, such as having a base template with or without managed disks, copying parameters to multiple files, etc.
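For readers who prefer it, the same load-modify-emit technique can be sketched in Python. This is only a sketch: the function name is mine, and the parameter and resource shapes mirror the ARM JSON snippets above.

```python
import json

def add_availability_set(template: dict, is_managed: bool) -> dict:
    """Add an availability set parameter, resource, and per-VM reference to an ARM template dict."""
    # New parameter (a dict key plays the role of an Add-Member NoteProperty name)
    template.setdefault("parameters", {})["availabilitySetName"] = {
        "type": "string",
        "metadata": {"description": "Name of the availability set"},
    }
    # New resource appended to the resources array (the '+' step in the PowerShell version)
    template.setdefault("resources", []).append({
        "type": "Microsoft.Compute/availabilitySets",
        "name": "[parameters('availabilitySetName')]",
        "apiVersion": "2016-04-30-preview",
        "location": "[resourceGroup().location]",
        "properties": {"managed": is_managed},
    })
    # Wire each VM resource to the availability set
    for res in template["resources"]:
        if res.get("type") == "Microsoft.Compute/virtualMachines":
            res.setdefault("dependsOn", []).append(
                "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
            )
            res.setdefault("properties", {})["availabilitySet"] = {
                "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
            }
    return template
```

Load the template with json.load, call the function, and write it back with json.dump(..., indent=2). A nice side effect of Python here is that json.dumps does not escape apostrophes, so no \u0027 cleanup pass is needed.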
 
Updated: Corrected API version in availability set resource JSON to 2016-04-30-preview.
 
 
Here is the complete PowerShell for your convenience:
param
(
    [string]$InputJsonFilePath,
    [string]$OutputJsonFilePath,
    [bool]$IsManagedAvailabilitySet # set to $true for a managed availability set, otherwise $false
)

$availabilitySetParam = @"
{
    "type": "string",
    "metadata":
    {
        "description": "Name of the availability set"
    }
}
"@

$faultDomainParam = @"
{
    "type": "int",
    "defaultValue": 3,
    "metadata": {
        "description": "Fault Domain Count"
    }
}
"@

$updateDomainParam = @"
{
    "type": "int",
    "defaultValue": 5,
    "metadata": {
        "description": "Update Domain Count"
    }
}
"@

$availabilitySetResourceJson = @"
{
    "type": "Microsoft.Compute/availabilitySets",
    "name": "[parameters('availabilitySetName')]",
    "apiVersion": "2016-04-30-preview",
    "location": "[resourceGroup().location]",
    "tags": {
        "displayName": "Availability Set"
    },
    "properties": {
        "platformFaultDomainCount": "[parameters('faultDomainCount')]",
        "platformUpdateDomainCount": "[parameters('updateDomainCount')]",
        "managed": false
    }
}
"@

$availabilitySetVMPropertiesJson = @"
{
    "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
}
"@

$JsonFile = Get-Content $InputJsonFilePath | Out-String
$JsonFile = ConvertFrom-Json $JsonFile

$availabilitySet = ConvertFrom-Json -InputObject $availabilitySetParam
$faultDomain = ConvertFrom-Json -InputObject $faultDomainParam
$updateDomain = ConvertFrom-Json -InputObject $updateDomainParam
$availabilitySetResource = ConvertFrom-Json -InputObject $availabilitySetResourceJson
$availabilitySetProperty = ConvertFrom-Json -InputObject $availabilitySetVMPropertiesJson

$JsonFile.parameters | Add-Member -Name "availabilitySetName" -MemberType NoteProperty -Value $availabilitySet
$JsonFile.parameters | Add-Member -Name "faultDomainCount" -MemberType NoteProperty -Value $faultDomain
$JsonFile.parameters | Add-Member -Name "updateDomainCount" -MemberType NoteProperty -Value $updateDomain
$JsonFile.resources = $JsonFile.resources + $availabilitySetResource

foreach ($vm in $JsonFile.Resources)
{
    if ($vm.type -eq 'Microsoft.Compute/virtualMachines')
    {
        $vm.dependsOn = $vm.dependsOn + "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
        $vm.properties | Add-Member -Name "availabilitySet" -MemberType NoteProperty -Value $availabilitySetProperty
    }

    if ($vm.type -eq 'Microsoft.Compute/availabilitySets')
    {
        if ($IsManagedAvailabilitySet)
        {
            $vm.properties.managed = $true
        }
        else
        {
            $vm.properties.managed = $false
        }
    }
}

ConvertTo-Json -Depth 10 -InputObject $JsonFile | Out-File $OutputJsonFilePath -Force
$jsonFile = Get-Content $OutputJsonFilePath | Out-String
$jsonFile = $jsonFile.Replace('\u0027', "'")
$jsonFile | Out-File -FilePath $OutputJsonFilePath -Force

 

Move the Azure temporary disk to a different drive letter on Windows Server

On occasion you may have a need to move the Azure temporary drive to a different drive letter. Azure by default is set to use the D drive. This drive letter configuration may conflict with existing scripts or company OS installation standards. I’ve created an ARM template that uses PowerShell DSC to allow you to move the drive letter. It performs the following steps:
1) Disables the Windows Page File and reboots the VM
2) Changes the drive letter from the D drive to a drive letter you specify in the ARM template parameters file
3) Re-enables the Windows page file and reboots the VM
This project is in GitHub here: https://github.com/perktime/MoveAzureTempDrive. To use it, modify the azuredeploy.parameters.json file with your vmName and your desired tempDriveLetter:
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "vmName": {
            "value": "<put_your_existing_vm_name_here>"
        },
        "assetLocation": {
            "value": "https://petedscutil.blob.core.windows.net/scripts"
        },
        "tempDriveLetter": {
            "value": "Z"
        }
    }
}

Optionally, you can copy the MoveAzureTempDrive.ps1.zip DSC file to your own Azure storage account and modify the assetLocation parameter as well. Also, if you have an existing DSC extension you will have to remove it before deploying this.
If you are interested in how this works, here is the explanation (note: assuming you understand how PowerShell DSC works):
To disable the Windows page file, we use "gwmi win32_pagefilesetting", which uses WMI to first check whether the page file is enabled. If it is, we use this script to delete it and restart the VM:
$pf = gwmi win32_pagefilesetting
$pf.Delete()
Restart-Computer -Force

Once the VM restarts, the PowerShell DSC module will then change the drive letter to your desired drive and then re-enable the page file and reboot:
Get-Partition -DriveLetter "D" | Set-Partition -NewDriveLetter $TempDriveLetter
$TempDriveLetter = $TempDriveLetter + ":"
# re-enable the page file on the new drive
$drive = Get-WmiObject -Class win32_volume -Filter "DriveLetter = '$TempDriveLetter'"
Set-WMIInstance -Class Win32_PageFileSetting -Arguments @{ Name = "$TempDriveLetter\pagefile.sys"; MaximumSize = 0; }

Restart-Computer -Force

Install SQL Server onto an Azure VM using PowerShell DSC

The Azure marketplace has quite a few prebuilt virtual machines with SQL Server already installed, from SQL Server 2008 R2 through SQL Server 2016. You can also use the BYOL versions to provide your own SQL Server license if you prefer. However, you may still wish to have more control over the installation process, such as the SQL Server instance name, install location, installed features, etc. I've created a GitHub project here:
https://github.com/perktime/InstallSQLServerByDSCForAzure
These ARM templates will create a new base Windows VM using an Azure marketplace image, domain join the VM into an existing Windows AD domain and use PowerShell DSC to install SQL Server from Azure Files.
The DSC uses the xSQLServer PowerShell module from here: https://github.com/PowerShell/xSQLServer where you will also find additional documentation. Also note that currently not all potential parameters for SQL Server setup are implemented in the ARM template. You could either add them yourself to the ARM templates and SQLInstall.ps1 file, or let me know and I might update them.
Prerequisites
Before you can use this solution, you will need to create a storage account (or use an existing one) and enable Azure Files for it.
1) Go to the Azure portal and create a new storage account:
2) Once the storage account is done creating, you will need to create a file share for it. Click Files and then click “+ File Share”
3) Supply a name and a quota and click Create
4) You will then need to download a version of SQL Server, create a folder for it in Azure Files and copy the SQL Server install files into that folder. You may find it quickest to do this right from an existing Azure VM in the same region to access the Microsoft VLSC site or MSDN or download a copy of SQL Server Developer Edition here: https://www.microsoft.com/en-us/sql-server/sql-server-editions-developers. Note that if you get the ISO, you will need to extract the files out of the ISO as this template currently does not support directly installing from the ISO.
5) You will need to take note of the access key for this storage account as well as the Azure Files URL so that you can supply them to the azuredeploy.parameters.json file. If you click “Connect” on the file share, you can see the Azure Files UNC path as well as click the link for the access keys:
6) Next, you will need to copy the SQLinstall.ps1.zip and DeployWindowsVM.json files into your Azure Blob storage account (note: not in Azure Files). Using a tool like Azure Storage Explorer, copy these 2 files into a blob container that has public read access enabled:
Updating the azuredeploy.parameters.json file
The azuredeploy.parameters.json file has a number of parameters that you will need to update such as:
vmName: The computer name of the VM
vmSize: The desired Azure VM size and series. It is recommended that you use a series that supports SSD storage such as DS, GS or FS.
assetLocation: The location in Azure blob storage where the SQLInstall.ps1.zip and DeployWindowsVM.json are deployed into an Azure blob storage container with public read access.
AdminUserName: The local Windows administrator account
AdminPassword: The local Windows administrator account password
DomainUserName: The domain username that has domain join permissions
DomainPassword: The domain user’s password
existingDomainName: The name of the Windows domain you will be joining
existingOUPath (optional): The OU where you want the computer account placed in Active Directory
existingVirtualNetworkName: The existing Azure virtual network where this VM will be placed
existingVirtualNetworkResourceGroup: The existing Azure virtual network resource group
storageAccountUri: The existing Azure blob storage account for this VM’s disks. Premium storage is recommended for SQL Server
bootdiagnosticsstorageAccountUri: The existing Azure blob storage account for boot diagnostics. Must be standard storage
windowsOSVersion: The version of Windows Server to use for the VM. Note that not all versions of SQL Server may be supported on all versions of Windows Server
subnetName: The existing subnet name where this VM will be placed
FileShareUserName: The Azure Files username. It is the same as the first part of the Azure Files UNC path (e.g. if your Azure Files UNC path is \\peteazurefiles.file.core.windows.net then the username would be peteazurefiles)
FileSharePassword: The Azure Files access key.
InstallDir: The folder where the SQL Server files are located (e.g. sql2016). It is not the full path
PackagePath: The path to the Azure Files directory where the SQL Server install files are located (e.g. \\peteazurefiles.file.core.windows.net\installs). It is not the full path
location: The Azure data center location you wish to use
SQLAgentUserName: The domain\username for the SQL Agent account
SQLAgentPassword: The password for the SQL Agent account
SQLSAAccountPassword: The SQL SA Account password
SQLServiceUserName: The domain\username for the SQLService account
SQLServicePassword: The password for the SQLService account
Features: The installed features for SQL Server (SQLENGINE,FULLTEXT). Note that not all versions of SQL Server support the same features.
UpdateSource: This is the location where SQL Server setup searches for product updates. Use “MU” if you want to have SQL Server use Windows Update.
UpdateEnabled: This determines if SQL Server should update itself or not. Can be true or false.
InstallSharedDir: The installation path for shared SQL Files
InstallSharedWOWDir: The installation path for x86 shared SQL files
SQLInstanceName: The name of the SQL Instance
SQLInstanceDir: The installation path for the SQL instance files
SecurityMode: The SQL Security mode (either Windows or SQL). SQL is also known as Mixed Mode
SQLSysAdminAccounts: Array of accounts to be made SQL administrators.
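With this many values to fill in, a missing parameter is an easy mistake. Below is a small hedged Python check you could run against your azuredeploy.parameters.json before deploying; the required-name set is a subset drawn from the list above, and the function name is mine.

```python
import json

# Subset of the parameter names described above; extend to the full list as needed.
REQUIRED = {
    "vmName", "vmSize", "assetLocation", "AdminUserName", "AdminPassword",
    "existingDomainName", "existingVirtualNetworkName", "subnetName",
    "SQLInstanceName", "Features",
}

def missing_parameters(parameters_json: str) -> set:
    """Return the required parameter names absent from a parameters file's JSON text."""
    doc = json.loads(parameters_json)
    return REQUIRED - set(doc.get("parameters", {}))
```

If the returned set is non-empty, fix the parameters file before starting the deployment; this is much faster than waiting for the deployment itself to fail.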
 
Troubleshooting
In the event that your deployment fails because of an invalid parameter, Azure may not provide a helpful error message. If this occurs, your best bet is to look at the SQL Server setup log file (e.g. "C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\Log\Summary.txt").

Step by Step: how to resize a Linux VM OS disk in Azure (ARM)

Update 06/18/2018: This article has been superseded by this one from Azure support: https://blogs.msdn.microsoft.com/linuxonazure/2017/04/03/how-to-resize-linux-osdisk-partition-on-azure

The default OS disk size for most Linux distros in Azure is 30GB. While Linux makes it easy to add other disks as mount points, you may wish to increase the size of the OS disk using the steps in this article.
Here's what you need to do. I used a CentOS 6.8 Linux VM from the Azure Marketplace in this example; its default filesystem is ext4. With CentOS/RHEL 7.x, the default filesystem is XFS. On Ubuntu it is not necessary to do steps 2-11, as it automatically resizes the disk on boot.
Note: Before proceeding it’s highly recommended that you backup your Azure VM first. You can do this using Azure Backup or use AzCopy to make a copy of your VHD.
1) Resize the OS disk using these PowerShell cmdlets or the Azure CLI. The VM needs to be in the stopped (deallocated) state to run these commands.
PowerShell:
$rg = "YourResourceGroupName"
$vmName = "YourVMName"
$vm = Get-AzureRmVM -ResourceGroupName $rg -Name $vmName
$vm.StorageProfile.OsDisk.DiskSizeGB = 127  # change the size as required
Update-AzureRmVM -ResourceGroupName $rg -VM $vm

Azure CLI:
az vm update --resource-group YourResourceGroupName --name YourVMName --set storageProfile.osDisk.diskSizeGB=1024
 
2) Start your Linux VM and log in over SSH. If you check with "df -h", you will see the OS disk is still 30GB.
3) Run this command: sudo fdisk /dev/sda
4) Type “u” to change the units to sectors.
5) Type “p” to list the current partition details. Note the starting sector (e.g. 2048).
6) Once you are in fdisk, delete the partition (note: you aren’t deleting the data, just altering the partition table). Type “d” and then select the partition (if required as it will choose partition 1 if it’s the only partition).
7) Create a new partition with “n”. Type p to create a primary partition. Type 1 to create the first partition (or another partition number, if required). Use the same starting sector from step 5 and use the desired end sector or accept the default end sector to use the entire disk.
8) Type "a" and select partition 1 to mark the boot partition as active. Type "p" to ensure all settings are correct.
9) Write the partition with “w”. You will get a warning that says: WARNING: Re-reading the partition table failed with error 16: Device or resource busy. This is normal.
10) Reboot using “sudo reboot”
11) Once the VM is up and running, login to your Azure VM using SSH and type "sudo resize2fs /dev/sdaX" to resize the filesystem for CentOS/RHEL 6.x (where X is the partition number you created in step 7). In CentOS/RHEL 7.x the command is "xfs_growfs -d /dev/sdaX". This may take some time to complete.
12) Verify the new size with df -h
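As a sanity check on the fdisk math in steps 5-7, the end sector for a desired partition size can be computed from the starting sector. This sketch assumes the common 512-byte sector size, which you should confirm against fdisk's own output.

```python
# Sector arithmetic for the fdisk steps above.
SECTOR_SIZE = 512  # bytes per sector; an assumption -- verify with fdisk's unit line

def end_sector(start_sector: int, size_gib: int) -> int:
    """Last sector of a partition that starts at start_sector and spans size_gib GiB."""
    sectors = (size_gib * 1024 ** 3) // SECTOR_SIZE
    return start_sector + sectors - 1
```

For example, for the 127GB disk resized in step 1 with a partition starting at sector 2048, end_sector(2048, 127) gives the last sector to supply to fdisk; accepting fdisk's default end sector achieves the same thing.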
 
Now go and enjoy your new bigger OS disk!
Updated 1/3/2017: Thanks to Terry Charles for noting that step 8 to mark the boot partition was inadvertently omitted. Also, thanks to rhelguy and Sherif Adel for the correct resize command for CentOS/RHEL 7.x.

Don't disable the IP Helper Windows service in your Windows VMs running in Azure

I ran into a problem with a customer while we were trying to deploy Windows VMs in Azure. We had SysPrep'd and generalized the VM and captured the image using PowerShell, following the steps in this article: https://msdn.microsoft.com/en-us/library/mt619423.aspx. We then deployed the VM using an ARM template. However, after roughly 90 minutes, the deployment would fail with this error:
statusCode: Conflict
statusMessage: {"status":"Failed","error":{"code":"ResourceDeploymentFailure","message":"The resource operation completed with terminal provisioning state 'Failed'.","details":[{"code":"OSProvisioningTimedOut","message":"OS Provisioning for VM 'PeteVM01' did not finish in the allotted time. The VM may still finish provisioning successfully. Please check provisioning state later."}]}}
And the portal showed this:
 
Even though the deployment failed, I could still RDP into the VM just fine. The fix turned out to be quite simple: the IP Helper service had been disabled, and setting it back to Automatic fixed the issue.
Note that I had to do this before running sysprep and deploying the image with the ARM template; changing it after deployment did not work.
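As an aside, nested deployment errors like the statusMessage above are easier to read programmatically. Here is a small sketch that pulls out the inner error codes; the function name is mine.

```python
import json

def inner_error_codes(status_message: str) -> list:
    """Extract the detailed error codes from an ARM deployment statusMessage JSON string."""
    doc = json.loads(status_message)
    details = doc.get("error", {}).get("details", [])
    return [d.get("code") for d in details]
```

Running this against the message above surfaces OSProvisioningTimedOut directly, instead of leaving you to eyeball the raw JSON.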

Run IOMeter on an Azure Linux VM with Azure Premium Storage

I had to do some IO testing for a customer with Azure Premium Storage. I was able to get some numbers using this command:
dd if=/dev/zero of=test.dat bs=1M count=10000; rm -f test.dat
However, wouldn’t it be cool if we could use the same IOmeter as Windows users in Linux? Well, it turns out this is actually possible!
I first created a 4 disk RAID 0 array on my DS3 size VM using these instructions: http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-configure-raid/
Once I had my RAID array, I then followed the instructions to add GNOME to my Linux VM: http://blogs.msdn.com/b/cloud_solution_architect/archive/2015/05/02/remote-desktop-to-your-linux-azure-virtual-machine.aspx
I used CentOS 7.1 from the Azure VM Gallery. Unfortunately, it turns out that you can't just "sudo yum install wine" to get Windows application support, because IOMeter is a 32-bit Win32 application and the Wine packages for CentOS 7.1 only support 64-bit Windows apps out of the box.
This Wiki article talks about how to install IOMeter on Linux: http://www.linuxintro.org/wiki/Iometer. However, there are a few issues with it. Here’s what I did:
wget http://downloads.sourceforge.net/project/iometer/iometer-stable/2006-07-27/iometer-2006_07_27.linux.i386-bin.tgz
tar xvzf iometer-2006_07_27.linux.i386-bin.tgz
cd iometer-2006_07_27.linux.i386-bin
cd src
chmod +x dynamo
sudo yum install ld-linux.so.2
sudo yum install libstdc++.so.6
sudo ./dynamo
This guide talks about how to install Wine on CentOS 6 (also works on 7.1): http://stackoverflow.com/questions/20971960/the-right-way-to-install-wine-on-centos-6-64bit. Here’s what I did:
1) Open another terminal window
2) Type this in the command window to download the Wine source:
wget http://downloads.sourceforge.net/project/wine/Source/wine-1.7.42.tar.bz2
3) Type these commands in the terminal window:
sudo yum -y groupinstall 'Development Tools'
sudo yum -y install libX11-devel freetype-devel
sudo yum install alsa-lib-devel.i686 libsndfile-devel.i686 readline-devel.i686 glib2.i686 glibc-devel.i686 libgcc.i686 libstdc++-devel.i686 pulseaudio-libs-devel.i686 cmake portaudio-devel.i686 openal-soft-devel.i686 audiofile-devel.i686 freeglut-devel.i686 lcms-devel.i686 libieee1284-devel.i686 openldap-devel.i686 unixODBC-devel.i686 sane-backends-devel.i686 fontforge libgphoto2-devel.i686 isdn4k-utils-devel.i686 mesa-libGL-devel.i686 mesa-libGLU-devel.i686 libXxf86dga-devel.i686 libXxf86vm-devel.i686 giflib-devel.i686 cups-devel.i686 gsm-devel.i686 libv4l-devel.i686 fontpackages-devel ImageMagick-devel.i686 openal-soft-devel.i686 libX11-devel.i686 docbook-utils-pdf libtextcat tex-cm-lgc
sudo yum install alsa-lib-devel audiofile-devel.i686 audiofile-devel cups-devel.i686 cups-devel dbus-devel.i686 dbus-devel fontconfig-devel.i686 fontconfig-devel freetype.i686 freetype-devel.i686 freetype-devel giflib-devel.i686 giflib-devel lcms-devel.i686 lcms-devel libICE-devel.i686 libICE-devel libjpeg-turbo-devel.i686 libjpeg-turbo-devel libpng-devel.i686 libpng-devel libSM-devel.i686 libSM-devel libusb-devel.i686 libusb-devel libX11-devel.i686 libX11-devel libXau-devel.i686 libXau-devel libXcomposite-devel.i686 libXcomposite-devel libXcursor-devel.i686 libXcursor-devel libXext-devel.i686 libXext-devel libXi-devel.i686 libXi-devel libXinerama-devel.i686 libXinerama-devel libxml2-devel.i686 libxml2-devel libXrandr-devel.i686 libXrandr-devel libXrender-devel.i686 libXrender-devel libxslt-devel.i686 libxslt-devel libXt-devel.i686 libXt-devel libXv-devel.i686 libXv-devel libXxf86vm-devel.i686 libXxf86vm-devel mesa-libGL-devel.i686 mesa-libGL-devel mesa-libGLU-devel.i686 mesa-libGLU-devel ncurses-devel.i686 ncurses-devel openldap-devel.i686 openldap-devel openssl-devel.i686 openssl-devel zlib-devel.i686 pkgconfig sane-backends-devel.i686 sane-backends-devel xorg-x11-proto-devel glibc-devel.i686 prelink fontforge flex bison libstdc++-devel.i686 pulseaudio-libs-devel.i686 gnutls-devel.i686 libgphoto2-devel.i686 openal-soft-devel openal-soft-devel.i686 isdn4k-utils-devel.i686 gsm-devel.i686 samba-winbind libv4l-devel.i686 cups-devel.i686 libtiff-devel.i686 gstreamer-devel.i686 gstreamer-plugins-base-devel.i686 gettext-devel.i686 libmpg123-devel.i686
mkdir wine64
cd wine64
../wine-1.7.42/configure --enable-win64
make
cd ..
mkdir wine32
cd wine32
../wine-1.7.42/configure --with-wine64=../wine64
make
sudo make install
cd ../wine64
sudo make install
cd ../wine32
wget http://downloads.sourceforge.net/project/iometer/iometer-stable/2006-07-27/iometer-2006.07.27.win32.i386-setup.exe
wine iometer-2006.07.27.win32.i386-setup.exe
cd "$HOME/.wine/drive_c/Program Files (x86)/Iometer.org/Iometer 2006.07.27"
sudo wine Iometer.exe
Once IOMeter launched, I selected my RAID disk for each of the workers:
With a 4K Read Access Specification for each worker thread: (Note: Host cache for each data disk was set to none; these are not official benchmarks and are specific to my environment)
This is actually a little better than the DS3 VM specification:
The same test on a scaled up DS 13 with the same 4 disk RAID 0 array:

Remote Desktop to your Linux Azure Virtual Machine

If you've ever wished you could get a GUI experience with your Azure Linux VMs, here's how you can do it. While I'm not suggesting you should do this for production VMs that are running server workloads, there are times when it could be useful to get a full GUI with Linux. If you are on board, here's what you need to do.
Note: If you want you could just follow the steps for getting VNC installed and be done. However, being able to use an RDP client from any Windows machine without installing anything could be more convenient.
I used CentOS 7.1 from the Azure gallery but other RedHat based Linux distros will probably work (e.g. Oracle Linux)
1) Login to your Linux VM
2) Install the GNOME Desktop using this command:
sudo yum groupinstall "GNOME Desktop" "Graphical Administration Tools"
This will take several minutes
3) Install TigerVNC:
sudo yum install tigervnc-server xorg-x11-fonts-Type1
4) Copy the vncserver.service file:
sudo cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
5) Using something like vi, edit /etc/systemd/system/vncserver@:1.service. Look for the <USER> tags in the file and replace with your Linux username.
# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver %i"
PIDFile=/home/<USER>/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'

[Install]
WantedBy=multi-user.target
6) If you are running a firewall, you may need to open the ports we will need:
firewall-cmd --permanent --zone=public --add-port=5901/tcp
firewall-cmd --permanent --zone=public --add-port=3389/tcp
firewall-cmd --reload
7) Install XRDP using these commands:
sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
sudo rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
sudo yum install xrdp
sudo chcon -t bin_t /usr/sbin/xrdp*
sudo systemctl start xrdp.service
sudo systemctl enable xrdp.service
sudo systemctl start xrdp-sesman.service
8) Start VNCServer
vncserver
You will get prompted to enter a VNC password
9) Verify that VNC Server and XRDP are running with "netstat -ant".
10) Next, add the endpoints for RDP and VNC to your Linux VM. It's probably a good idea to use ACLs to restrict access to a particular remote subnet (see this: http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-set-up-endpoints/). Go to your Linux VM in the Azure Management Portal and click Endpoints. Add an endpoint for RDP on port 3389 and one for VNC on port 5901. I picked a random public port for RDP (you could do the same for VNC).
11) At this point you can test connectivity using a VNC viewer.
12) Next, try a Remote Desktop Connection.
Success!
13) (optional) If you don't need VNC exposed externally, you can delete the Azure endpoint and just use RDP