Before you even start putting together a plan for migrating an environment to Azure, you’ll want to know some of the specifics of that environment, for example:
- VM compute – vCPU and RAM footprint
- Disk performance usage – peak read/write IOPS and data churn
You’ll also want to know things like:
- Required on-premises infrastructure footprint to facilitate replication
- How much bandwidth is required to meet your chosen RPO (Recovery Point Objective)
- VM replication eligibility – e.g. Server 2008 32-bit is not supported in Azure
- Indicative costs for replication to Azure, including scheduled DR Drills
Enter the Azure Site Recovery (ASR) Deployment Planner Tool. In this guide we’re going to go through the step-by-step process for both profiling our on-premises environment and then generating a report based on that information…so let’s get to it.
In our example we’ll be profiling an existing VMware environment and checking its viability for replication to Azure, as such we need to be aware of the following support matrix:
As the environment I’ll be replicating meets those criteria, let’s move on to the prerequisites 🙂
As all we’ll be running through here is the profiling of a small VMware environment and subsequent report generation, I’ve set up the following VM to do it all from:
- 2 vCPU and 4 GB RAM – don’t need anything greater for this guide
- .NET Framework requirements met
- VMware PowerCLI requirements met
NOTE: It HAS to be PowerCLI version 6.0.x, as anything greater will not work…I tested it 🙂
- C++ Requirements met
- Internet access to Azure
- We don’t need much in the way of disk space, given the small number of VMs being profiled and the short duration of the profiling run
- Connectivity on port 443 to our vCenter server
- Read-Only user credentials for our vCenter server
- Running Excel 2016
- Downloaded the ASR Deployment Planner Tool and unzipped it to C:\Temp\ASRDeploymentPlanner\
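Before moving on, here's an optional quick sanity check for a couple of the items above — a minimal sketch assuming the lab vCenter address (10.1.12.131) used later in this guide; swap in your own address as needed:

```powershell
# Quick prerequisite sanity checks (run from an elevated console).
# 10.1.12.131 is the lab vCenter used later in this guide - change for your environment.

# Confirm the PowerCLI version is 6.0.x
Get-PowerCLIVersion

# Confirm we can reach vCenter on port 443
Test-NetConnection -ComputerName 10.1.12.131 -Port 443 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded
```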
OK, so that should be it for prerequisites, let’s get on to the fun stuff.
Profiling Our VMware Environment
Assuming all the above conditions are met, we should now be able to crack on and kick off a profiling run, with that in mind:
- Launch an elevated PowerCLI console
- Now we’ll want to check the current “Execution Policy”, if it’s set to “Restricted” change it to “AllSigned”
Get-ExecutionPolicy
Set-ExecutionPolicy AllSigned
- Navigate to your ASR Deployment Planner Tool Directory, (C:\Temp\ASRDeploymentPlanner if you’re following along)
- Create a directory named “Profiling_Data”
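If you'd rather do both of those steps from the console, something like this works (assuming the C:\Temp\ASRDeploymentPlanner path from the prerequisites):

```powershell
# Move into the tool directory and create the profiling data folder
Set-Location -Path 'C:\Temp\ASRDeploymentPlanner'
New-Item -ItemType Directory -Name 'Profiling_Data' -Force
```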
Now we’ll need to connect to our vCenter server:
- Within your PowerCLI console, run the PowerShell below (enter appropriate credentials, I’m using “Administrator” as it’s a lab environment)
Connect-VIServer -Server "IP or FQDN of vCenter Server" -User "Username@vCenterDomain" -Password "YourPassword"
Now we need to get a list of VMs running on our VMware environment…but we only want the names:
- Within your PowerCLI console, run the PowerShell below (update the Out-File path for your environment if required)
(Get-VM).Name | Out-File C:\Temp\ASRDeploymentPlanner\Profiling_Data\VMNamesList.txt -Verbose
The resulting file should look something like this:
Now that we’ve got everything we need, let’s kick off a profiling task. For the purposes of this guide we’ll be configuring this profiling as below:
- Operation = StartProfiling
- Virtualization = VMware
- Directory = “C:\Temp\ASRDeploymentPlanner\Profiling_Data\vCenter_ProfiledData08042019_1”
NOTE: The tool will create that last directory for us; the naming convention I’ve gone with here is: Target Server\Directory Contents\Date_Run number for that day
- Server = vCenter IP or FQDN
- User = vCenter user with appropriate permissions
- VMListFile = “C:\Temp\ASRDeploymentPlanner\Profiling_Data\VMNamesList.txt” (Path to the file we created above)
- NoOfHoursToProfile = 2
In reality you’d want to run this profiling task for at least 15 – 30 days to get a more accurate picture but for the purposes of this guide we’ll be running it for 2 hours as I can’t wait that long.
You can find a full list of the tool’s parameters in the official documentation HERE
For our purposes, the full command looks like this:
- Within your PowerCLI console, run the PowerShell below (updating paths as required):
Set-Location -Path C:\Temp\ASRDeploymentPlanner

.\ASRDeploymentPlanner.exe `
-Operation StartProfiling `
-Virtualization VMware `
-Directory "C:\Temp\ASRDeploymentPlanner\Profiling_Data\vCenter_ProfiledData08042019_1" `
-Server 10.1.12.131 `
-User firstname.lastname@example.org `
-VMListFile "C:\Temp\ASRDeploymentPlanner\Profiling_Data\VMNamesList.txt" `
-NoOfHoursToProfile 2
NOTE: -Directory will be created if it doesn’t already exist.
NOTE: You will be prompted for the vCenter password when you execute the above code.
The tool will now:
- Check prerequisites
- Check connectivity to vCenter/ESXi
- Validate VMs it’ll be profiling
Assuming you did everything correctly, a happy profiling job should look like this:
Notice the number of VMs being profiled lines up with what’s in our VMNamesList.txt file
NOTE: As highlighted in the above screenshot, the PowerCLI console MUST be left open until the profiling job has completed, or it will fail and need to be redone. In the event of a server crash or power down, the profiling data up to that point is preserved, possibly minus the last 15 minutes; you would then restart the profiling task for the remainder of your required window.
Time to go grab a coffee and get on with some other work…back in two hours 🙂
Generate Deployment Planner Report
Ahhh…coffee. With the profiling task complete, your target directory should look something like this:
There should be a .CSV for each of the VMs profiled and a couple of .XML files used during the generation of the report…which we’ll do now.
As mentioned above, I prepared a machine that I can run both the profiling and report generation tasks on, as this simplifies things from a guide perspective. As such I’ll be running the command locally, but you can target a UNC path if you’re generating the report from a different machine. For full details on how to do that, look HERE
The report generation task will make use of the files in the above directory to output a macro enabled Excel spreadsheet (XLSM file).
As with the profiling task, here’s a quick overview of what parameters we’ll be providing to the CmdLet:
- Operation = GenerateReport
- Virtualization = VMware
- Directory = “C:\Temp\ASRDeploymentPlanner\Profiling_Data\vCenter_ProfiledData08042019_1”
- Server = vCenter IP or FQDN
- User = vCenter user with appropriate permissions
- VMListFile = “C:\Temp\ASRDeploymentPlanner\Profiling_Data\VMNamesList.txt” (Same file we created for the profiling task)
NOTE: NOT the VMdetailList.xml file that was created by the profiling task
- DesiredRPO = 5
- TargetRegion = UKSouth
- SubscriptionId = “We’re providing this to get more accurate indicative pricing”
- Currency = GBP
A few things worth noting:
- By default the report will assume Managed Disks but this can be overridden by specifying the -UseManagedDisks parameter (Accepts Yes/No input)
- When using managed disks we can ignore the Storage Account layout section of the report, more on that later.
- We specify both the -SubscriptionId and -Currency parameters so that our report outputs the most accurate indicative pricing
- As we specified the -SubscriptionId parameter, we’ll be asked to enter Azure AD credentials with access to that tenant/subscription when we execute the command
- By default the report assumes 30% growth, this can be overridden using the -GrowthFactor parameter.
- You can run as many different reports as you want using the same profiling data e.g. you need to know the indicative costs if you deploy to the CentralUS region with a growth rate of 50% and an RPO of 15 minutes…just change the parameter values.
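As a sketch of what that re-run might look like (assuming the same profiling directory as before and the documented -GrowthFactor parameter; adjust values and paths for your environment):

```powershell
# Re-run the report against the same profiling data with different assumptions:
# CentralUS region, 50% growth, 15 minute RPO. No re-profiling required.
.\ASRDeploymentPlanner.exe `
-Operation GenerateReport `
-Virtualization VMware `
-Directory "C:\Temp\ASRDeploymentPlanner\Profiling_Data\vCenter_ProfiledData08042019_1" `
-Server 10.1.12.131 `
-VMListFile "C:\Temp\ASRDeploymentPlanner\Profiling_Data\VMNamesList.txt" `
-DesiredRPO 15 `
-TargetRegion CentralUS `
-GrowthFactor 50 `
-Currency USD
```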
The full command looks like:
- Within your PowerCLI console, run the following PowerShell (update parameters as required)
.\ASRDeploymentPlanner.exe `
-Operation GenerateReport `
-Virtualization VMware `
-Directory "C:\Temp\ASRDeploymentPlanner\Profiling_Data\vCenter_ProfiledData08042019_1" `
-Server 10.1.12.131 `
-VMListFile "C:\Temp\ASRDeploymentPlanner\Profiling_Data\VMNamesList.txt" `
-DesiredRPO 5 `
-TargetRegion UKSouth `
-SubscriptionId "YourSubscriptionId" `
-Currency GBP
Assuming a successful report run, your output should look like this:
The file path to your new report can be found at the end of the console output.
Analyse and Work With the Report
When you open your report you’ll notice there are 6 worksheets, let’s have a look at each one in turn:
As the name suggests, this worksheet provides a summary of our report based on the parameter values we fed it. From here we can immediately see if we have any VMs that are incompatible with Azure, but there’s a section dedicated to that later, so let’s move on. If you want a full explanation of each of these fields, you’ll find it HERE
This worksheet is a biggie, so we’ll break it down into sections:
The first one isn’t really a section in its own right but bears mentioning as you can view some specifics for your report and change the desired RPO requirement (choices here are 5, 15 and 30 minutes):
Again, this part is fairly self-explanatory. One of the nice features is the ability to click the links and jump straight to the section you want to view, e.g. to check why a given VM isn’t compatible.
Required Network Bandwidth (Mbps)
Here we can see what network bandwidth is required to meet our configured RPO for X percentage of the time. As mentioned above, we can change the Desired RPO at the top of the page, which will have an impact on this section. You’ll also notice it’s recommending we use ExpressRoute; Microsoft recommend it for all enterprise ASR deployments, but I’ll leave that choice up to you 🙂. By default ExpressRoute is costed into the output (ExpressRoute costs only, not network supplier costs), but more on this later.
Required Azure Storage Accounts
You’ll remember from earlier that by default the report is set to use managed disks, as such we don’t need to concern ourselves with this section beyond the fact it’s advising we make use of Standard storage based on the observed VM disk IO and data churn.
Required Number of Azure Cores and Required On-Premises Infrastructure
In this section we can see how many vCPUs we’re likely to need in our Azure subscription based on the VM SKUs chosen by the report…more on this later.
We’re also given a breakdown of how many configuration and process servers we’re likely to require to allow replication of our VM estate. There are also a couple of handy links on how to manage subscription core limits and relevant how-to guides for deploying the required on-premises components, but I’ll be covering that deployment in a later guide.
Recommended VM Batch Size for Initial Replication
In this section we can see how many VMs should be batched together for their initial replication given X amount of bandwidth and a 72 hour target. It also shows how much bandwidth will be required as the on-premises environment grows while still looking to meet the same 72 hour initial replication target.
In this section we’re given an indicative cost, which can be switched between “Month” and “Year” by selecting from the drop-down. The provided chart shows how much of the whole each component makes up, as both a percentage and a currency amount. Clicking the “View detailed cost analysis” link takes you to the “Cost Estimation” worksheet, where we can modify the recommendations, add VMs etc., which will be reflected on the overview…but more on that later.
Growth Factor and Data Percentiles
This last section shows the growth factor and percentiles that were used. As mentioned earlier, you can change the growth factor when creating the report, as for the performance metrics percentiles, these can be changed by modifying the file named “ASRDeploymentPlanner.exe.config” in the root of the ASR Deployment Planner Tool folder.
Microsoft don’t recommend changing these values, but if you have reason to, you would modify the lines below and save the file:
<add key="WriteIOPSPercentile" value="95" />
<add key="ReadWriteIOPSPercentile" value="95" />
<add key="DataChurnPercentile" value="95" />
VM Storage Placement
The information in this section is based on the VM storage IOPS and data churn gathered during the profiling task. As such you may be presented with two rows, one with “Standard” (HDD) as the “Replication Storage Type” and one as “Premium” (SSD). The last column in the table, “VMs to Place” shows which VMs will be deployed onto each storage type.
Here’s a quick breakdown of each of the remaining columns and their relevance:
- Suggested Prefix – Exactly that, a prefix for the Storage account name that meets Microsoft suggested naming convention, change at your leisure.
- Suggested Account Name – You would replace <standard> with a name of your choosing e.g. gemmystorageaccount.
- Log Storage Account – Replication logs are stored with the VMs when the storage type is “Standard”, when “Premium” a separate storage account is created.
- Suggested Log Account Prefix/Name – Same as above
- Placement Summary – Provides a view of the overall VM load on the storage account, i.e. number of VMs, IOPS, and the number and type of disk SKUs.
- Virtual Machines to Place – A list of which VMs should be deployed to each storage account for optimal performance.
We are also provided with a totals breakdown, which makes much more sense if your replication requires more than a single storage account and you’re not using managed disks…which you should be! 🙂
The following section shows a breakdown of each compatible VM picked up during the profiling task. It includes the following:
- Suggested Storage Type
- Azure disk SKU
- Azure VM SKU
- Assigned VMware vCPU and RAM
- Number of NICs
- Boot Type
- OS Type
One thing to pay attention to is the “VM Compatibility” column as this can display one of two possible values, “Yes” and “Yes*”. The official documentation says the following on the subject of “Yes*”:
Now this may be true, but in both my case and Microsoft’s example it appears to be because of the OS version of the VM, specifically:
“Unable to get version information for the operating system”. In my case, this OS version is supported in Azure so we’re good.
In Azure, disks are deployed using specific SKUs, each with a set size and set IOPS and throughput limits. When replicating our VMs to Azure, disks will be matched to the closest SKU, e.g. if your VM OS disk is 80 GB, an S10 disk (128 GB) will be deployed as it’s the closest match. However, it’s not always quite so straightforward, as the rules differ slightly depending on the storage type:
For “Standard” storage, disk SKUs are provisioned based on the disk’s used capacity, e.g. if your OS disk is 128 GB but the used capacity is only 45 GB, an S6 will be deployed as it’s the closest match up at 64 GB.
For “Premium” storage, disks are deployed as the closest match up to the size of the disk, irrespective of the used capacity, e.g. if your OS disk is 128 GB, a P10 will be deployed as it’s the closest match up at 128 GB.
You can find a full list of the available Azure disk SKUs and their associated limits HERE
You can also find additional information for this section on the official documentation HERE
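To make the two matching rules above a little more concrete, here's an illustrative sketch — not the tool's actual logic, and with the SKU tables abridged to the smaller sizes — of how a disk might be matched to a SKU:

```powershell
# Illustrative only: simplified SKU matching per the rules described above.
# Standard matches on used capacity; Premium matches on provisioned disk size.
# SKU size tables abridged - see the official Azure disk SKU documentation for the full list.
function Get-SuggestedDiskSku {
    param(
        [ValidateSet('Standard','Premium')] [string] $StorageType,
        [int] $DiskSizeGB,
        [int] $UsedCapacityGB
    )
    $matchSize = if ($StorageType -eq 'Standard') { $UsedCapacityGB } else { $DiskSizeGB }
    $skus = if ($StorageType -eq 'Standard') {
        [ordered]@{ S4 = 32; S6 = 64; S10 = 128; S15 = 256; S20 = 512; S30 = 1024 }
    } else {
        [ordered]@{ P4 = 32; P6 = 64; P10 = 128; P15 = 256; P20 = 512; P30 = 1024 }
    }
    # Return the first (smallest) SKU that can hold the matching size
    foreach ($sku in $skus.GetEnumerator()) {
        if ($matchSize -le $sku.Value) { return $sku.Key }
    }
}

# Standard: 128 GB disk with only 45 GB used -> S6 (64 GB)
Get-SuggestedDiskSku -StorageType Standard -DiskSizeGB 128 -UsedCapacityGB 45
# Premium: 128 GB disk -> P10 (128 GB), used capacity ignored
Get-SuggestedDiskSku -StorageType Premium -DiskSizeGB 128 -UsedCapacityGB 45
```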
- OS disk size is greater than 2048 GB – Managed Disk OS disk limit.
- Data disk size is greater than 4095 GB – Official Managed Disk documentation actually shows this limit to be 32,767 GB.
- This may be an ASR limitation or just out of date documentation.
- Total VM Size (Replication and Test Failover) exceeds the supported storage account limit of 35 TB – You should only see this error when ASR is suggesting premium storage for the VM based on the required IOPS/Data churn.
- This limit is 35 TB for Premium storage accounts and 2 PB for Standard accounts (US and Europe only, all other regions are 500 TB).
- Source IOPS exceeds supported storage IOPS limit of 7500 per disk.
- Source IOPS exceeds supported storage IOPS limit of 80,000 per VM – Provided by the “Standard_L80s_v2” VM SKU at the time of writing.
- Average data churn exceeds supported ASR data churn limit of 10 MB/s for average I/O size of the disk.
- Average data churn exceeds supported ASR data churn limit of 25 MB/s for average I/O size of the VM (all disks combined).
- Peak data churn across all disks on the VM exceeds the maximum supported ASR peak data churn limit of 54 MB/s per VM.
- Average effective write IOPS exceeds the supported Site Recovery IOPS limit of 840 per disk.
- Calculated snapshot storage exceeds the supported snapshot storage limit of 10 TB.
- Total data churn per day exceeds supported churn per day limit of 2 TB by a Process Server.
NOTE: Most of the above is accurate at the time of writing but is likely subject to change as is typical of cloud services, link to official documentation HERE
The following table shows ASR’s limits at the time of writing. These limits are based on Microsoft’s own testing and may differ in reality depending on I/O size, overlap etc.
When you’re running through your own planning it’s worth checking in with the official documentation, HERE in case these figures have changed.
This is where things get a little more interesting, assuming you’re not already completely captivated of course 🙂
The Cost Estimation worksheet is where we can make changes to our proposed ASR deployment and, this being a macro-enabled XLSM, those changes will be reflected in our indicative costs.
There’s a lot of detail in this worksheet and most of it is self-explanatory so I won’t go through it all but here’s a quick idea of what it should look like:
Let’s run through a quick scenario. What the report has given us is great but let’s say we want to make a few changes and get an idea of what this may cost us.
- I know I’m about to deploy another VM with an almost identical footprint to “df-jumpbox” so I can update “Number of VMs” to “2” to account for that
- I also know that the required IOPS requirement of these VMs is going to require “Premium” storage soon, so I’ll update the following:
- Change “IaaS size (Your selection)” to a VM SKU that supports premium disks, in this case “Standard_DS3_v2”
- Change the “Storage type” to “Premium”
- The default “Number of DR-Drills in a year” is “4”, I don’t need that so will change it to “2”
- However, I want to run those drills for “14” days instead of the default “7” so will update “Each DR-Drill duration”
- I don’t have Software Assurance on my Windows licenses so in the event of a failover I’ll need PAYG (Pay-As-You-Go) licensing, so I’ll update “Azure Hybrid Benefit…” (AHB) to “No”
- Now I want to add a new VM that wasn’t covered by the profiling task as it doesn’t yet exist, we can add that by clicking “Insert Row” and populate as required:
- Give the new VM a name…“DFNewVM”.
- We only want 1 of them.
- It’s gonna be a pretty beefy VM so we’ll select the “Standard_DS12” SKU.
- It’s also gonna be a pretty busy VM so we’ll want “Premium” storage.
- The total storage footprint of the VM will be “2048” GB.
- We’ll apply the same DR-Drill rules to our other VMs.
- Our “OS Type” is “Microsoft Windows Server 2016 (64-bit)”.
- Our “Data Redundancy” is “LRS” – This is the only supported option for Premium storage.
- Again, we have no Software Assurance on our Windows license, so “No” to AHB
- To update the indicative costs to include this new VM, click “Re-calculate cost”…OOOOFT, it about doubled our costs 🙂
NOTE: Keep in mind that any VMs you add in this way haven’t been run through the profiling task and therefore the report can’t account for IOPS/data churn, so the VM could potentially be incompatible with ASR or Azure. Testing would be up to you in this case…or just deploy the VM and re-run the profiling task if you have the time.
Here’s a quick look at what we’ve changed (highlighted in the screenshot):
Let’s make one or two more changes to our worksheet. Let’s remove ExpressRoute and add a VPN Gateway:
- Under “Site to Azure Network” > “ExpressRoute” we’ll select “N/A” from the dropdown.
- And for “VPN Gateway type”, we’ll select “VpnGw1”
With those changes made, we can have a look at the “Cost estimation report” to see a breakdown of the indicative costs for our ASR environment, shown for both the month and year.
These changes are also reflected in the “Cost Estimation” overview on the “Recommendations” worksheet…which is nice.
OK, so that about wraps things up for the ASR Deployment Planner tool. In the next section of the guide I’ll be running through the ASR setup tasks, including how to prepare the environment for deploying the configuration server. Hope to see you then.