Thursday, April 19, 2018

Throughput comparison of Wasabi vs Amazon S3

With the removal of egress fees, using Wasabi for S3-compatible storage is a very cost-effective alternative to Amazon S3.

I wanted to verify whether Wasabi could live up to their claims of being faster than S3 storage. You can download a copy of their benchmark report and take a look at their method and results, but I'll summarize:

  1. They claim lower latency per operation and a higher number of operations per second than S3.
  2. In their testing with only 10 threads and very small objects (1MB), they claim to be 3x faster than S3.
  3. For larger objects (10MB), they claim to achieve the same throughput as S3.
I wasn't interested in their claims of lower latency and higher operation rates; I'm purely interested in throughput, which they didn't test in their white paper.

Wasabi currently has only one data center, in US East.
I set up the following tests comparing it to S3 East and S3 West.
Each test was conducted with 40 threads per client against 40 large objects of approximately 4GB each.
Each client had a single 10Gb NIC, which limited the performance of S3 West when accessed from AWS West.
S3 is capable of much higher throughput than documented below if all traffic remains within the same data center.
  1. Upload and download throughput to/from AWS West
  2. Upload and download throughput to/from Azure East
  3. Multi client (3 clients) download throughput to simulate a real world deployment
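The multi-threaded test method above can be sketched roughly as follows. This is a minimal illustration, not the exact harness used for these results: it assumes boto3 is installed and credentials are configured, and the endpoint, bucket, and key names are hypothetical placeholders.

```python
# Sketch of a 40-thread throughput test. The timing helpers are generic;
# the S3 access requires boto3 and valid credentials.
import concurrent.futures
import time

def throughput_mb_s(total_bytes, seconds):
    """Aggregate throughput in MB/s (decimal megabytes, as in the table below)."""
    return total_bytes / seconds / 1_000_000

def run_transfers(transfer_fn, keys, threads=40):
    """Run one transfer per key on a thread pool; return aggregate MB/s."""
    start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as pool:
        sizes = list(pool.map(transfer_fn, keys))  # each call returns bytes moved
    return throughput_mb_s(sum(sizes), time.monotonic() - start)

def make_download_fn(endpoint_url, bucket):
    """Build a per-key GET callable (requires boto3 and configured credentials)."""
    import boto3  # imported lazily; the timing helpers above don't need it
    s3 = boto3.client("s3", endpoint_url=endpoint_url)
    def download(key):
        body = s3.get_object(Bucket=bucket, Key=key)["Body"]
        return len(body.read())
    return download

# Example usage (hypothetical bucket and key names):
# download = make_download_fn("https://s3.wasabisys.com", "benchmark-bucket")
# keys = [f"large-object-{i}.bin" for i in range(40)]  # ~4GB each
# print(f"{run_transfers(download, keys):.0f} MB/s")
```

The same `run_transfers` helper works for uploads by passing a PUT callable instead of a GET one.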

Throughput results (MB/s):

Client / test                        Wasabi   S3 East   S3 West
EC2 West put                            222       234       451
Azure East put                          430       399       287
EC2 West get                            352       436       809
Azure East get                          495       585       387
Multi-client download (3 clients)       571       927      1329

From the results and graph you can see that for a single client, Wasabi is able to sustain approximately the same throughput as S3. S3 has a significant throughput advantage over Wasabi if you are in the same AWS data center, which is expected.

The concerning result is that Wasabi can only sustain approx. 5Gbps throughput across all clients, i.e. if you have 10 clients they will only be able to achieve 500Mbps each, and so on. For many applications this would be acceptable, but keep it in mind before choosing Wasabi as your storage platform. It is possible that as Wasabi scales out they will scale out the platform bandwidth, but given the low cost it is also possible that this is the expected maximum performance.

From a compatibility perspective, Wasabi claims to be 100% compatible with S3; however, they currently implement only a limited subset of the S3 user and bucket policies. I strongly advise testing whether your required policies are implemented yet.

Update: Wasabi responded that they throttle on a per-account basis. I was able to verify this: by creating a second account I achieved 915Mbps download speed with 3 clients (2 accessing one bucket and 1 accessing the other). While this isn't useful for a single client, it does at least show that the limit is in software and not a hard limitation of the platform.
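As a quick sanity check, the aggregate figure in the table and the per-client numbers quoted above are consistent, converting decimal megabytes per second to bits per second:

```python
# Back-of-the-envelope check of the per-account ceiling: convert the measured
# 3-client Wasabi total (571 MB/s) to Gbps, then split ~5 Gbps across clients.
aggregate_mb_per_s = 571
aggregate_gbps = aggregate_mb_per_s * 8 / 1000   # bytes -> bits, MB -> Gb

clients = 10
per_client_mbps = 5000 / clients                 # even share of ~5 Gbps

print(round(aggregate_gbps, 2))   # ~4.57, i.e. roughly 5 Gbps
print(per_client_mbps)            # 500.0 Mbps per client
```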

Wednesday, June 18, 2014

DPM 2012 manual replica creation for Hyper-V

It seems that Microsoft doesn't have any decent guidance on how to manually create a DPM replica for Hyper-V backups.
Consider that you have a Hyper-V host in a remote site and want to back it up over a slow WAN link.
e.g. You have a few VMs totaling hundreds of GB or even TB of data and a T1 link or similar. Replicating the initial data over the wire would take several weeks.
Instead, you can copy the VMs to disk, ship the disk to the site with the DPM server, and import the data. The details of how to do this are poorly documented.
Below is what I worked out, with a little help from this blog post:

Copy the data from the remote site
1. Attach a removable hard drive to a workstation or server in the source site.
2. Copy the data to be backed up to the removable drive. For a VM you will need to shut down the VM and copy the entire VM folder from the host or CSV.
3. Move this removable hard drive to the destination site and attach to a workstation or server.
Import the data into DPM
1. RDP to the DPM server as a local administrator account (not a domain account)
2. Open DPM as a domain account (shift-rightclick)
3. Under Protection, All Protection groups, expand the new protection group member and under Details, Replica path click on the “Click to view details” link. Right click on the destination and copy.
4. Paste this into Notepad.
5. Append the full path of the data to be backed up to the end of this path, starting with "N-Vol" where N is the source drive letter. E.g. for my VM called TESTVM on a CSV called TESTCSV1, my full path was "C:\Program Files\Microsoft System Center 2012 R2\DPM\DPM\Volumes\Replica\Microsoft Hyper-V VSS Writer\vol_0c2c97b6-327a-401a-b65f-d253a607fdcd\b20116a3-a646-4480-afe9-aeb322887251\Full\\C-Vol\ClusterStorage\TESTCSV1\TESTVM"
The GUIDs in here are specific to your configuration, so you must copy them from what you pasted into Notepad.
6. Open an elevated CMD prompt, type mkdir followed by the full path from Notepad that you just created, and run it. Make sure you enclose the path in quotes since it includes spaces.
7. This path will most likely be too long for Windows Explorer to deal with, so we need to map the folder to a drive letter with the subst command: type subst X: followed by the full path from Notepad again, substituting X for any free drive letter on your system. Make sure you enclose the path in quotes.
8. Copy the VM files from the removable media to this drive. You must replicate exactly the file and folder structure of the source.
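The path assembly in steps 4 and 5 can be sketched as follows. This is only an illustration of the string structure: the GUIDs shown are placeholders, and the real values must be copied from the "Click to view details" dialog exactly as pasted into Notepad.

```python
# Sketch of building the DPM replica target path. The vol_/datasource GUIDs
# are placeholders, not real values.
replica_root = (r"C:\Program Files\Microsoft System Center 2012 R2"
                r"\DPM\DPM\Volumes\Replica\Microsoft Hyper-V VSS Writer"
                r"\vol_<volume-guid>\<datasource-guid>\Full")

# The source drive letter N: becomes "N-Vol" (after a double backslash),
# followed by the rest of the source path.
source_path = r"C:\ClusterStorage\TESTCSV1\TESTVM"
drive = source_path[0]       # "C"
rest = source_path[3:]       # everything after "C:\"
full_path = replica_root + "\\\\" + drive + "-Vol\\" + rest

print(full_path)
```

Note the literal double backslash between "Full" and "C-Vol"; it appears in the real replica path in the worked example above.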
Start backing up using the copied data as the initial manual copy.
1. In DPM console under Protection, All Protection groups, expand the new protection group member.
2. Right click on the protection group member and select "Perform consistency check". This performs a block-level comparison of the files we manually copied against the files on the source system, and copies only the changed blocks over the wire.
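The idea behind the consistency check can be illustrated with a toy block-comparison sketch. This is conceptual only: the block size is arbitrary and DPM's actual algorithm is not documented here.

```python
# Toy illustration of a block-level comparison: hash fixed-size blocks on each
# side and report only the blocks that differ (those are what would be sent).
import hashlib

BLOCK_SIZE = 4096  # illustrative only; not DPM's real block size

def changed_blocks(source: bytes, replica: bytes, block_size=BLOCK_SIZE):
    """Return indices of blocks whose hashes differ between source and replica."""
    changed = []
    for offset in range(0, max(len(source), len(replica)), block_size):
        s = hashlib.sha256(source[offset:offset + block_size]).digest()
        r = hashlib.sha256(replica[offset:offset + block_size]).digest()
        if s != r:
            changed.append(offset // block_size)
    return changed

# Only the middle block differs, so only that block would cross the wire.
src = b"A" * 4096 + b"B" * 4096 + b"C" * 4096
rep = b"A" * 4096 + b"X" * 4096 + b"C" * 4096
print(changed_blocks(src, rep))  # [1]
```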

Monday, June 9, 2014

Backups on Hyper-V 2012 clusters cause hosts to run out of RAM

It seems that there is a long-standing issue when you attempt to back up a Hyper-V 2012 cluster with the following configuration:

Server 2012 Hyper-V cluster
VMs stored on CSVs
FEP 2010 installed on the host servers
Performing VM backups using any backup product causes the RAM on the CSV owner to be 100% used.
The amount of RAM used is equal to the regular RAM used on the host, plus the size of the VHD/VHDX currently being backed up.
You can watch this issue occur in real time during a backup: used RAM steadily increases until it hits 100%.
Memory pressure causes the CSV-owning node to become so slow that it is unresponsive; eventually VMs will start to shut down and the cluster will fail.
Tested and confirmed as an issue with DPM and Netbackup.

Microsoft is totally lost on this issue and, after working on it for months, is yet to find a solution. However, there do seem to be two workarounds:

1/ Uninstall FEP
2/ On top of the recommended antivirus exclusions for Hyper-V, exclude the processes corresponding to your backup product.
dpmra.exe (DPM process)
bpbkar32.exe  (Netbackup process)
bpfis.exe  (Netbackup process)

If you use another backup product you will need to research which processes to exclude.

Let me know in the comments if this helps you.

Tuesday, November 10, 2009

Default Gateway is deleted every reboot on Windows Vista and Windows 2008

If your default gateway for TCP/IP v4 on Windows 2008 or Windows Vista is deleted or disappears each time you reboot, you are not alone.

This is apparently a known bug in Windows 2008 SP2 and Vista SP2, so it is surprising that Microsoft has not yet fixed it.

If you have this problem here is how to fix it.
  • Open regedit (start -> run -> regedit)
  • Navigate to HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
  • Locate the Interface that is having the issue.
  • Right click on the DefaultGateway value and click Modify.
  • You will notice that the first line is blank and the correct gateway is on the second line. Delete the blank line and click OK.
  • Reboot and the problem is resolved.
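The fix above boils down to removing the blank first entry from the multi-string DefaultGateway value. A minimal sketch of that cleanup (the actual edit is done in regedit; on Windows it could be automated with the winreg module):

```python
# Sketch: drop empty entries from a REG_MULTI_SZ-style list of gateways,
# mirroring the manual "delete the blank line" step in regedit.
def clean_gateways(entries):
    """Remove blank/whitespace-only strings, keeping real gateway addresses."""
    return [e for e in entries if e.strip()]

print(clean_gateways(["", "192.168.1.1"]))  # ['192.168.1.1']
```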
Credit to Ilja Herlien for finding this solution, first posted here:

Tuesday, February 24, 2009

How to remove the ads from Hotmail / Live Mail

This tip will, as of February 2009, remove all advertising from Hotmail, including the right-hand frame that is only used for ads. Even if you already have Adblock Plus installed this is still very useful, as Adblock Plus cannot remove the right frame.

This method works in Firefox, Flock, Thunderbird, Iceweasel, and Opera, as well as via Greasemonkey. It will not work in Internet Explorer or Safari. (Opera and Greasemonkey users can skip straight to the page to install the style.)

Before (left) and after (right) screen shots showing the difference.

If you are using IE or Safari then first you must install a supported browser. I suggest Firefox.

You will also need the plugin called Stylish, which you can install from here:
(Opera and Greasemonkey users can skip installing Stylish and go straight to the page to install the style)

Once Stylish has installed you have to restart Firefox to activate it. If you are viewing this page with Firefox then don't forget to bookmark this page so you can come back here to complete the instructions.

After you have installed Stylish, you need to install the style sheet here:

Simply click the button labeled "Load into Stylish" (Opera and Greasemonkey users should click the "Load as a user script" button).
You can preview the code before you load it, if you would like to check what will run on your computer.

Once the new style is installed check out how Hotmail looks. All the ads and the right frame should be gone.