To recap, below you will find the previous articles:
- ThinIO facts and figures, Part 1: VDI and RAM caching.
- ThinIO facts and figures, Part 2: The Bootstorm chestnut.
Off topic note:
Two years ago, at an E2EVC event, the concept behind ThinIO was born as a mad-scientist idea amongst peers.
If you are lucky enough to be attending E2EVC this weekend, David and I will be there presenting ThinIO and maybe, just maybe there will be an announcement. Our session is on Saturday at 15:30 so pop by, you won’t be disappointed.
Back on topic:
So here’s a really interesting one: Remote Desktop Services (XenApp / XenDesktop hosted shared), or whatever you like to call it. RDS presents a really fun caching platform for us, as it allows us to deal with a much higher IO volume and achieve deeper savings.
We’ve really tested the heck out of this platform, looking at how we perform on Microsoft RDS, Horizon View RDS integration and Citrix XenSplitPersonality with Machine Creation Services.
The figures we are sharing today are based on the following configuration and load test:
- Citrix XenDesktop 7.6
- Windows Server 2012 R2
- Citrix User Profile Manager
- 16 GB of RAM
- 4 vCPUs
- Login VSI 4.1 medium workload, 1-hour test
- 10 users
- VMFS 5 volume
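To make the cache sizes in the tests below easier to relate to the configuration above, here is a quick back-of-the-envelope sketch (illustrative arithmetic only, not a ThinIO tool) showing what each tested cache size works out to per user and as a share of the VM’s 16 GB of RAM:

```python
# Sizing arithmetic for the test rig described above (illustrative only):
# 10 users sharing a single in-guest cache of 512, 1024 or 2048 MB.
USERS = 10
HOST_RAM_MB = 16 * 1024  # 16 GB in the test VM

for cache_mb in (512, 1024, 2048):
    per_user = cache_mb / USERS
    ram_share = cache_mb / HOST_RAM_MB * 100
    print(f"{cache_mb} MB cache -> {per_user:.0f} MB/user, {ram_share:.1f}% of VM RAM")
```

Even the largest cache tested here costs only around an eighth of the VM’s RAM.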
Diving straight in, let’s start by looking at the volume of savings across the three cache sizes.
Reviewing the details for a moment:
Running at least three repeated tests per cache size, we found that even at the lowest entry point we would support (50 MB per user), we saw phenomenal savings of over 70% on write IO.
No pressure no diamonds!
To put that into perspective: with a 512 MB cache for 10 users, our cache reached maximum capacity at the second user login. With 8 users still left to log in, the cache full, and still an hour’s worth of load testing ahead, our ThinIO technology was under serious pressure.
This is key to why ThinIO is such a great solution. We don’t just perform well until the cache fills: we don’t require architecture changes, we don’t care about your storage type, and we have no lead times or install days. We carry on working with whatever is available, taking a large amount of pressure off storage IOPS and data throughput.
With the figures above, you can see just how well the intelligence behind our cache can scale even when it faces such a steep workload.
Below you will find a breakdown of each test:
512 MB cache:
Breaking down the figures for the 512 MB cache test, it’s clear to see just how well ThinIO deals with the tiniest of caches:
When we put this side by side with our baseline averages, you can see we take a huge chunk out of that spiky login pattern and continue to reduce the steady-state IO as the test progresses:
So let’s move up and see how we get on!
1024 MB cache:
Doubling our cache size, we see a great increase in both read and write savings, as you’d expect.
With 100 MB of cache per user, and the average user profile in the test three times that size, we are still under pressure. As we natively favour optimising write IO over read, you’ll see the bulk of the improvements happen on the write side when we’re under pressure, as illustrated in this test:
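To illustrate the general idea of favouring writes over reads under cache pressure, here is a minimal toy sketch: an LRU-style cache that evicts read-cached blocks before write-cached ones when space runs out. This is purely my own illustration of the concept, not ThinIO’s actual algorithm, and all names in it are hypothetical:

```python
from collections import OrderedDict

class WriteFavouringCache:
    """Toy LRU cache that evicts read-cached blocks before write-cached
    blocks under pressure. Illustrative only -- NOT ThinIO's real algorithm."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block_id -> (kind, data); oldest first

    def _evict(self):
        # Prefer evicting the least-recently-used "read" block first.
        for block_id, (kind, _) in self.blocks.items():
            if kind == "read":
                del self.blocks[block_id]
                return
        # No read blocks left: fall back to plain LRU eviction.
        self.blocks.popitem(last=False)

    def put(self, block_id, data, kind):
        # kind is "read" (populated by a read miss) or "write" (dirty data).
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # refresh recency
        elif len(self.blocks) >= self.capacity:
            self._evict()
        self.blocks[block_id] = (kind, data)
```

Under this policy a full cache sheds its read-only blocks first, so write savings hold up even when the working set outgrows the cache, which matches the behaviour described above.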
With more cache available during the peak IO point, we make further savings on write:
2048 MB cache:
And at our recommended value of 200 MB per user for Remote Desktop Services, the results are phenomenal! At this size, even while still below the 300 MB per-user profile mark, read IO gets a really good boost and the write IO saving is well over the 95% mark!
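To put the recommended 200 MB-per-user figure into context on a fuller session host, here is a quick hypothetical sizing sketch (the 50-user count and 64 GB host RAM are my own illustrative assumptions, not figures from the tests above):

```python
# Hypothetical sizing example: cache footprint at the recommended
# 200 MB per user on a larger RDS session host.
PER_USER_MB = 200   # recommended value from this post
users = 50          # hypothetical session count
host_ram_gb = 64    # hypothetical host RAM

cache_mb = users * PER_USER_MB
ram_share = cache_mb / (host_ram_gb * 1024) * 100
print(f"Total cache: {cache_mb} MB ({ram_share:.1f}% of {host_ram_gb} GB RAM)")
```

Even on a densely loaded host, the recommended cache works out to a modest slice of total RAM relative to the IO savings shown above.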
But there’s more!
As I pointed out in the previous blog, IOPS are just one side of the story. Reducing data throughput to the disk is also a big benefit when it comes to storage optimisation, and as you can see, we make a big difference:
So there you have it: with ThinIO, a simple, in-VM solution, you can seriously reduce your IO footprint, boost user performance and achieve greater storage density per virtual machine or Remote Desktop Services host.
In the meantime:
If you would like the chance to test ThinIO pre-release, you’ll find access to the public beta below. Thank you for your time and happy testing!