
Friday, January 11, 2013

2013 is here & we’re excited for what the hosting industry has to offer in the coming 12 months. We had the chance to catch up with Seann & Lorne from GridVirt over the holidays to chat a little about their SSD offering & some thoughts on the industry.

How did the idea for GridVirt come about?

Seann: I registered GridVirt Inc. in early 2010 and I’ve had the domain since 2008-2009. The idea was to build a service that excluded as many single points of failure as possible. Though I’ve worked extensively with both clustered storage and HA-SAN (iSCSI & FC), each has its faults. I decided the best path for both performance and reliability was local storage with remote replication in an HA-AF (high availability w/ automatic failover) setup.

Last year, when SSDs really became mainstream, I knew it was the right time to start GridVirt. SSDs offer a level of performance and reliability that I’ve never seen with HDDs. Mission-critical, high-performance hosting goes hand in hand with SSD storage.

Lorne: I had posted a thread on WHT looking for a partner to start a VPS/cloud-based hosting company. I was contacted by 7-8 people over the course of a month and none of them really struck me as people I wanted to get into business with. Seann contacted me about 5 weeks after I made the post. We got along well, and it immediately became clear he really knew his stuff when it comes to the many different facets of building, maintaining, and running a hosting company.

What are common things that people use your servers for?

Lorne: We see a lot of DNS servers, Magento stores, high-traffic WordPress blogs, and gaming servers.

Seann: We have plans in motion to start offering shared hosting and game servers on top of our virtualization infrastructure by the end of Q1 this year. The idea is to give those that don’t have the knowledge to administer a VM the opportunity to take advantage of our high performance, high availability service.

What’s been some of the most challenging aspects of starting up your business?

Lorne: The most challenging has been carving out a spot in our niche of SSD-based hosting. The hosting industry as a whole is extremely competitive in every aspect, whether it be prices, technology, infrastructure, or even keywords. Building a reputation and gaining trust is especially hard with all the new hosting companies that start up and then disappear, leaving their customers high and dry. Coming up with something that makes us stand out was also pretty difficult; it took countless hours of testing and tweaking before we came up with a combination that would let us really compete at the top of the heap for pure performance and performance-to-price ratio.

What are some of the Pros & Cons of a pure SSD offering?

Lorne: The Pros would definitely be excellent IO performance, better overall server performance thanks to the elimination of the IO bottleneck, and grinning like a kid in a candy shop whenever you do anything even remotely resource-intensive and see just how fast it runs.

For Cons I can really only think of one, and that would be less storage per dollar compared to traditional platter-drive-based hosting. We try to compensate for this with our SAN storage service for those who need the extra space.

Tell us a little bit about the difference between a normal & a performance-optimized node?

Seann: The first thing we do is remove the IOPS and MB/s limits (by default all VMs are provisioned with a 60,000 IOPS / 600MB/s soft limit); this adds about 200-300MB/s and 20,000 IOPS. Next we enable writeback caching on the VM’s storage volume(s), which gives a 50-80% jump in performance at the cost of system memory. For the last step we manually go into the client’s VM, tweak the kernel and drivers, and make sure that noop is set as the IO scheduler. On average this adds another 5-10% overall performance.
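As a rough illustration of that last step, here is a minimal Python sketch of how the noop scheduler could be selected inside a Linux guest with virtio disks. The device glob and the script itself are our own assumptions for illustration, not GridVirt’s actual tooling.

```python
import glob

def set_noop_scheduler():
    # Each virtio disk exposes its scheduler at /sys/block/vdX/queue/scheduler,
    # shown as e.g. "noop deadline [cfq]"; writing "noop" selects it immediately.
    for path in glob.glob("/sys/block/vd*/queue/scheduler"):
        with open(path, "r+") as f:
            if "noop" in f.read():
                f.seek(0)
                f.write("noop")

if __name__ == "__main__":
    set_noop_scheduler()  # must be run as root inside the guest
```

The idea behind the pass-through scheduler is that the SSD-backed host already orders writes, so the guest skips the extra scheduling work.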

Side note: Some will balk at the idea of writeback cache, but in reality, with redundant power supplies and a DC that has an N+2 power infrastructure and a 100% power SLA, power loss isn’t likely. And unlike a hardware RAID controller without BBU protection, a KVM RAM-cached VM doesn’t suffer data corruption in the event of power loss. Worst-case scenario, any data in the RAM cache is simply never written out to the SSDs.

1.2Gb/s is fast; for typical real-world usage it’s possibly unnecessary, right? Do you think speeds that fast benefit the host more than the actual user (i.e. they let you fit more users on a node)?

Seann: Most don’t, and probably won’t ever, need that kind of performance on a sustained basis, but having it available when needed allows a VM to easily jump the hurdle of an I/O bottleneck. We’ve tested VMs of the same spec on hardware nodes of the same spec with both traditional HDD and SSD storage, and the SSD VMs easily outperform the HDD VMs by a factor of 2-3 at most tasks. In the end it’s all about how fast you can move the data.

If you really want to know the secret to absolute performance, get a VM with a lot of RAM and put your data on a RAM disk. It’s actually not that complicated, but if it were an affordable option everyone would be doing it. Until you can fill a server with 500GB+ of RAM for $1/GB, SSDs are the next best thing.
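For what it’s worth, the RAM-disk idea is roughly as simple as mounting a tmpfs inside the VM and pointing hot data at it. The mount path and size below are arbitrary example values, not a GridVirt feature.

```python
import subprocess

def mount_ramdisk(path="/mnt/ramdisk", size="8G"):
    # tmpfs lives entirely in RAM, so its contents vanish on reboot or power
    # loss; only reproducible or replicated data belongs here.
    subprocess.run(["mkdir", "-p", path], check=True)
    subprocess.run(
        ["mount", "-t", "tmpfs", "-o", f"size={size}", "tmpfs", path],
        check=True,
    )

if __name__ == "__main__":
    mount_ramdisk()  # requires root inside the VM
```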

Though it would be more profitable to pack a hardware node full of VMs, in reality we have a set limit per hardware node. This limit is a combination of both resource and performance availability. We test our hardware nodes and set this limit to ensure that the numbers we publish are easily replicated. It’s against both the customer’s interest and ours to oversell resources or performance in any way.

What are your thoughts on traditional drives with SSD caching?

Seann: It’s actually the best option for large-volume, high-performance storage. We utilize SSD caching on our ZFS-based SAN servers. The downside is that SSD caching only helps the second time the data is read. In reality the overall performance gain is about 5-10x that of HDD-only storage, but that is still less than 1/10th the performance of pure SSD storage.

The numbers:

2x SSD RAID-0 (cache) + 6x HDD RAID-50 (storage) = 8K-20K IOPS
5x SSD RAID-0 (1 hot spare) = 200K+ IOPS
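To see why the cached pool sits an order of magnitude below the pure-SSD pool, here is a rough back-of-the-envelope model of effective IOPS at different cache-hit ratios. The HDD-array and cache constants are our own assumptions; only the 200K pure-SSD figure and the 8K-20K range come from the numbers above.

```python
# Rough model of an SSD read cache in front of an HDD array: only repeat
# reads are served at SSD speed, so effective IOPS sits between the two.
HDD_ARRAY_IOPS = 1_500    # assumed 6x HDD RAID-50
SSD_CACHE_IOPS = 60_000   # assumed 2x SSD RAID-0 cache
PURE_SSD_IOPS = 200_000   # 5x SSD RAID-0, figure quoted above

def effective_iops(hit_ratio):
    # Average time per I/O is the hit-weighted mix of cache and disk service times.
    t = hit_ratio / SSD_CACHE_IOPS + (1 - hit_ratio) / HDD_ARRAY_IOPS
    return 1 / t

for hit in (0.5, 0.8, 0.9):
    print(f"{hit:.0%} cache hits -> ~{effective_iops(hit):,.0f} IOPS")
# Even at 90% cache hits (~12K IOPS) the cached pool sits inside the 8K-20K
# range quoted above and an order of magnitude below the pure-SSD pool.
```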

You also offer traditional ZFS storage via SAN. Do you think it’s important for SSD hosts to still cater to those clients who need more storage than is perhaps economically viable with SSDs alone?

Lorne: Absolutely. Everything on the net is getting bigger, and the option for additional non-SSD storage needs to be available to those clients that need it.

Not many SSD hosts offer unmetered bandwidth; what are the economics of this? What sort of uplink do you need to sustain your current usage?

Seann: It’s simple: we limit port speed rather than total data transfer. If the port is too slow, upgrades are very cheap considering the amount of data you can push through the port. Some people still want the option to burst to higher speeds, so we will be offering both as an option by the end of Q1 2013. We will still be limiting the port to 200/1000Mbps upload/download max to keep port contention low.
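As a rough sense of those economics, here is the back-of-the-envelope arithmetic (our own numbers, decimal units) on how much data a capped port can actually move in a month.

```python
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 seconds in a 30-day month

def max_monthly_tb(port_mbps):
    megabits = port_mbps * SECONDS_PER_MONTH
    return megabits / 8 / 1_000_000  # megabits -> megabytes -> terabytes

for speed_mbps in (200, 1000):
    print(f"{speed_mbps} Mbps flat out for a month ≈ {max_monthly_tb(speed_mbps):.0f} TB")
# 200 Mbps ≈ 65 TB and 1000 Mbps ≈ 324 TB per month: far more than a typical VM
# pushes, so the port cap, not a transfer quota, is the practical limit.
```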

The hardware nodes currently have 1-2 Gigabit uplinks and the switches have 2-4 10-Gigabit uplinks, depending on usage. We do our best to keep the data moving fast.

What are you guys working on in 2013?

Lorne: We have lots planned for 2013; it will be an exciting year for us! We are rolling out a new site and billing system, new datacenter locations, a gaming division, shared hosting, open-source project support, an NPO (non-profit organization) program, and revised plans that offer more flexibility and an even better price:resource:performance ratio than what we currently offer. We are also upgrading hardware and are currently testing new hardware nodes; the first round of testing and benchmarking has been very exciting, and we look forward to getting these even faster nodes into production. We will be adding shared hosting and game servers to our service offering, and, best of all (and highly requested), we will also be expanding into Europe by the end of Q1 2013.

What are your thoughts on where the industry is heading this year?

Seann: I think we will see a continued shift from both the shared and dedicated markets to virtualization. Virtualization, when done right, is one less obstacle between you and the objective. As this shift continues I see a much larger need for app/single-purpose operating systems. Base operating system distributions will always have their place, but the ability to simply install an OS and have everything you need is key. We have deployed all of the TurnKey Linux apps on our ISO-Store. This year we will also be taking requests and building some apps of our own on top of TurnKey Core for our upcoming Game Server service.

If you could host any website in the world on GridVirt, what do you think would give you guys a decent technical challenge?

Seann: We host VMs, and the real technical challenge is keeping those VMs online 100% of the time. I, like many others, stand by the belief that it isn’t a cloud without automatic failover. Because we use local SSD storage, the only way to recover from a catastrophic hardware failure is to restore from our SAN replication/backup servers. Though much faster than a manual recovery, it still takes 30-45 minutes (at about 10GB/minute) to complete. It’s just a tradeoff we have to make in order to enjoy the performance of local SSD storage.
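That 30-45 minute window follows directly from the quoted restore rate; the quick check below just makes the implied data volume explicit (roughly 300-450GB of VM data per node, our inference from his numbers).

```python
# Quick check on the recovery math: at the quoted ~10GB/minute restore rate,
# a 30-45 minute window implies roughly 300-450GB of VM data per node.
GB_PER_MINUTE = 10

def restore_minutes(total_gb, rate=GB_PER_MINUTE):
    return total_gb / rate

for gb in (300, 450):
    print(f"{gb} GB of local SSD data -> ~{restore_minutes(gb):.0f} minute restore")
```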

Failure is rare, but we saw our failover system in action twice last year, and both times it was a nail-biting experience. All I could do was wait and hope that everything worked as it should. It worked out perfectly, but I’m extremely paranoid about it, so we keep weekly backups of our SAN servers offsite just in case all else fails. It never hurts to keep backups of your backups. ;)

What’s your view on benchmarking in general? Do you think it has its place in helping hosts get exposure & win customers?

Lorne: I think it does have a place and is needed. It is amazing how word travels in the hosting community, and a host that consistently shows great performance and stability will gain exposure and in turn see an increase in customers. That being said, I think people need to take benchmarks with a grain of salt and not choose a host solely because it has great benchmarks.

It is common practice for hosts that know someone is going to run benchmarks to put that person on a lightly populated node in order to influence the results. People interested in choosing a host based on benchmarks need to look at results from many different people over a long period of time to get a truer picture, something that ServerBear is doing a great job of providing.

View GridVirt’s Plans & Benchmarks