Pivot3, Behind the Performance Numbers

February 1, 2018

By Mike Beevor - Sr. Manager Technical Product Marketing

With the release earlier this year of Pivot3’s Acuity software platform, there has been a great deal of interest in the performance of the Acuity data path, which we architected specifically for NVMe flash and policy-based Quality of Service (QoS), and in how it compares to other hyperconverged infrastructure (HCI) vendors whose products are not designed around NVMe. So, we compared the results of testing done in the Pivot3 lab to results posted by a couple of other HCI vendors in their technical marketing papers and blogs.

Simply put, the results are impressive.

In database workload testing using HammerDB, a well-recognized and accepted benchmark tool, Pivot3 comfortably outperformed both Nutanix and VMware vSAN in transactions per minute (TPM) on similarly configured nodes.

Pivot3 Supports More Transactions 

The key takeaways from these results are:

  • These higher Acuity results were delivered using fewer nodes than either Nutanix or VMware, enabling a better transactions-per-dollar ratio for customers who adopt Pivot3. Essentially, more bang for the buck.
  • These results are based on comparable hybrid systems from each vendor.
  • This was done using the minimum configuration for Acuity: three nodes. Acuity scales to 16 all-SSD nodes or 12 hybrid nodes.

To see how Pivot3 stacks up against Nutanix, check out this Performance Brief.

It’s not just database workloads that Pivot3 Acuity excels at. We also test VDI extensively, since it is a common HCI use case, and the results are equally remarkable:

Pivot3 Delivers Higher VDI Density

Using the Login VSI test platform, the industry’s de facto benchmark for VDI user performance and density, Pivot3 doubled per-node VDI density for the most resource-intensive users.

While these results are extraordinary from a pure performance standpoint, the real benefit comes from translating them into commercial terms. After all, organizations are under continued pressure to do more with less, and IT is largely coin-operated in today’s market.

Pivot3 Delivers More Value

Using the DB and VDI performance data above and the commercial street pricing publicly available for these vendors, we looked at the cost to generate these results. With Acuity, it is possible to drive the IT infrastructure cost per desktop down to $150 (using a Knowledge Worker profile as an example), about half the price of Nutanix. So Pivot3’s superior performance translates into more compelling economics that help IT organizations make their budgets go farther.
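As a rough illustration of the math behind a cost-per-desktop figure (the cluster price and desktop count below are hypothetical placeholders for illustration, not Pivot3 pricing; only the ~$150 target comes from our testing):

```python
# Back-of-envelope cost-per-desktop calculation.
# All inputs below are hypothetical placeholders, not actual pricing.

def cost_per_desktop(cluster_cost_usd: float, desktops_supported: int) -> float:
    """Infrastructure cost divided by the number of desktops it supports."""
    return cluster_cost_usd / desktops_supported

# Example: a hypothetical 3-node cluster priced at $90,000
# supporting 600 Knowledge Worker desktops.
print(cost_per_desktop(90_000, 600))  # -> 150.0
```

The point of the exercise is simply that higher per-node density divides a fixed cluster cost across more desktops, which is what drives the dollars-per-desktop number down.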

I’m positive that you have myriad questions on how we achieved the results, what we’ve learned, and what we have planned for the next round of testing.

Allow me to summarize those for you.

What to Know About Pivot3 Acuity

  • Acuity is the first HCI product with a data path architected specifically for NVMe flash storage, able to take full advantage of the features and performance of the latest generation of PCIe-based NVMe SSDs and servers.
  • Acuity ships with pre-defined performance policies that manage multiple performance attributes: IOPS, latency and throughput (not just the IOPS limiters of other vendors) with its advanced QoS capabilities. (Hint: IOPS limiting doesn’t cut it for multiple, diverse consolidated workloads.)
  • The Acuity QoS results in the most efficient, effective use of the NVMe flash as well, optimizing the economics for the customer and ensuring that the most important applications meet their required SLAs.
  • NVMe flash SSDs deliver high IOPS and high throughput at low latency. (Hint: high queue depth is required to support demanding, mixed virtualized application workloads.)
  • Other HCI vendors’ software stacks are optimized for the legacy SATA/SAS HDD/SSD designs of the previous generation of products, and it’s not a simple rewrite to optimize for NVMe.
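To make the QoS distinction above concrete, here is a minimal sketch of the difference between an IOPS-only limiter and a policy that expresses targets across all three attributes. The class, field names, and values are hypothetical illustrations, not Pivot3’s actual policy engine or API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QosPolicy:
    """Hypothetical QoS policy covering all three performance attributes."""
    name: str
    min_iops: Optional[int] = None           # floor to guarantee for the app
    max_iops: Optional[int] = None           # ceiling to cap noisy neighbors
    max_latency_ms: Optional[float] = None   # response-time target
    min_throughput_mbps: Optional[int] = None

# An IOPS-only limiter says nothing about latency or throughput:
iops_limiter = QosPolicy(name="limit-only", max_iops=20_000)

# A multi-attribute policy for a mission-critical database workload:
mission_critical = QosPolicy(
    name="mission-critical",
    min_iops=50_000,
    max_latency_ms=1.0,          # sub-millisecond response-time target
    min_throughput_mbps=500,
)
```

The limiter-only policy leaves latency and throughput unspecified, which is exactly the gap that shows up when diverse workloads are consolidated on the same cluster.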

What to Consider

  • Legacy storage needs a full reset.  NVMe is a game changer and architectures need to be optimized for it.
  • IO is rarely the problem anymore, and managing IO alone isn’t a suitable fix. You MUST take latency and throughput into account as well. The only way to do this is with an advanced, dynamic QoS engine.
  • We need to start thinking in terms of new metrics. “Highest IOPS at sub-millisecond response times” would be a good place to start, given the response times users expect from critical business applications.

We will continue to share performance data as we generate it from our internal testing, from work with third parties, and from customer proof-of-concept (POC) testing. In fact, the analyst firm Enterprise Strategy Group (ESG) just released a Lab Validation Report on Pivot3 Acuity covering the performance capabilities they validated. View that report here.

Download our infographic that details the full scope of our performance testing here. 


