As companies experience the opex and capex benefits of hyperconverged infrastructure (HCI), IT is entering the next phase of hyperconvergence, characterized by deployment across a wider range of application workloads, including mission-critical and multiple, diverse workloads. This is creating increased demand for solutions that offer predictable application performance to guarantee business results, along with the simple deployment and economic advantages expected of HCI.
When preparing to modernize the data center with HCI, one of the capabilities organizations should evaluate is quality of service (QoS). QoS capabilities enable IT to confidently support multiple application workloads, including mission-critical ones, on a single HCI system while ensuring the required performance and data protection for the applications that matter most to the business. QoS addresses one of the biggest performance barriers in highly virtualized environments: resource contention. When multiple virtual machines (VMs) compete for available resources, consistent, predictable performance is difficult to achieve, resulting in end-user dissatisfaction and loss of confidence in IT's ability to provide reliable service.
Conventional QoS is limited
Not all applications carry the same level of importance to a business. For example, meeting performance and data protection SLAs for an organization's customer order database is likely more critical than for an internal inventory database. Traditionally, storage QoS has had limited capabilities: IT could only set static performance minimums and maximums for workloads and use performance monitoring to determine the minimum number of IOPS a VM needs to perform at an acceptable level.
But setting limits and bandwidth usage caps for high-demand VMs isn't always enough, and it is complex to manage. If system performance is available but an application can't access it, the performance you paid for goes unused. To ensure maximum utilization of the system you purchased, QoS must also be able to dynamically prioritize certain workloads over others at points of contention, along with making it easy to set performance targets. Incorporating data protection policies into the QoS architecture also broadens the ability to ensure that SLAs for mission-critical applications are properly met.
A dynamic QoS solution should be able to prioritize instantaneously, automate data movement in real time, and schedule QoS setting changes from the LUN/datastore level down to an individual VM/VMDK. This significantly reduces the time it takes to manage performance and data protection to meet application SLAs, even as those SLAs change.
Dynamic QoS for predictable performance
The only way to truly mitigate the impact of resource contention on critical business applications and maximize utilization of system resources is through QoS with the following dynamic capabilities:
- Policy optimization
Built-in, easily assigned policies define and set performance targets for each workload to manage minimum IOPS, throughput, and maximum latency. Policies can be assigned when a volume or VM is created and updated on the fly as business needs change. By automating policy changes, IT has the necessary agility to support the business as application priorities change.
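As an illustration only (the article does not describe a specific API), a per-workload policy of the kind sketched above might be modeled as a small data structure with performance targets and a priority; all class, field, and policy names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QosPolicy:
    """Hypothetical per-workload QoS policy: a named service level
    carrying the performance targets the text describes."""
    name: str                   # e.g. "Mission-Critical" (illustrative label)
    priority: int               # lower number = higher priority under contention
    min_iops: int               # minimum IOPS target to maintain
    min_throughput_mbps: int    # minimum throughput target
    max_latency_ms: float       # maximum acceptable latency

# Pre-defined service levels a user might assign (values are made up)
MISSION_CRITICAL = QosPolicy("Mission-Critical", 1, 50_000, 800, 1.0)
BUSINESS_CRITICAL = QosPolicy("Business-Critical", 2, 20_000, 400, 5.0)
NON_CRITICAL = QosPolicy("Non-Critical", 3, 1_000, 50, 50.0)

def assign_policy(vm: dict, policy: QosPolicy) -> dict:
    """Attach a policy at creation time; reassignment later is the same
    single metadata update, which is what enables on-the-fly changes."""
    return dict(vm, qos_policy=policy)
```

Because the policy travels with the VM or volume as metadata, changing a workload's service level becomes one assignment rather than hand-retuning static limits.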
- Prioritize what matters most
Achieving service levels for each application workload type is automatically governed by the pre-defined policies the user assigns, so the system knows how to maintain each policy target. In a resource contention scenario, for example, QoS policy 1 (Mission-Critical) is maintained by prioritizing its I/O requests first over Non-Critical workloads and then, if necessary, over Business-Critical workloads.
These same policies and associated service levels also control where data is stored in real time, whether RAM, PCIe/NVMe flash, SSD, or HDD. With system caching and tiering algorithms directly tied to automated policies and prioritized resources, IT is able to ensure that the right data is placed on the appropriate storage medium to deliver on specified performance targets.
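A minimal sketch of how such policy-driven prioritization and placement could work, using the three policy names from the text; the tier mapping and function names are assumptions for illustration, not a vendor implementation:

```python
# Illustrative only: serve queued I/O in policy-priority order under
# contention, and map higher-priority policies to faster media.
PRIORITY = {"Mission-Critical": 1, "Business-Critical": 2, "Non-Critical": 3}
TIER_FOR_PRIORITY = {1: "RAM/NVMe", 2: "SSD", 3: "HDD"}  # assumed mapping

def schedule_io(pending_requests):
    """Order pending I/O so Mission-Critical requests are served before
    Business-Critical, and Business-Critical before Non-Critical."""
    return sorted(pending_requests, key=lambda r: PRIORITY[r["policy"]])

def placement_tier(policy_name):
    """Pick the storage medium a workload's hot data should occupy,
    derived from the same policy that governs its I/O priority."""
    return TIER_FOR_PRIORITY[PRIORITY[policy_name]]
```

The point of the sketch is that one policy object drives both decisions: the scheduler and the tiering logic consult the same priority, so performance targets and data placement cannot drift apart.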
To support more advanced data protection, policies can be defined by specifying snapshot, replication, and retention settings and applying them to volumes or groups of volumes. Just as with performance, mission-critical volumes receive priority for their data protection operations. Scheduling policies enable IT to set pre-defined schedules for both performance and data protection policies. With this automation, IT can support greater agility as application priorities and workloads change.
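The same policy idea extends to protection. A hedged sketch, with made-up cadences, site names, and field names, of what a per-service-level protection policy might look like:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class ProtectionPolicy:
    """Hypothetical data-protection policy: snapshot cadence, replication
    target, and retention, applied to a volume or group of volumes."""
    snapshot_interval_min: int     # how often to snapshot
    replicate_to: Optional[str]    # remote system for replication, if any
    retention_days: int            # how long copies are kept

# Assumed mapping from service level to protection settings (values invented)
PROTECTION = {
    "Mission-Critical": ProtectionPolicy(15, "dr-site-a", 90),
    "Business-Critical": ProtectionPolicy(60, "dr-site-a", 30),
    "Non-Critical": ProtectionPolicy(24 * 60, None, 7),
}

def protection_ops_order(volumes: List[dict]) -> List[dict]:
    """As with performance, run protection operations (snapshots,
    replication) for mission-critical volumes first."""
    rank = {"Mission-Critical": 1, "Business-Critical": 2, "Non-Critical": 3}
    return sorted(volumes, key=lambda v: rank[v["policy"]])
```

Tightening or loosening protection for a class of workloads then means editing one policy entry, rather than rescheduling every volume individually.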
Dynamic QoS in a software-defined data center
As data centers modernize to become more agile, efficient, and economical, policy-based management of software-defined infrastructure is key. Dynamic QoS for HCI will serve as a vital application-centric, policy-based management platform that expands the simplicity and economic benefits HCI delivers. This is accomplished by ensuring SLAs are met for the most important applications that drive the business, while also ensuring maximum utilization of the infrastructure. Businesses are moving quickly into the world of HCI and software-defined data centers, and QoS technology must evolve from static to more dynamic implementations to meet the needs of IT.
Mike Koponen is senior director of marketing at Pivot3.
Sourced from: http://www.datacenterdynamics.com/content-tracks/design-build/hyperconverged-systems-need-dynamic-qos/97468.article