According to Alan Hyde, Vice President and General Manager, Enterprise Group, Hewlett-Packard Enterprise South Pacific, the past few years have seen many dramatic changes in the way we utilise IT.
“Mobile devices and the Cloud have altered both how we interact with enterprise IT systems and how we utilise back-end IT services. The resulting consumerisation of IT and the explosive growth of pay-as-you-go cloud services have led to users expecting, and demanding, the same simplicity and speed when deploying new applications, workloads, and users internally. Additionally, the emergence of Big Data analytics and the data deluge it brings have put further pressure on IT resources,” he said.
As a result, businesses and service providers often find themselves in the difficult position of striking a balance between over- and under-provisioning their internal IT resources. It can be hard to determine whether the capacity on hand is adequate to meet user service level agreements (SLAs), or even to understand how much capacity – server, storage, and networking – is actually available.
What’s been tried
These challenges are often exacerbated by ageing technology that needs upgrading or migrating to new platforms. The recent end of support for Windows Server 2003, for example, leaves those servers exposed to new threats.
Two traditional approaches have been used to address these issues. First, some companies over-provision on-premises infrastructure to handle peaks and growth, which can leave expensive capital equipment sitting idle most of the time. Others turn to public cloud services such as Microsoft Azure to handle spikes in demand, run dev-test projects, provide overflow capacity, or host workloads entirely. But governance, security, or privacy requirements may demand that certain workloads remain behind the firewall and on premises.
Flexible capacity – best of both worlds
To address these challenges, companies are increasingly looking at “pay-as-you-go” flexible capacity solutions that enable IT to scale quickly, using available buffer capacity to meet growth needs without the usual long procurement process. This approach is designed to offer a range of benefits in process, technology, and finance. Most importantly:
Capacity on demand
Companies first undertake a joint assessment of current and anticipated demand with their technology vendor. The combination of servers, storage, networking, and software is then installed based on current demand plus a buffer to support projected growth or peak demand. If a company consumes its buffer, additional capacity can be added through the change-management process to meet demand beyond the initial projections.
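The buffer mechanism described above can be sketched in a few lines. This is an illustrative example only: the function name, the 20% buffer fraction, and the figures are hypothetical, not taken from any vendor's actual tooling or contract terms.

```python
# Hypothetical sketch of tracking usage against a capacity buffer.
# The installed base is sized as projected demand plus a buffer; once
# usage eats into the buffer, a change request would add more capacity.

def needs_more_capacity(used: float, installed: float,
                        buffer_fraction: float = 0.2) -> bool:
    """Return True once usage exceeds the projected (non-buffer) portion."""
    projected = installed * (1 - buffer_fraction)
    return used > projected

# 85 TB used against 100 TB installed (20 TB of which is buffer)
print(needs_more_capacity(85, 100))   # True  -> time to expand
print(needs_more_capacity(70, 100))   # False -> still within projection
```

Triggering expansion when the buffer is first touched, rather than when it is exhausted, leaves procurement lead time before capacity actually runs out.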
With a flexible capacity model, businesses pay only for what they use – per GB, network port, virtual machine, or server instance actually consumed. Monthly service-based billing requires no up-front expenditure, freeing cash for other business needs.
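As a rough illustration of how metered, per-unit billing of this kind might be calculated, consider the sketch below. The resource names and unit rates are invented for the example and do not reflect any vendor's actual pricing.

```python
# Hypothetical metered "pay-as-you-go" billing calculation.
# Rates and resource names are illustrative only.

UNIT_RATES = {
    "storage_gb": 0.05,    # per GB of storage actually used
    "network_port": 4.00,  # per active network port
    "vm_instance": 30.00,  # per running virtual machine
}

def monthly_bill(usage: dict) -> float:
    """Sum rate * quantity for each metered resource in `usage`."""
    return sum(UNIT_RATES[resource] * qty for resource, qty in usage.items())

bill = monthly_bill({"storage_gb": 2000, "network_port": 8, "vm_instance": 12})
print(f"Monthly charge: ${bill:.2f}")  # Monthly charge: $492.00
```

The point of the model is visible in the arithmetic: the charge tracks consumption month to month, rather than the full installed (base plus buffer) capacity.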
No forklift upgrade
If a company already has a substantial IT infrastructure, a good “pay-as-you-go” model can incorporate existing supported multivendor systems and provide a single pane of glass for managing capacity across all eligible resources – including on-premises and cloud assets.
For service providers, the flexible approach can match cash outflows to cash receipts, offers the flexibility to grow or shrink as the business climate evolves, and provides enterprise-class support for the IT environment.
Whether your enterprise wants a hybrid IT model, is looking for financial predictability in IT expenditure, or is turning IT into the services broker for the business, a flexible capacity approach can help optimise your company’s resources to better drive business growth.