Two weeks ago, in my previous blog, I invited you to consider ways in which you could initiate a “Save to Invest” program for your data center – that is, how you can save money from your current data center spend in order to re-invest it in currently un- or under-funded areas of your data center. Thanks to those of you who commented on Part 1 – good points were raised!
Last time, I discussed my first 3 tips, as follows:
(1) Identify, Turn Off and Remove Idle Servers
(2) Identify Un-used Enterprise Software Applications: Reduce Your Software Costs
(3) Get Rid of Dead Weight – Execute a Server Refresh
Let’s now discuss two additional tips, which in many cases can deliver even larger financial savings:
(4) Optimize your Software Licensing, and
(5) Avoid un-budgeted spend – Critical if you have an Unlimited License Agreement (ULA)
(4) Optimize your Software Licensing
In today’s complex world of enterprise software licensing, you can be forgiven for misunderstanding some of the licensing implications, say when you migrate an app to a more powerful server in order to gain a performance increase. It’s a classic case of the business wanting performance and IT responding in a way that compromises the license agreement: IT “beefs up” the box, creates more virtual capacity or increases the performance of the cluster, and suddenly the database or app that was licensed for 16 cores is installed on a 900-core cluster and the bill is north of $10M! This is a surprisingly common occurrence, especially when outsourcing part or all of your data center, or moving to the cloud. It is therefore critical that you are able to detect and remediate under-licensing, and avoid over-licensing. A discovery capability that can find application components across virtual machines and multi-core server architectures is paramount.
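To make this concrete, here is a minimal sketch of how discovered deployment data could be compared against core-based entitlements to flag under- and over-licensing. The product names, field names and numbers are invented for illustration; real licensing rules (core factors, sub-capacity terms and so on) vary by vendor and contract.

```python
# Illustrative only: compare discovered deployments against core-based entitlements.
# All data below is made up; real core-licensing metrics differ per vendor/contract.

deployments = [
    # product, cluster it was discovered on, physical cores in that cluster
    {"product": "Acme DB Enterprise", "cluster": "prod-cluster-01", "physical_cores": 900},
    {"product": "Acme App Server",    "cluster": "web-cluster-02",  "physical_cores": 64},
]

entitlements = {
    # product -> cores you are actually licensed for
    "Acme DB Enterprise": 16,
    "Acme App Server": 128,
}

for d in deployments:
    licensed = entitlements.get(d["product"], 0)
    gap = d["physical_cores"] - licensed
    if gap > 0:
        print(f"UNDER-LICENSED: {d['product']} on {d['cluster']} runs on "
              f"{d['physical_cores']} cores but is licensed for {licensed} ({gap} cores exposed)")
    elif gap < 0:
        print(f"OVER-LICENSED: {d['product']} on {d['cluster']} is licensed for "
              f"{licensed} cores but deployed on only {d['physical_cores']} ({-gap} cores of shelfware)")
    else:
        print(f"OK: {d['product']} on {d['cluster']}")
```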
(5) Avoid Un-budgeted Spend – Critical if you have an Unlimited License Agreement (ULA)
A software audit is a vendor-initiated event in which the vendor checks in detail which of their software components are deployed across your company. Software audits – or rather the “fall-out” from a vendor software audit – can result in un-budgeted enterprise software spend, especially if you have an Unlimited License Agreement (ULA) (see here for a good discussion of the issues surrounding ULAs). How, then, do you minimize such surprises?
The other week, I attended a Gartner webinar on how to prepare for and deal with software audits from enterprise software vendors. Gartner reported that their “clients continue to report increasingly frequent software license audits, resulting in undefended, unbudgeted and unmanaged costs”. [Source: Gartner Webinar “We Received a Software Audit Letter: What Do We Do Now?”, V. Barber, September 2015]
With the proliferation of Unlimited License Agreements (ULAs), it’s not uncommon for such audits to “discover” much broader deployment of the vendor’s software than you anticipated, or to turn up examples of “under-licensing” (a compliance/contractual issue) or “over-licensing”. In many cases, ULAs mean that companies feel they don’t have to manage software usage rigorously, or even worry about where software is deployed! And given the complexity of ULAs, it takes a software licensing expert to understand the implications of deploying an app on, for example, multi-core servers. When the renegotiation point of a ULA arrives, companies suddenly realise they don’t know what they have, where it is deployed, who is using it, or on how many processor cores it is actually running. Negotiations don’t go well, so they sign a new ULA for more money – leading to unbudgeted “true-up” charges, sometimes spiralling into the hundreds of thousands or millions of euros, dollars, pounds or yen. Invariably unbudgeted, such charges quickly attract the wrong type of attention from your CFO, or at best make a serious dent in your available budget.
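As a rough illustration of the kind of picture you want before any renegotiation or certification, here is a minimal sketch that rolls up discovered installs by product, so you can see what is deployed, where, and on how many cores. The data and field names are invented; in practice this would be fed by your discovery tooling.

```python
# Illustrative only: summarise discovered installs per product ahead of a ULA
# renegotiation or certification. All data below is made up for the example.

from collections import defaultdict

discovered_installs = [
    {"product": "Acme DB Enterprise", "host": "db-vm-017",  "site": "London",    "cores": 32},
    {"product": "Acme DB Enterprise", "host": "db-vm-042",  "site": "Frankfurt", "cores": 64},
    {"product": "Acme Middleware",    "host": "app-vm-003", "site": "London",    "cores": 16},
]

summary = defaultdict(lambda: {"installs": 0, "cores": 0, "sites": set()})
for install in discovered_installs:
    entry = summary[install["product"]]
    entry["installs"] += 1
    entry["cores"] += install["cores"]
    entry["sites"].add(install["site"])

for product, entry in summary.items():
    print(f"{product}: {entry['installs']} installs, {entry['cores']} cores, "
          f"sites: {', '.join(sorted(entry['sites']))}")
```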
How To Execute
It therefore makes sense to engage expert help – well before your audit date – to understand your exposure and put in place the cost-reduction exercises discussed above. Core to such an exercise are deep application discovery capabilities, together with software licensing and contract expertise, so that you fully understand any exposure and can put mitigation programs in place well in advance of any vendor software audit. And for many of you, too busy with initiative after initiative, having access to the attendant data center expertise will be incredibly useful in translating the above list into real savings.
This is where Cisco Services can help – get in touch via my Twitter ID below and I can point you to the right people.
Quite often in a server refresh, workloads are moved on a like-for-like basis, without taking into consideration the relative performance capabilities of the new machines. In a VDI cluster, where CPU contention is the key constraint, capacity is often managed using vCPU ratios. As a result, based on demand profiles, it’s important to make sure instances are sized appropriately – a server refresh could allow the same (or better) performance on fewer vCPUs, as the sketch below illustrates.
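As a rough sketch (with invented numbers, and the simplifying assumption that delivered performance scales with per-core performance), the sizing sums might look like this:

```python
# Illustrative only: can the same VDI workload run on fewer vCPUs after a refresh?
# Assumes demand scales with per-core performance; all numbers are made up.

import math

current_vcpus_allocated = 2048   # total vCPUs allocated across the cluster today
per_core_uplift = 1.4            # assumed per-core performance of new servers vs old
target_vcpu_per_pcore = 4.0      # target vCPU:pCore overcommit ratio
new_cores_per_host = 48          # physical cores per new host

# If the new cores are ~1.4x faster, the same delivered performance needs fewer vCPUs
required_vcpus = current_vcpus_allocated / per_core_uplift
required_pcores = required_vcpus / target_vcpu_per_pcore
hosts_needed = math.ceil(required_pcores / new_cores_per_host)

print(f"vCPUs needed on new hardware: {required_vcpus:.0f}")
print(f"Physical cores needed: {required_pcores:.0f} ({hosts_needed} hosts)")
```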
Software licensing is a good one – some DB vendors have become wise to customers reducing core counts and now license based on the core count of the physical machine. In those cases, consolidating further or reducing vCPUs doesn’t actually reduce costs – in fact, bringing in new hardware can increase them.
Another thing to consider is planning for future growth and modelling those scenarios. Quite often these projects are done on an ad hoc basis, then it’s back to BAU and, after a year or two, another expensive and time-consuming exercise in consolidation and rationalisation. Understanding the rate of growth in workloads (across vCPU allocation and resource utilisation) and projecting that rate forward gives visibility into a cluster’s time to live – providing early warning of when new hardware, storage or network bandwidth will be required to support that growth. A simple projection along these lines is sketched below.
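As a back-of-the-envelope sketch (assuming compounding monthly growth and a fixed utilisation ceiling – both figures invented here):

```python
# Illustrative only: project "cluster time to live" from a measured growth rate.
# Assumes compounding monthly growth and a fixed capacity ceiling; numbers are made up.

import math

current_utilisation = 0.55    # fraction of cluster CPU capacity used today
monthly_growth_rate = 0.03    # 3% compound growth in demand per month
capacity_ceiling = 0.80       # don't plan to run the cluster hotter than 80%

if monthly_growth_rate <= 0:
    print("No growth in demand: capacity is not a constraint on this trend.")
else:
    months_left = math.log(capacity_ceiling / current_utilisation) / math.log(1 + monthly_growth_rate)
    print(f"Estimated cluster time to live: {months_left:.1f} months "
          f"before hitting the {capacity_ceiling:.0%} ceiling")
```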
Finally, it’s key that IT and business work together – projecting expected demand volumes onto supporting infrastructure to determine if capacity is sufficient.
All these approaches lead to smarter planned spending while making sure business goals can still be met.
All good points, Scott – thanks!
Indeed, deep application discovery capabilities are incredibly important for any mitigation program. Thanks for posting!