Are You Ready For z Systems Workload Pricing for Cloud (zWPC) for z/OS?

Recently IBM announced the z Systems Workload Pricing for Cloud (zWPC) for z/OS pricing mechanism, which can minimize the impact of new Public Cloud workload transactions on Sub-Capacity license charges.  Such benefits will be delivered where higher Public Cloud workload transaction volumes may cause a spike in machine utilization.  Of course, if this looks familiar and you have that feeling of déjà vu, this is a very similar mechanism to Mobile Workload Pricing (MWP)…

Put simply, zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms, for the usual MLC software suspects, namely z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public Cloud workloads are defined as transactions processed by named Public Cloud applications, identified as originating from a recognized Public Cloud offering, including but not limited to Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, et al.

As per MWP, SCRT calculates the R4HA for Public Cloud transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The software program values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  Therefore SCRT determines the billable MSU peak for a given MLC software program on a CPC using the adjusted MSU values.  As per MWP, this will only be of benefit if the Public Cloud originated transactions generate a spike in the current R4HA.
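
To illustrate the arithmetic, the following is a minimal sketch using entirely hypothetical hourly MSU figures; it is not the SCRT implementation, merely the 60% subtraction and adjusted peak derivation described above.

```python
# Minimal sketch of the zWPC adjustment described above (hypothetical figures,
# not the SCRT implementation).  For each reporting hour and LPAR, 60% of the
# Public Cloud MSU is subtracted from the Sub-Capacity eligible MSU; hourly
# values are then aggregated across LPARs and the monthly peak of those taken.

hourly_usage = {
    # reporting hour: {LPAR: (sub_capacity_eligible_msu, public_cloud_msu)}
    "2016-07-01 09:00": {"PROD1": (800, 200), "PROD2": (400, 50)},
    "2016-07-01 10:00": {"PROD1": (900, 350), "PROD2": (450, 100)},
}

def adjusted_hourly_msu(lpars):
    """Aggregate the zWPC-adjusted MSU for one reporting hour."""
    return sum(total - 0.6 * cloud for total, cloud in lpars.values())

adjusted = {hour: adjusted_hourly_msu(lpars) for hour, lpars in hourly_usage.items()}
billable_peak = max(adjusted.values())  # peak of the adjusted hourly values
print(adjusted, billable_peak)
```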

One of the major challenges for implementing MWP was identifying those transactions eligible for consideration.  Very quickly IBM identified this challenge and offered a WorkLoad Manager (WLM) based solution, to simplify reporting for all concerned.  This WLM SPE (OA47042) introduced a new transaction level attribute in WLM classification, allowing for identification of mobile transactions and associated processor consumption.  These Reporting Attributes were classified as NONE, MOBILE, CATEGORYA and CATEGORYB.  Obviously IBM made allowances for future workload classifications, hence it would seem Public Cloud will supplement Mobile transactions.
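
As a simple illustration of how such classification data might be consumed downstream, the sketch below sums hypothetical transaction CPU consumption by reporting attribute; the attribute names follow the WLM values above, but the record layout and figures are invented for illustration.

```python
# Hypothetical illustration only: summing CPU consumption by the WLM reporting
# attribute assigned at transaction classification (NONE, MOBILE, CATEGORYA,
# CATEGORYB).  The record structure is invented; real data would come from
# SMF/RMF based reporting.

from collections import defaultdict

transactions = [
    {"service_class": "CICSHIGH", "report_attr": "MOBILE",    "cpu_msu": 0.4},
    {"service_class": "CICSHIGH", "report_attr": "CATEGORYA", "cpu_msu": 0.7},
    {"service_class": "BATCHLOW", "report_attr": "NONE",      "cpu_msu": 1.2},
]

usage_by_attr = defaultdict(float)
for txn in transactions:
    usage_by_attr[txn["report_attr"]] += txn["cpu_msu"]

print(dict(usage_by_attr))  # e.g. {'MOBILE': 0.4, 'CATEGORYA': 0.7, 'NONE': 1.2}
```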

In a previous z/OS Workload Manager (WLM): Balancing Cost & Performance blog post, we considered the merits of WLM for optimizing z/OS software costs, while maintaining optimal performance.  One must draw one’s own conclusions, but there seemed to be a strong case for WLM reporting to be included in the z/OS MLC Cost Manager toolkit.  The introduction of zWPC, analogous to MWP in that reporting can be simplified with supplied and supported WLM function, indicates that intelligent and proactive WLM reporting makes sense.  Certainly for 3rd party Soft-Capping solutions, the ability to identify MWP and zWPC eligible transactions in real-time and proactively implement MSU optimization activities seems mandatory.

The Workload X-Ray (WLXR) solution from zIT Consulting delivers this WLM reporting function, seamlessly integrating with their zDynaCap and zPrice Manager MSU optimization solutions.  Of course, there is always the possibility to create your own bespoke reports to extract the relevant information from SMF records and subsystem diagnostic data, for input to the SCRT process.  However, such a home-grown process will only work on a monthly reporting basis and will not integrate with any Soft-Capping MSU management, which will ultimately control z/OS MLC costs.

In conclusion, from a big picture viewpoint, in the last 2 years or so, IBM have introduced several new Sub-Capacity pricing mechanisms to help System z Mainframe users optimize z/OS MLC costs, namely Mobile Workload Pricing (MWP), Country Multiplex Pricing (CMP) and now z Systems Workload Pricing for Cloud (zWPC).  In theory, at least one of these new pricing mechanisms should deliver benefit to the committed System z user, deploying this server for strategic and Mission Critical workloads.  With the undoubted strategic importance associated with Analytics, Blockchain, Cloud, DevOps, Mobile, Social, et al, the landscape for System z workloads is rapidly evolving and potentially impacting those sacrosanct legacy Mission Critical workloads.  Seemingly the realm of possibility exists that Cloud and Mobile originated transactions will dominate access to System z Mainframe System Of Record (SOR) data repositories, which generates a requirement to optimize associated MLC costs accordingly.  Of course, for some System z users, such Cloud and Mobile access might not be on today’s to-do list, but inevitably it’s on the horizon, and so why not implement the instrumentation ability ASAP!

The IBM Mainframe: A Several Year Hardware Refresh Cycle?

Typically a new generation of IBM Mainframe server is released every three years or so, along with a number of function and performance upgrades.  In 2003, IBM released their Mainframe Charter that included a statement:

IBM lowered MSU values incorporated in the z990 microcode by approximately 10 percent, resulting in IBM software savings for IBM zSeries software products with MSU-based pricing.  These reduced MSUs do not indicate a change in machine performance. Superior performance and technology within the z990 has allowed IBM to provide improved software prices for key IBM zSeries operating system and middleware software products.

This terminology was dubbed by some as the “Technology Dividend”, where put simply, when upgrading IBM Mainframe servers, users would benefit from a ~10%+ software price versus performance improvement.  However, the z10 server model was the last IBM Mainframe series that benefitted from this hardware CPU chip related performance gain.  Subsequent IBM Mainframe models have compensated for this slowing of hardware performance increase via the AWLC and AEWLC pricing models.  Therefore, unless your business has an absolute need for the “latest and greatest” IBM Mainframe server hardware, the realm of possibility exists that your business can extend the useful and cost efficient lifetime of your IBM Mainframe asset beyond the typical three year period…

As we all know, with every IT platform, there is a strong correlation between server hardware and associated Operating System.  Arguably the IBM Mainframe server has the best compatibility attribute, where there are many server hardware and Operating System interoperability scenarios.  A recent Statement Of Direction (SOD) for z/OS states:

Going forward, IBM intends to make new z/OS and z/OSMF releases available approximately every two years. Such a schedule would be intended to provide you with sufficient time to plan for new releases and to leverage them for the most business value. In addition, beginning with z/OS Version 2, IBM plans to provide five years of z/OS support, with three years of optional, fee-based extended service (5+3) as part of the new release cadence. Beginning with z/OSMF Version 2, IBM also plans to provide five years of z/OSMF support. However, similar to z/OSMF Version 1, optional extended service is not planned to be available for z/OSMF Version 2.

In addition, in z/OS V2.1, IBM plans to further leverage enhancements in the current IBM mainframe servers and storage control units. z/OS V2.1 is planned to IPL only on System z9 and later servers. Also, z/OS Version 2 is planned to require 3990 Model 3 (3990-3), 3990 Model 6 (3990-6), and later storage control units.

In an attempt to simplify this scenario, in theory an IBM Mainframe customer could benefit from 5 years of z/OS Version 2 support, with an IBM z9 or newer server.  In addition, this support could be extended for a further 3 years, for an extended service fee.  Therefore, from a software support perspective, there are no tangible cost considerations for extending the asset life of an IBM Mainframe from a 3 to a 5 year cycle.

We must then consider the End of Marketing (EOM), also known as Withdrawal From Marketing (WDFM), and End Of Service (EOS) life cycles for the IBM Mainframe Server (Hardware).  Once again, when compared to other non-Mainframe platforms, the IBM Mainframe Server demonstrates an arguably unparalleled support cycle, where over the last 20 years or more, an average of 4.2 years of sales and service applies, supplemented by an average of 7.1 years of additional service.  Once again, as per z/OS Operating System support, the realm of possibility exists for extending the typical 3 year hardware refresh cycle to 5 years or longer.

When considering IBM Mainframe server hardware provision and support, there is one subtle difference that is not necessarily obvious, especially for those organizations that refresh their IBM Mainframe server every 3 years or so.  Clearly and stating the obvious, only IBM or a highly certified IBM System z Business Partner can supply a latest generation IBM System z server or field upgrade option.  Conversely, there are a higher number of certified organizations that can provide IBM Mainframe hardware support services, allowing for a competitive and healthy 3rd party market for these services.  Additionally these companies also maintain inventories of equipment and have access to Microcode and Firmware upgrades that offer a possibility for performing field upgrades of EOM/WDFM servers.  One such company with longevity and a good track record of providing these value-added IBM Mainframe services from the United Kingdom is Blue Chip Customer Engineering.  As per any other competitive market place, arguably each and every IBM Mainframe user might consider obtaining a comparative hardware support services quotation for their business, whether they’re using the current latest and greatest IBM System z server model, or a slightly older (E.g. 4-8+ Years) model.

In conclusion, there are always options for the cost savvy business to reduce costs.  In the IBM Mainframe environment, soft capping via the standard IBM Defined Capacity (DC) or Group Capacity Limit (GCL) function is an option, intelligent soft capping via a 3rd party product such as zDynaCap might be another, while leveraging the latest IBM Absolute Capping feature also applies.  Moreover, exploring the 3rd party hardware support services market might prove to be a very simple commercial exercise that could decrease IBM Mainframe TCO, while extending asset life accordingly.

z/OS Workload Manager (WLM): Balancing Cost & Performance

A sophisticated mechanism is required to orchestrate the allocation of System z resources (E.g. CPU, Memory, I/O) to multiple z/OS workloads, requiring differing business processing priorities. Put very simply, a mechanism is required to translate business processing requirements (I.E. SLA) into an automated and equitable z/OS performance manager. Such a mechanism will safeguard the highest possible throughput, while delivering the best possible system responsiveness. Ideally, such a mechanism will assist in delivering this optimal performance, for the lowest cost; for z/OS, primarily Workload License Charges (WLC) related. Of course, the Workload Manager (WLM) z/OS Operating System component delivers this functionality.

A rhetorical question for all z/OS Performance Managers and z/OS MLC Cost Managers would be “how much importance does your organization place on WLM and how proactively do you manage this seemingly pivotal z/OS component”? In essence, this seems like a ridiculous question, yet there is evidence that suggests many organizations, both customer and ISV alike, don’t necessarily consider WLM to be a fundamental or high priority performance management discipline. Let’s consider several reasons why WLM is a fundamental component in balancing cost and performance for each and every z/OS environment:

  • CPU (MSU) Resource Capping: Whatever the capping method (I.E. Absolute, Hard, Soft), WLM is a controlling mechanism, typically in conjunction with PR/SM, determining when capping is initiated, how it is managed and when it is terminated. Therefore from a dispassionate viewpoint, any 3rd party ISV product that performs MSU optimization via soft capping mechanisms should ideally consider the same CPU (E.g. SMF Type 70, 72, 99) instrumentation data as WLM. Some solutions don’t offer this granularity (E.g. AutoSoftCapping, iCap).
  • MLC R4HA Cost Management: WLM is the fundamental mechanism for controlling this #1 System z software TCO component; namely WLM collects CPU MSU resource usage every 5 Minutes, retaining 48 consecutive samples, commonly known as the Rolling 4 Hour Average (R4HA). In an ideal world, an optimally managed workload that generates a “valid monthly peak”, will fully utilize this “already paid for” available CPU MSU resource for the remainder of the MLC eligible month (I.E. Start of the 2nd day in a calendar month, to the end of the 1st day in the next calendar month). More recently, Country Multiplex Pricing (CMP) allows an organization to move workloads between System z server (I.E. CPC) structures, without cost consideration for cumulative R4HA peaks. Similarly, Mobile Workload Pricing (MWP) reporting will be simplified with WLM service definitions in z/OS 2.2. Therefore it seems prudent that real-time WLM management, both in terms of real-time reporting and pro-active decision making, makes sense.
  • System z Server CPU Management: As System z server CPU chips evolve (E.g. CPU Chip Cache Hierarchy and Relative Nest Intensity), there are complementary changes to the z/OS Operating System management components. For example, HiperDispatch Mode delivers CPU resource usage benefit, considering CPU chip cache resources, intelligently allocating workload to as few logical processors as possible. It therefore follows that prioritization of workloads via WLM policy definitions becomes increasingly important. In this instance one might consider that CPU MF (SMF Type 113) and WLM Topology (SMF Type 99) are complementary reporting techniques for System z server design and management.

Since its announcement in September 1994 (I.E. MVS/ESA Version 5), WLM has evolved to become a fully-rounded and highly capable z/OS System Resources Manager (SRM), simply translating business prioritization policies into dynamic function, optimizing System z CPU, Memory and I/O resources. More recently, WLM continues to simplify the management of CPU chip cache hierarchy resources, while reporting abilities gain in strength, with topology reporting and the promise of simplified MWP reporting. Moreover, WLM resource management becomes more granular and seemingly the realm of possibility exists to “micro manage” System z performance, as and if required. Conversely, WLM provides the opportunity to simplify System z performance management, with intelligent workload differentiation (I.E. Subsystem Enclave, Batch, JES, USS, et al).

Quite simply, IBM are providing the instrumentation and tools for the 21st Century System z Performance and Software Cost Subject Matter Expert (SME) to deliver optimal performance for minimal cost. However, it is incumbent upon each and every System z user to optimize software TCO, proactively implementing new processes and leveraging System z functions accordingly.

Returning to that earlier rhetorical question about the importance of WLM; seemingly its importance is without doubt, primarily because of its ability to instrument and manage increasingly cache rich System z CPU chips and its fundamental role in controlling CPU MSU resource, vis-à-vis the R4HA.

Although IBM will provide the System z user with function to optimize system performance and cost, for obvious commercial reasons IBM will not reduce the base cost of System z MLC software. However, recent MLC pricing announcements, namely Country Multiplex Pricing (CMP), Mobile Workload Pricing (MWP) and Collocated Application Pricing (zCAP) provide tangible options to reduce System z MLC TCO. Therefore the System z user might need to consider how they can access real-time WLM performance metrics, combining this instrumentation data with function to intelligently optimize CPU MSU resource, managing the R4HA accordingly.

Workload X-Ray (WLXR) from zIT Consulting simplifies WLM performance reporting, enabling users to drill down into the root cause of performance variances in a very fast and easy way. WLXR assists in root cause problem determination by zooming in, starting from a high level overview, going right down to detailed Service Class performance information, such as the Performance Index (PI), showing potential bottleneck situations during peak time. Any system overhead considerations are limited, as WLXR delivers meaningful real time information on a “need to know” basis.

A fundamental design objective for WLXR is data reduction, only delivering the important information required for timely and professional workload management; straight to the point information instead of data overload, sometimes from a plethora of data sources (E.g. SMF, System Monitors, et al). WLXR incorporates the following easy-to-use functions:

  • Simplified Data Collection & Storage: Minimal system overhead TCP/IP based agents periodically (E.g. 5, 15, 60 Minutes) collect CPU (Type 70) and WLM (Type 72) data. Performance data is stored centrally in near real-time, building a historical repository with intelligent analytics for meaningful information presentation.
  • Intelligent GUI Based Information Presentation: Meaningful decision based reports and graphs detailing CPU (E.g. MSU, R4HA, Weight) and WLM (E.g. Service Class, Performance Index, Response Time, Transaction Workflow) resource usage. A drill-down design provides a granularity of data presentation, for Management Summary to 3rd Level Technical Diagnostics use.
  • Corporate Identity Branding: A modular template design, allowing for easy corporate identity branding, with flexibility to easily add additional reports, as and if required.

Without doubt, WLM is a significant z/OS System Resources Management function, simplifying the translation of business workload requirements (I.E. Service Level Agreement) into timely and proactive allocation of major System z hardware resources (I.E. CPU, Memory, I/O). This management of System z resources has been forever thus for 20+ years, while WLM has always offered “software cost control” functionality, working with the various and evolving CPU capping techniques. What might not be so obvious is that there is a WLM orientated price versus performance correlation, which has become more evident in the last 5 years or so. Whether Absolute Capping, HiperDispatch, Mobile Workload Pricing, Country Multiplex Pricing or evolving Soft Capping techniques, the need for System z users to integrate z/OS MLC pricing considerations alongside WLM performance based management is evident.

Historically there was not a clear and identified need for a z/OS Performance/Capacity Manager to consider MLC costs in their System z server designs. However, there is a clear and present danger that this historic modus operandi continues and there will only be one financial winner, namely IBM, with unnecessarily high MLC charges. Each and every System z user, whether large or small, can safeguard the longevity of their IBM Mainframe platform by recognizing and deploying proactive and current System z MLC cost management processes.

All too often it seems that capping can be envisaged as punitive, degrading system performance to reduce System z MLC costs. Such a notion needs to be consigned to history, with a focussed perspective on MSU optimization, where the valuable and granular MSU resource is allocated to the workload that requires such CPU resource, with near real-time performance profiling. If we perceive MSU optimization to be R4HA based and that IBM are increasing WLM function to assist this objective, CPU capping can be a benefit that does not adversely impact performance. As previously stated, once a valid R4HA peak has occurred, that high MSU watermark is available for the remainder of the MLC billing period. Similarly at a more granular level, once a workload has peaked and its MSU usage declines, the available MSU can be redirected to other workloads. With the introduction of Country Multiplex Pricing, System z users no longer need to concern themselves about creating a higher R4HA peak, when moving workloads between System z servers.

Quite simply, from the two most important perspectives, performance and cost optimization, WLM provides the majority of functionality to assist System z users get the best performance for the lowest cost. Analytics based products like Workload X-Ray (WLXR) assist this endeavour, analysing WLM data in near real-time from a performance and MLC cost perspective. It therefore follows that if this important information is also available for sophisticated MSU optimization solutions, which consider WLM performance (E.g. zDynaCap, zPrice Manager), then proactive performance and cost management follows. It’s hard to envisage how a fully-rounded MSU optimization decision can be implemented in near real-time, from an MSU optimization solution that does not consider WLM performance metrics…

System z MLC Pricing Increases: Look After The Pennies…

Recently IBM announced ~4% price increases in z/OS Monthly License Charges (MLC) for selected Operating System and Middleware software programs and associated features. Specifically, price increases will apply to the VWLC, AWLC, EWLC, AEWLC, PSLC, FWLC and TWLC pricing metrics. Notably, SDSF price increases will be ~20%, with Advanced Function Printing (AFP) product price increases of ~13-24%. In a global economy where inflation rates for the USA and Western Europe are close to 0%, one must draw one’s own conclusions accordingly. Let’s not forget that product version changes typically have an associated price increase. From a contractual viewpoint, IBM only have to provide 90 days advance notice for such price changes; in this instance, IBM provided 150+ days advance notice.

Price increases are inevitable and as always, it’s better to be proactive as opposed to reactive to such changes. As always, the old proverbs always make good sense and in this instance, “look after the Pennies and the Pounds will look after themselves”! This periodic IBM price increase is inevitable, but is not the underlying issue for controlling System z software costs. For many years, since 1994 to be precise, when IBM introduced Parallel Sysplex License Charges (PSLC), the need for IBM Mainframe users to minimize MSU usage has been of high if not critical importance. Nothing has changed in this 20+ year period and even though IBM might have introduced Sub-Capacity and specialty engines to minimize chargeable MSU usage, has each and every System z user optimized their MSU usage? Ideally this would not be a rhetorical question, rather being a “Golden Rule”, where despite organic CPU capacity increases of ~10% per annum, a System z environment could maintain near static IBM MLC software costs.

I have written several blog entries and presented on this subject matter over the years.

The simple bottom line is that System z MLC software accounts for ~20-35% of the overall System z TCO, typically being the #1 expenditure item. For that reason alone, it’s incumbent upon each and every System z user to safeguard that they have the technical and commercial skills in place to manage this cost item, not as an afterthought, but built into each and every System z process, from application design through to that often neglected afterthought, application tuning.

Many System z organizations might try to differentiate between the nuances of System and Application tuning, but such a “not my problem” type attitude is not acceptable and will impose a significant financial burden on each and every organization.

A dispassionate and pragmatic approach is required for optimizing System z CPU usage. In this timeframe, let’s examine the ~20% SDSF price increase. IBM will quite rightly state that in conjunction with their z/OS 2.2 release, there are significant SDSF product function advancements, including zIIP offload, REXX interoperability and increased information delivery. However, are such function improvements over and above the norm, or should they be expected as Business As Usual (BAU) product improvements, included in the Service & Support (S&S) or Monthly License Charges (MLC) already paid for the software?

In October 2013 I wrote a blog entry; Mainframe ISV Software: Is Continuous Product Improvement Always Evident? The underlying message was that an ISV should deliver the best product they can, for each and every release, without necessarily increasing software costs. In this particular instance, the product was an SDSF equivalent, namely (E)JES, which many years ago delivered all of the function incorporated in SDSF for z/OS 2.2, but for a fraction of the cost…

As of 1 November 2015, IBM will start billing cycles for Country Multiplex Pricing (CMP), which requires the October 2015 version of SCRT, namely V23R10. A Multiplex is defined as a collection of all System z servers in one country, measured as one System z server for software sub-capacity reporting. Sub-Capacity program utilization peaks across the Multiplex will be measured, as opposed to separate peaks by System z servers. CMP also provides the flexibility to move and run workloads anywhere with the elimination of Sysplex aggregation pricing rules.

Migrating to CMP is focussed on CPU capacity growth and flexibility going forward. Therefore System z users should not expect price reductions for their existing workloads upon CMP deployment. Indeed there are CMP deployment considerations. A CMP MSU baseline (base) needs to be established, where this MSU Base and associated MLC Base Factor is established for each sub-capacity MLC product and each applicable feature code. These MSU and MLC bases represent the previous 3 Month averages reported by SCRT before commencing CMP. Quite simply, to gain the most from CMP, the System z user must safeguard that their R4HA for each and every MLC product is optimized before setting the CMP baseline, otherwise CMP related cost savings going forward are likely to be negligible.
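
As a simple illustration, the sketch below derives a hypothetical CMP MSU Base per sub-capacity MLC product as the average of the previous three monthly peaks reported by SCRT, as described above; the products and figures are illustrative only.

```python
# Minimal sketch of establishing a CMP MSU Base per sub-capacity MLC product,
# assuming (as described above) that the base is the average of the previous
# three monthly peaks reported by SCRT.  Figures are hypothetical.

scrt_monthly_peaks_msu = {
    "z/OS": [1450, 1500, 1480],
    "CICS": [900, 950, 940],
    "DB2":  [1100, 1080, 1120],
}

msu_base = {product: sum(peaks) / len(peaks)
            for product, peaks in scrt_monthly_peaks_msu.items()}
print(msu_base)  # the lower the optimized peaks, the lower the CMP baseline
```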

From a very high-level management viewpoint, we must observe that IBM are a commercial organization, and although IBM provide mechanisms for controlling cost going forward, only the System z user can optimize System z MLC cost for their organization. Arguably with CMP, Soft-Capping isn’t a consideration, it’s mandatory.

Put very simply, each and every System z user can safeguard that they look after the Pennies (Cents) and the Pounds (Euros, Dollars) will look after themselves by paying careful attention to System z MLC software costs. Setting a baseline of System z MLC costs is mandatory, whether for the first time, or to set a new baseline for CMP deployment. Maintaining or lowering this System z MLC cost baseline should or arguably must be the objective going forward, even when considering 10% organic CPU growth, each and every year. System z decision-makers and managers must commit to such an objective and safeguard the provision of adequately skilled personnel to optimize such a considerable TCO cost line item (I.E. MLC @ ~20-35% of System z TCO). In an ecosystem with technical resources including DBA, Systems Programmer, Capacity Planner, Application Personnel, Performance Tuning, et al, why wouldn’t there be a specialist Software Cost Manager?

Let’s consider how even an inexperienced System z user can maintain a baseline of System z MLC costs, even with organic CPU capacity growth of 10% per annum:

  • System z Server Upgrade: Higher specification CPU chips or Technology Transition Offering (TTO) pricing metrics deliver 10%+ cost per MSU benefits.
  • System z Specialty Engines: Over time, more and more application workload can be offloaded to zIIP processors, with no sub-capacity MLC software charges.
  • System z Software Version Upgrades: Major subsystems such as CICS, DB2, IMS, MQSeries and WebSphere deliver opportunity to lower cost per MSU; safeguard such function exploitation.
  • Application Tuning: Whether SQL, COBOL, Java, et al, or the overall I/O subsystem, safeguard that latest programming techniques and I/O subsystem functions are exploited.
  • New Application Deployment: As and when possible, deploy new or convert existing workloads to benefit from the optimal MLC pricing metric; previously zNALC, nowadays zCAP.
  • Technical & Commercial Skills Currency: Safeguard personnel have the latest System z software pricing knowledge, ideally from an independent 3rd party such as Watson & Walker.

In conclusion, as householders we have the opportunity to optimize our cost expenditure, choosing and switching between various major cost items such as financial, utility and vehicle products. As System z users, we don’t have that option, only IBM provide System z servers and associated base architecture, namely the most expensive MLC software products, z/OS, CICS, DB2, IMS and WebSphere/MQ. However, just as we manage our domestic budgets, reducing power usage, optimizing vehicle TCO and getting more bang from our buck for financial products various, we can and must deliver this same due diligence for our System z MLC TCO. With industry averages of ~$500-$1000 per MSU for z/OS MLC software and associated annual expenditure measured in many millions, why wouldn’t any System z user look to deliver 10%+ cost per MSU optimization, year-on-year for their organization?

Clearly the cost of doing nothing in this instance, is significant, measured in magnitudes of millions, each and every year. Hence for System z MLC TCO optimization, looking after the Pennies is more than worthwhile, while the associated benefit of the Pounds, Euros or Dollars looking after themselves is arguably priceless.

z/OS Soft Capping: Balancing Cost & Performance

Historically each and every LPAR was assigned a Relative Weight value; where a more meaningful description would be the initial processing weight. This relative weight value is used to determine which LPAR gains access to resources, where multiple LPARs are competing for the same resource. Being unit-less is one minor challenge of the relative weight value, meaning that it has no explicit CPU capacity or resource value. Typically installations would use a simple multiple of ten metric, most likely totalling 1000, and allocate weights accordingly (E.g. 600=60%, 300=30%, 100=10%, et al). Therefore during periods of resource contention, PR/SM would allocate resources to the requisite LPAR, based upon its relative weight.
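
As a trivial illustration of the proportional nature of relative weight, the sketch below converts hypothetical LPAR weights into their share of CPU resource during contention.

```python
# Simple illustration of how relative weight translates into a share of CPU
# resource during contention; weights are unit-less, so only their proportion
# of the total matters.  Values are hypothetical.

lpar_weights = {"PROD": 600, "TEST": 300, "DEV": 100}   # total = 1000

total_weight = sum(lpar_weights.values())
lpar_share = {lpar: weight / total_weight for lpar, weight in lpar_weights.items()}
print(lpar_share)  # {'PROD': 0.6, 'TEST': 0.3, 'DEV': 0.1}
```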

Using relative weight to classify all LPARs as equal, at least from a generic class viewpoint, does have some considerations; primarily differentiating between Production and Non-Production workloads. Restricting a workload to its relative weight share of resources is known as Hard Capping. This setting is typically used to restrict Non-Production (E.g. Test) environments to their allocated resource and is also useful for cost control (E.g. Outsourcers), knowing that the LPAR will never consume more than its allocated relative weight allowance.

Hard Capping behaviour changes dependent on the use of the HiperDispatch setting. When HiperDispatch is not chosen, capping is performed at the Logical CP level, where the goal is for each logical CP to receive its relative CP share, based on the relative weight setting. When HiperDispatch is active, vertical as opposed to horizontal CPU management applies. So, a High categorization dictates capping at 100% of the logical CP, whereas a Medium or Low setting allows for resource sharing based on a relative weight per CP basis.

The Intelligent Resource Director (IRD) function provides more advanced relative weight management, automating management of CPU resources and a subset of I/O resources. Workload Manager (WLM) manages physical CPU resource across z/OS images within an LPAR cluster based on service class goals. IRD is implemented as a collaboration between the WLM function and the PR/SM Logical Partitioning (LPAR) hypervisor:

  • Logical CP Management: dynamically allocating logical processors (E.g. Vary On-Line/Off-Line)
  • Relative Weight Management: dynamically redistributing CPU resource as per LPAR weights
  • CHPID Management: dynamically assigning logical channel paths between eligible LPARs

IRD optimizes resource usage, enabling WLM to deliver workload goals.

The use of relative weight in association with Hard Capping and/or IRD/WLM granularity has become somewhat limited for most Mainframe installations with the advent of Sub-Capacity pricing (I.E. MLC via SCRT/R4HA), primarily because relative weight has no direct correlation with the meaningful CPU resource metric, namely the MSU (vis-à-vis CPU MIPS).

Defined Capacity (DC) provides Sub-Capacity CEC pricing by allowing definition of LPAR capacity with a granularity of 1 MSU. In conjunction with the WLM function, the Defined Capacity of an LPAR dictates whether Soft Capping is invoked or not. At this juncture, we should consider how and when WLM measures CPU resource usage and if and when Soft Capping is activated and deactivated:

WLM is responsible for taking MSU utilization samples for each LPAR in 10-second intervals. Every 5 minutes, WLM documents the highest observed MSU sample value from the 10-second interval samples. This process always keeps track of the past 48 updates taken for each LPAR. When the 49th reading is taken, the 1st reading is deleted, and so on. These 48 values continually represent a total of 5 minutes * 48 readings = 240 minutes or the past 4 hours (I.E. R4HA). WLM stores the average of these 48 values in the WLM control block RCT.RCTLACS. Each time RMF (or BMC CMF equivalent) creates a Type 70 record, the SMF70LAC field represents the average of all 48 MSU values for the respective LPAR a particular Type 70 record represents. Hence, we have the “Rolling 4 Hour Average”. RMF gets the value populated in SMF70LAC from RCT.RCTLACS at the time the record is created.

SCRT also uses the Type 70 field SMF70WLA to ensure that the values recorded in SMF70LAC do not exceed the maximum available MSU capacity assigned to an LPAR. If this ever happens (due to Soft Capping or otherwise) SCRT uses the value in SMF70WLA instead of SMF70LAC. Values in SMF70WLA represent the total capacity available to the LPAR.

We should also consider the two possibilities for MLC software payment (I.E. SCRT) based upon MSU resource usage. Quite simply, the MSU value passed for SCRT invoice consideration is the R4HA or the Defined Capacity, whichever is the lowest. Put another way; if the R4HA exceeds Defined Capacity, Soft Capping applies to the LPAR.
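
Bringing the above together, here is a minimal sketch of the R4HA bookkeeping and the resulting billable value; the MSU figures and Defined Capacity are hypothetical, and the real mechanism lives within WLM, RMF and SCRT rather than user code.

```python
# Sketch of the R4HA bookkeeping described above and the resulting billable
# value: WLM keeps the 48 most recent 5-minute MSU values (the highest
# 10-second sample observed in each interval) and averages them; SCRT then
# considers the R4HA or the Defined Capacity, whichever is lower.
# All figures are hypothetical.

from collections import deque

class RollingFourHourAverage:
    def __init__(self):
        self.samples = deque(maxlen=48)   # 48 x 5 minutes = 4 hours

    def record_five_minute_peak(self, msu):
        """Store the highest MSU value observed in the last 5-minute interval."""
        self.samples.append(msu)

    @property
    def r4ha(self):
        # Average of the retained samples (a full window holds 48 of them)
        return sum(self.samples) / len(self.samples)

rolling = RollingFourHourAverage()
for msu in [400, 420, 450, 500]:          # a few 5-minute peak values
    rolling.record_five_minute_peak(msu)

defined_capacity = 430
billable_msu = min(rolling.r4ha, defined_capacity)  # soft capping applies if R4HA > DC
print(round(rolling.r4ha, 1), billable_msu)
```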

The primary disadvantage of Soft Capping is that the Defined Capacity setting is somewhat static; it is manually defined once, maybe several times a day for workloads with distinct characteristics (E.g. On-Line, Batch, et al), but dynamic DC management based upon inter-related LPAR behaviour is, at best, evolving. The primary considerations for Soft Capping are:

  • An LPAR can only be managed via Soft Capping or Hard Capping; not both
  • DC rules only apply to General Purpose CP’s (Hard Capping for Specialty Engines is allowed)
  • An LPAR must be defined with shared CP’s (dedicated CP’s not allowed)
  • All LPAR Sub-Capacity eligible products have the same MSU capacity (I.E. DC)

Soft Capping is relatively simple to implement and typically generates MLC software costs savings, with minimal impact.

Group Capacity Limit (GCL) provides an extension to the Defined Capacity (DC) Soft Capping function. GCL allows an MSU limit for total usage of all group LPARs, with a granularity of 1 MSU. The primary considerations for GCL are:

  • Works with DC LPAR capacity settings
  • Target share does not exceed DC
  • Works with IRD
  • Multiple CEC groups allowed; but an LPAR may only be defined to one group
  • An LPAR must be defined with shared CP’s, with the WAIT COMPLETION = NO specification

It is possible to combine IRD weight management with the GCL function. Based on installation policy, IRD can modify the relative weight setting to redistribute capacity resource within an LPAR cluster.

However, IRD weight management is suspended when GCL is in effect, because LPAR resource entitlement within a capacity group can be (I.E. Pre zxC12) derived from the current weight. Hence the LPAR might get allocated an unacceptably low weight setting, generating a low GCL entitlement.

GCL also allows for MSU to be shared between LPARs in a group, where one LPAR would be a donator and another would be a receiver. Therefore the customer classifies their LPARs accordingly and when a high-priority LPAR requires additional MSU resource, it will be allocated from a lower priority LPAR, if available. This provides a modicum of flexibility, but by definition, peak workloads are not predictable and typically require a significantly higher amount of MSU for a short time period. Typically this requirement will not be satisfied with the GCL function.
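
The donor/receiver behaviour can be pictured with a simplified, hypothetical sketch: a receiver LPAR that exceeds its entitlement borrows unused headroom from a donor LPAR, provided the group stays within its limit. This is an illustrative model only, not the PR/SM/WLM algorithm.

```python
# Simplified, hypothetical sketch of donor/receiver MSU sharing within a
# Group Capacity Limit: a receiver LPAR that needs more MSU takes unused
# headroom from a donor LPAR, provided the group total stays within the limit.

group_limit_msu = 1000
lpars = {
    "PRODHIGH": {"role": "receiver", "entitlement": 600, "demand": 700},
    "TESTLOW":  {"role": "donor",    "entitlement": 400, "demand": 250},
}

# Headroom donors are not using and could give up
donor_headroom = sum(l["entitlement"] - l["demand"]
                     for l in lpars.values() if l["role"] == "donor")

for name, lpar in lpars.items():
    if lpar["role"] == "receiver" and lpar["demand"] > lpar["entitlement"]:
        shortfall = lpar["demand"] - lpar["entitlement"]
        granted = min(shortfall, donor_headroom)
        lpar["allocation"] = lpar["entitlement"] + granted
        donor_headroom -= granted
    else:
        lpar["allocation"] = min(lpar["demand"], lpar["entitlement"])

assert sum(l["allocation"] for l in lpars.values()) <= group_limit_msu
print({name: l["allocation"] for name, l in lpars.items()})
```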

Soft Capping techniques, either at the individual (DC) or group (GCL) level deliver cost saving benefit, but a fine granularity of management is required to balance cost saving versus associated performance considerations. The primary challenges associated with Soft Capping are its interactions with workload characteristics and an inability to dynamically manage MSU allocation, in-line with the R4HA. Put another way, the R4HA is derived from 48*5 Minute samples, whereas DC and GCL settings are typically defined on an infrequent (E.g. Monthly or longer) basis.

As z/OS evolves, further in-built function is available to manage MSU capacity. zSeries Capacity Provisioning Manager (CPM) is designed to simplify the management of temporary capacity, defined capacity and group capacity. The scope of z/OS Capacity Provisioning is to address capacity requirements for relatively short term workload fluctuations for which On/Off Capacity on Demand or Soft Capping changes are applicable. CPM is not a replacement for the customer derived Capacity Management process. Capacity Provisioning should not be used for providing additional capacity to systems that have Hard Capping (initial capping or absolute capping) defined.

With the introduction of z/OS 2.1, CPM functionality incorporates Soft Capping support via the DC and GCL functions. CPM functions from a set of installation defined policies and parameters, where the CPM server receives three types of input:

  • Domain Configuration: defines the CPCs and z/OS systems to be managed
  • Policy: contains the information as to which work is eligible, under which conditions and during which timeframes, and what capacity increases apply for constrained workloads
  • Parameter: contains environment descriptors (E.g. UNIX Environment, Installation Options, et al)

From a customer viewpoint, policy definition allows them to define the provision of CPU resource:

  • Date & Time: When capacity provisioning is allowed
  • Workload: Which service class qualifies for provisioning?
  • CPU Resource: How much additional MSU capacity can be allocated?

CPM provides more function when compared with the Defined Capacity and Group Capacity Limit Soft Capping techniques, allowing for time schedules to be defined, workloads to be categorized and MSU resource to be allocated in a dynamic and granular manner.

A modicum of complexity exists when considering the arguably most important factor for CPM policy definition, namely the Performance Index (PI):

  • Activation: PI of service class periods must exceed the activation threshold for a specified duration, before the work is considered as eligible.
  • Deactivation: PI of service class periods must fall below the deactivation threshold for a specified duration, before the work is considered as ineligible.
  • Null: If no workload condition is specified, a scheduled activation/deactivation is performed with full capacity as specified in the rule scope, unconditionally at the start and end times of the time condition.

For workload based provisioning it is a necessary condition that the current system Performance Index exceeds the specified customer policy PI metric. One must draw one’s own conclusions regarding PI criteria settings, but to date, they’re largely based on arguably complex mathematical formulae, which perhaps is not practicable, especially from a simple management viewpoint.
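
For illustration only, the sketch below models the activation/deactivation behaviour described above with invented thresholds and durations; real CPM policies are defined to the CPM server and the actual decision logic is more involved.

```python
# Hypothetical sketch of the PI-based activation/deactivation logic described
# above: capacity is only requested once the service class Performance Index
# has exceeded the activation threshold for a sustained duration, and released
# once it has stayed below the deactivation threshold long enough.
# Thresholds, durations and PI values are invented for illustration.

ACTIVATION_PI, DEACTIVATION_PI = 1.8, 1.2
REQUIRED_DURATION = 3            # consecutive observations (e.g. minutes)

def provisioning_decisions(pi_history, active=False):
    above = below = 0
    for pi in pi_history:
        above = above + 1 if pi > ACTIVATION_PI else 0
        below = below + 1 if pi < DEACTIVATION_PI else 0
        if not active and above >= REQUIRED_DURATION:
            active = True
            yield "activate additional capacity"
        elif active and below >= REQUIRED_DURATION:
            active = False
            yield "deactivate additional capacity"

pi_samples = [1.0, 1.9, 2.1, 2.4, 2.0, 1.1, 1.0, 0.9]
print(list(provisioning_decisions(pi_samples)))
```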

With the requisite hardware (I.E. zxC12+) and Operating System levels (I.E. z/OS 1.13+), CPM provides extra functionality for the customer to implement granular Soft Capping techniques to balance cost and performance. When compared with Defined Capacity and Group Capacity Limit techniques, CPM delivers increased granularity for managing capacity dynamically, based on customer derived policies, recognizing time slots, workloads and MSU resource increases accordingly.

From a big picture viewpoint, without doubt, we must recognize the fundamental role that WLM plays in Soft Capping. Quite simply, the 48*5 Minute MSU resource samples dictate whether a workload will be eligible for Soft Capping or not and from a cumulative viewpoint, these MSU samples dictate the R4HA metric. Based on this observation, efficient and functional Soft Capping must be workload based (I.E. WLM Service Class), be dynamic and operational on a 24*7 basis, because workload peaks are never predictable, while balancing MSU resource accordingly. Of course, simplicity of implementation and management, supplemented by meaningful reporting is mandatory.

Once again, observing the 48*5 Minute MSU resource samples from an R4HA viewpoint, if a workload were to increase MSU usage by an average of 50% for 1 Hour (I.E. 12 Samples) and decrease MSU usage by an average of 20% for 2.5 Hours (I.E. 30 Samples), then from an average viewpoint, the R4HA has remained static. Therefore an optimum Soft Capping technique needs to recognize WLM service class requirements, reacting in a timely manner, increasing and decreasing MSU usage, to safeguard workload performance for Time Critical workloads, while optimizing SCRT MLC cost.
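
A quick worked check of that example, assuming a hypothetical steady-state usage of 100 MSU per 5 minute sample:

```python
# Worked check of the example above, using a hypothetical steady-state usage
# of 100 MSU per 5-minute sample: 12 samples at +50% and 30 samples at -20%
# cancel out across the 48-sample window, leaving the R4HA unchanged.

baseline_msu = 100
samples = [baseline_msu * 1.5] * 12 + [baseline_msu * 0.8] * 30 + [baseline_msu] * 6

r4ha = sum(samples) / len(samples)
print(r4ha)  # 100.0 -- the same as the steady-state baseline
```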

zDynaCap delivers automated capacity balancing within CPCs, Capacity Groups or Groups of LPARs. Central to zDynaCap are the predefined balancing policies. Within these balancing policies, users define their MSU ranges of Groups and LPARs and also the priorities of the associated LPAR Workload. zDynaCap continually monitors overall usage and compares this to the available capacity and the user defined MSU balancing policies. For example, should a high priority workload on one LPAR not get enough capacity, while a low priority workload on another within the group gets too much capacity, available MSU capacity is distributed according to customer derived balancing policies. Only if there is no leftover capacity to be rescheduled within the defined Group, and if the high or medium priority workload will be slowed down, will zDynaCap add MSU.

With zDynaCap Capacity Balancing, available MSU capacity is balanced within LPAR groups, safeguarding that during peak time the mission critical workload is processed as per business expectations (E.g. SLA/KPI) for the lowest possible MLC cost.

In conclusion, given the significance of IBM MLC software (E.g. z/OS, CICS, DB2, IMS, WebSphere MQ, et al) costs, arguably every Mainframe environment should deploy a capping technique for cost optimization. Hard Capping might work for some, but in all likelihood, Soft Capping is the primary choice for most Mainframe environments. For sure, IBM have delivered several Soft Capping techniques, with varying levels of function and granularity, namely Defined Capacity, Group Capacity Limit (GCL) and the zSeries Capacity Provisioning Manager (CPM). It was forever thus that the ISV community exists because they specialize, architecting and delivering targeted solutions; zDynaCap is one such solution, recognizing the fundamental rules of IBM Mainframe Soft Capping, namely the underlying WLM and R4HA foundation.

IBM Mainframe: Workload License Charges (WLC) Pros & Cons

It is estimated that less than half of eligible IBM Mainframe customers deploy the VWLC pricing mechanism, which in theory, is the lowest cost IBM software pricing metric.  Why?  In the first instance, let’s review the terminology…

Workload License Charges (WLC) is a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode.  The fundamental ethos of WLC is a “pay for what you use” mechanism, allowing a lower cost of incremental growth and the potential to manage software cost by managing associated workload utilization.

WLC charges are either VWLC (Variable) or FWLC (Flat).  Not all IBM Mainframe software products are classified as VWLC eligible, but the major software is, including z/OS, CICS, DB2, IMS and WebSphere MQ, where these products are the most expensive, per MSU.  What IBM consider to be legacy products are classified as FWLC.  More recently a modification to the VWLC mechanism was announced, namely AWLC (Advanced), strictly aligned with the latest generation of zSeries servers, namely the zEC12 and z196.  For the smaller user, the EWLC (Entry) mechanism applies, where AEWLC (Advanced Entry) applies for the z114 server.  There is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and associated pricing mechanisms:

Band MSU Range
Base 0-3 MSU
Level 0 4-45 MSU
Level 1 46-175 MSU
Level 2 176-315 MSU
Level 3 316-575 MSU
Level 4 576-875 MSU
Level 5 876-1315 MSU
Level 6 1316-1975 MSU
Level 7 1976+ MSU

Put simply, as the MSU band increases, the related cost per MSU decreases.
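
To illustrate the principle, the sketch below applies entirely hypothetical prices per MSU to the bands above and shows the blended cost per MSU falling as usage grows; the actual AWLC/VWLC price points are set by IBM and differ by metric.

```python
# Illustrative only: the MSU bands above with entirely hypothetical prices per
# MSU, showing how the blended cost per MSU falls as installed/used MSU grows.
# Real AWLC/VWLC price points differ and are set by IBM.

bands = [  # (upper MSU bound of band, hypothetical monthly price per MSU)
    (3, 1000), (45, 900), (175, 800), (315, 700),
    (575, 600), (875, 500), (1315, 450), (1975, 400), (float("inf"), 350),
]

def monthly_charge(msu):
    """Sum the charge band by band for the given MSU value."""
    charge, previous_bound = 0.0, 0
    for upper_bound, price in bands:
        in_band = min(msu, upper_bound) - previous_bound
        if in_band <= 0:
            break
        charge += in_band * price
        previous_bound = upper_bound
    return charge

for msu in (100, 500, 1500):
    print(msu, monthly_charge(msu) / msu)  # blended cost per MSU declines
```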

IBM Mainframe users can further implement cost control by specifying how much MSU resource they use by deploying Sub-Capacity and Soft Capping techniques.  Defined Capacity (DC) allows the sizing of an LPAR in MSU, so that said LPAR will not exceed this MSU amount.  Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resource to be shared accordingly.  A potential downside of GCL is that one LPAR of the group can consume all available MSU due to a rogue transaction (E.g. loop).

Sub-Capacity software charges are based upon LPAR hardware utilization, where the product runs, measured in hourly intervals.  To smooth out isolated usage peaks, a Rolling 4-Hour Average (R4HA) is calculated for each LPAR combination, and so software charges are based on the Monthly R4HA peak of appropriate LPAR combinations (I.E. where the software product runs) and not based on individual product measurement.

Once a Defined Capacity LPAR is deployed, this informs WLM (Workload Manager) to monitor the R4HA utilization of that LPAR.  If the LPAR R4HA utilization is less than the Defined Capacity, nothing happens.  If the LPAR R4HA utilization exceeds the Defined Capacity, then WLM signals to PR/SM and requests that Soft Capping be initiated, constraining the LPAR workload to the Defined Capacity level.

If a user chooses a Sub-Capacity WLC pricing mechanism, they will be required by IBM to submit a monthly Sub-Capacity Reporting Tool (SCRT) report.  Monthly WLC invoices are based upon hourly utilization metrics of LPAR hardware utilization, where the software product executes.  The cumulative R4HA and bottom line WLC billing metric is calculated for each product and associated LPAR group and not based on individual product measurement.

Bottom Line: From a Soft Capping viewpoint, the customer only pays for WLC software based upon the Defined Capacity (DC) or Rolling 4-Hour Average (R4HA), whichever is the lowest.  So whether a customer uses Soft Capping or not, in all likelihood, there will be occasions when their workload R4HA is lower than their zSeries server MSU capacity.

So, at first glance, VWLC seems to provide a compelling pricing metric, based upon Sub-Capacity and a pay for what you use ethos, and so why wouldn’t an IBM Mainframe user deploy this pricing metric?

The IBM Planning for Sub-Capacity Pricing (SA22-7999-0n) manual states “For IBM System z10 BC and System z9 BC environments, and z890 servers, EWLC pricing is the default for z/OS systems, and Sub-Capacity pricing is always the best option.  For IBM zEnterprise 114, environments, AEWLC pricing is the default for z/OS systems, and Sub-Capacity pricing is always the best option.  For IBM zEnterprise 196, System z10 EC and System z9 EC environments, and other zSeries servers, Sub-Capacity pricing is cost-effective for many, but not all, customers.  You might even find that Sub-Capacity pricing is cost effective for some of your CPCs, but not others (although if you want pricing aggregation, you must always use the same pricing for all the CPCs in the same sysplex)”.

Conclusion: For all small Mainframe users qualifying for the EWLC (AEWLC) pricing metric, arguably this pricing mechanism is mandatory.  For the majority of larger Mainframe users, the same applies, although a granularity of adoption might be required.  IBM also have a disclaimer “Once you decide to use Sub-Capacity pricing for a specific operating system family, you cannot return to the alternative pricing methods for that operating system family on that CPC.  For example, once you select WLC you may not switch back to PSLC without prior IBM approval”.  However, the requisite contractual exit clause option does exist; the customer can switch back to the PSLC pricing metric.

Some IBM Mainframe users might object to a notion of Soft Capping, relying upon their tried and tested methodology of LPAR management via the number of CPs allocated and associated PR/SM Weight.  This is seemingly a valid notion and requirement, prioritizing performance ahead of cost optimization.

Conclusion: As previously indicated, with VWLC, SCRT invoices are generated upon a premise of the customer only pays for WLC software based upon the Defined Capacity (DC) or Rolling 4-Hour Average (R4HA), whichever is the lowest.  So the VWLC pricing mechanism should deliver a granularity of cost savings, typically higher for a Soft Capping environment.

Some IBM Mainframe users might just believe that nothing can match their Parallel Sysplex License Charges (PSLC) mechanism, first available in the mid-1990’s, which might be attributable to other 3rd party ISV’s who cannot and will not allow for their software to be priced on a Sub-Capacity basis.  In reality, adopting the VWLC pricing mechanism delivers ~5% cost savings when compared with PSLC, as indicated by the IBM Planning for Sub-Capacity Pricing manual (SA22-7999-0n) and related Sub-Capacity Planning Tool (SCPT).

Conclusion: Adopting Sub-Capacity based pricing metrics can only be a good thing.  If your 3rd party ISV supplier doesn’t recognise Sub-Capacity pricing, whether MIPS or MSU based, perhaps you should consider your relationship with them.  Regardless, the z10 server was the last IBM Mainframe to incorporate the “Technology Dividend” solely based on faster CPU chips.  The lower cost WLC pricing metric is now only available with the AWLC and related (E.g. AEWLC) pricing metrics, as per the z196, z114 and zEC12 servers.

Some customers might state that there is a lack of function or granularity of policy definition for IBM supplied Soft Capping (E.g. DC, GCL) or Workload Management (WLM) techniques.  To some extent this is a valid argument, but wasn’t it forever thus with IBM function?  Sub-Capacity implementation is possible via IBM, as is Workload Management (WLM), Soft Capping or not, but should the customer require extra functionality, 3rd party software solutions are available.

The zDynaCap software solution from zIT Consulting delivers a “Capacity Balancing” mechanism, integrating with R4HA and WLM methodologies, constantly monitoring MSU usage to determine whether CPU resource can be reallocated to Mission & Time Critical workloads, based upon granular customer policies.  In a multiple LPAR environment, the only way to guarantee that a Mission & Time Critical LPAR receives all available MSU resource, Soft Capping or not, is to deactivate all other LPARs!  Clearly this is not an acceptable policy for any installation, and so a best endeavours policy applies for PR/SM DC, GCL and Weight settings.

Conclusion: z/OS workloads change constantly, whether the time of day (E.g. On-Line, Batch) or period of the year (E.g. Weekly, Monthly, Quarterly, Yearly) or just by customer demand (E.g. 24 Hour Transaction Application).  Therefore a dynamic MSU management solution such as zDynaCap is arguably mandatory, implementing the optimum MSU management policy, whether for purely performance reasons, safeguarding the Mission & Time Critical workload isn’t impacted by lower priority workloads, or for cost reasons, optimizing MSU usage for the best possible monthly WLC cost.

In conclusion, not considering and arguably not implementing z/OS VWLC related pricing mechanisms is impractical, because:

  • The VWLC and AWLC related pricing metrics deliver the lowest cost per MSU for eligible z/OS software
  • When compared with PSLC, VWLC related pricing mechanisms deliver conservative ~5% cost savings
  • Charges are based on a pay for what you use, Sub-Capacity basis, not on the installed MSU capacity
  • If extra MSU policy management granularity is required, consider 3rd party software such as zDynaCap

Software cost savings are not just for the privileged; they’re for everyone!

IBM Mainframe Capacity Planning & Software Cost Control Interaction?

The cost of IBM Mainframe software is an extensive subject matter that is multi-faceted and can generate much discussion. The importance of optimizing Mainframe software costs is without doubt, as it is the most significant Mainframe TCO component, having increased from ~25% to 50%+ of overall expenditure in the last decade or so. Conversely Mainframe server hardware costs have largely stabilized at ~15-25% of TCO in the same time period. However, Mainframe Capacity Planning activities have evolved over the last several decades, where hardware costs were the primary concern and the number of IBM Mainframe software pricing mechanisms was limited. Of course, in the last decade or so, IBM Mainframe software pricing mechanisms have evolved, with a plethora of acronyms, ESSO, ELA, IPLA, OIO, PSLC, WLC, VWLC, AWLC, IWP, naming but a few!

Can each and every IBM Mainframe user clearly articulate their Mainframe Capacity Planning and Software Cost Control policies, and which person in their organization performs these very important roles? Put another way, not forgetting Software Asset Management (SAM), should there be a Software Cost Control specialist for IBM Mainframe Data Centres…

If we consider the traditional Mainframe Capacity Planning role, put very simply, this process typically produces a 3-5 year rolling plan, based upon historical data and future capacity requirements. These requirements can then be modelled with the underlying hardware (E.g. z10, z114/z196, zEC12) server, identifying resource requirements accordingly, namely number of General Processors (GPs), Specialty Engines (E.g. zIIP, zAAP, IFL), Memory, Channels, et al. Previously, up until ~2005, customer requirements would be articulated to IBM, cross-referenced with LSPR (Large System Performance Reference) and an optimum hardware configuration derived. Since ~2005, IBM made their zPCR (Processor Capacity Reference) tool Generally Available, allowing the Mainframe customer to “more accurately” capacity plan for IBM zSeries servers.

Other enhancements to more accurately determine the ideal zSeries server include sizing based on actual customer usage data generated by the CPU MF facility introduced with the z10 server. CPU MF delivers a refinement when compared with LSPR, refining the zPCR process with real life customer usage data, compared to the standard simulated LSPR workloads.

In summary, the Mainframe Capacity Planning process has evolved to include new tools and data to refine the process, but primarily, the process remains the same, size the hardware based upon historical data and future business requirements. However, what about Mainframe software usage and therefore cost interaction?

Each and every IBM Mainframe user relies heavily on the IBM Operating System (I.E. z/OS, z/VM, z/VSE, zLinux, et al) and primary subsystems (I.E. CICS, DB2, MQ, IMS, et al). Some Mainframe users might deploy alternative database and transaction processing (TP) solutions, but a significant amount of Mainframe software cost is for IBM software products. In the late-1990’s, IBM introduced their PSLC (Parallel Sysplex License Charges), which offered lower aggregate (MSU) pricing for major IBM software products, based upon an eligible configuration (E.g. Resource Sharing). This pricing mechanism had no impact on software cost control, in fact quite the opposite; it was a significant cost benefit to implement PSLC!

In 2000 IBM announced Workload License Charges (WLC), which allowed users to pay for software based upon the workload size, as opposed to the capacity of the machine; the first signs of sub-capacity pricing. In 2001, the ability to deploy eligible IBM software on a “pay for what you use” basis arrived, via the Variable Workload License Charge (VWLC) mechanism. Put very simply, a Rolling 4 Hour Average (R4HA) MSU metric applies for eligible IBM software products, where software is charged based upon the peak MSU usage during a calendar month. The Mainframe user pays for VWLC software based upon the R4HA or the Defined Capacity (Sub-Capacity vis-à-vis Soft Capping), whichever is lower.
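To illustrate the arithmetic, a minimal sketch in Python follows, assuming hypothetical hourly MSU figures; in practice SCRT derives these values from SMF data for each Sub-Capacity eligible LPAR, so this is purely indicative of how the peak R4HA and the billable MSU figure relate:

# Illustrative only: hypothetical hourly MSU values; SCRT derives the real
# figures from SMF data for each Sub-Capacity eligible LPAR.
hourly_msu = [310, 295, 420, 510, 480, 390, 350, 300]
defined_capacity = 450  # optional soft-cap (MSU), if a Defined Capacity is set

# Rolling 4 Hour Average (R4HA): average of the current and previous three hours
r4ha = [sum(hourly_msu[max(0, i - 3):i + 1]) / len(hourly_msu[max(0, i - 3):i + 1])
        for i in range(len(hourly_msu))]

peak_r4ha = max(r4ha)  # the monthly peak drives the charge

# Billable MSU: the peak R4HA or the Defined Capacity, whichever is lower
billable_msu = min(peak_r4ha, defined_capacity)
print(f"Peak R4HA: {peak_r4ha:.0f} MSU, billable: {billable_msu:.0f} MSU")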

From this point forward, and for the avoidance of doubt, for the last 10 years or so, there has been a mandatory requirement to consider the impact of IBM WLC software costs, when performing the Mainframe Capacity Planning activity. One must draw one’s own conclusions as to whether each and every Mainframe user has the skills to know the intricacies of the various software (E.g. IPLA, OIO, PSLC, WLC, et al) pricing models, when upgrading their zSeries server.

With the IBM Mainframe Charter in 2003, IBM stated that they would deliver a ~10% technology dividend benefit, loosely meaning that each new Mainframe technology model (I.E. z9, z10) received an MSU rating ~10% lower for the same system capacity level, when compared with the previous technology. Put another way, a potential ~10% software cost reduction for executing the same workload on a newer technology IBM Mainframe; thus encouraging users to upgrade. However, the ~10% software cost reduction is subjective, because a higher installed MSU capacity dictates lower per-MSU software costs…

With the introduction of the z196 and z114 Mainframe servers, the technology dividend was delivered in the form of a new software license charge, AWLC (Advanced Workload License Charges), where the lower software costs only applied if this new pricing model was deployed. It is a similar story for the zEC12 server: the AWLC pricing model is required to benefit from the lower software costs! If these software pricing evolutions were not enough, in 2011 IBM introduced the Integrated Workload Pricing (IWP) mechanism, offering the potential for lower software pricing based upon workload type, namely WebSphere eligible workloads. Finally, and as previously alluded to, as MSU capacity increases, the related cost per MSU for software decreases, so there are many IBM software pricing mechanisms to consider when adding Mainframe CPU capacity. So once again, who is the IBM Mainframe Software Cost Control specialist in your organization?

For sure, each and every IBM Mainframe user will engage their IBM account team as and when they plan a Mainframe upgrade, but how much “customer thinking is outsourced to IBM” during this process? Wouldn’t it be good if there were an internal “checks & balances” or due diligence activity that could verify and refine the Mainframe Capacity Plan with IBM software cost control intelligence?

Having travelled and worked in Europe for 20+ years, I know that the peers, colleagues and friends I have encountered will concur with my next observation. The English and Americans might come up with a good idea and perhaps a product, the French are most likely to test that product to destruction and identify numerous new features, while the Germans will write the ultimate technical manual…

zCost Management is a French company specializing in cost optimization services and solutions for the IBM Mainframe. From an IBM Mainframe Capacity Planning & Software Cost Control Interaction viewpoint, it has developed the CCP-Tool (Capacity and Cost Planning) software solution. This software product bridges the gap between Mainframe Capacity Planning for hardware and the impact on associated IBM software (E.g. WLC, IPLA, et al) costs.

CCP-Tool facilitates medium-term (E.g. 3-5 year) Mainframe Capacity Planning by controlling Monthly License Charges (MLC) evolution, generating cost control policies, optimizing zSeries (E.g. PR/SM) resource sharing and delivering financial management via IBM Mainframe software cost control activity. CCP-Tool integrates with existing data and activities, using SMF Type 70 & 89 records, defining events (I.E. Capacity Requirements, Workload Moves) in the plan, simulating many options, delivering your final capacity plan and periodically (I.E. Quarterly) reviewing and revising that plan. Most importantly, CCP-Tool deploys many algorithms and techniques aligned to IBM software pricing mechanisms, especially WLC and R4HA related.
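For illustration only, the Python sketch below shows the kind of event-driven MSU projection that such a medium-term plan encodes; the baseline, growth rate and events are entirely hypothetical, and this is not a representation of CCP-Tool’s internal algorithms:

# Hypothetical event-driven capacity projection; all figures are invented
# and this is not CCP-Tool logic, merely the shape of the planning exercise.
baseline_msu = 600         # assumed current peak R4HA across the CPC
annual_growth = 0.05       # assumed organic workload growth per year
events = {2: +80, 4: -50}  # hypothetical planned events: workload onboarded in
                           # year 2, workload moved off in year 4 (MSU deltas)

msu = baseline_msu
for year in range(1, 6):              # a 5 year rolling plan
    msu = msu * (1 + annual_growth)   # apply organic growth
    msu += events.get(year, 0)        # apply planned events for this year
    print(f"Year {year}: projected peak ~{round(msu)} MSU")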

Therefore CCP-Tool delivers a financial management framework via a medium-term Capacity Plan with associated software cost control and zSeries (E.g. PR/SM) resource policies. This enables a balanced view of future Data Centre cost configurations, from both a hardware and a related IBM Mainframe software viewpoint. Moreover, for those IBM Mainframe users that don’t necessarily have the skills to perform this level of Mainframe cost control, CCP-Tool delivers a low-cost solution to empower the Mainframe customer to engage IBM on an equal footing, at least from a reporting viewpoint. Similarly, for those Mainframe users with good IBM Mainframe software cost control skills, CCP-Tool offers a “checks & balances” option, delivering that all-important due diligence sanity check! Quite simply, CCP-Tool simplifies the process of reconciling the optimal configuration from both an IBM Mainframe hardware and a related software viewpoint.

Without doubt, if a Mainframe user still deploys a hardware-centric viewpoint of the capacity planning activity, without considering the numerous intricacies of IBM Mainframe software pricing, in most cases this could be a significant cost oversight. Put very simply, a low-end IBM Mainframe user of ~150 MSU (1,000 MIPS) might spend ~£1,000,000 per annum, just for a minimal configuration of z/OS, CICS, COBOL and DB2 software, so one must draw one’s own conclusions regarding the potential cost savings when deploying the optimal zSeries hardware and associated IBM software configuration. To paraphrase Oscar Wilde:

“The definition of a cynic is someone that knows the price of everything, and the value of nothing!”

So, let’s reprise. You have performed your Mainframe Capacity Planning activity, considered historical SMF data for CPU usage, maybe including the R4HA metric, factored in additional new and growth business requirements, refined the capacity plan by using the zPCR tool, perhaps with data input from CPU MF, and have now identified your optimum zSeries Mainframe server?

Maybe you should think again, because the numerous IBM MLC software pricing mechanisms could impact your tried and tested Mainframe CPU hardware planning process. Firstly, for MLC software, the unit cost per MSU reduces as the installed MSU capacity increases. In simple terms, this encourages the use of “large container” processing entities, namely LPARs and CPCs. Other AWLC and IWP related considerations further encourage the use of major subsystems (E.g. CICS, DB2, WebSphere, IMS) in larger MSU capacity LPARs and CPCs, to benefit from the lowest unit cost per MSU. Additionally, do you really need to run all software on all processing entities? For example, programming languages (E.g. COBOL, PL/I, HLASM, et al) are not necessarily required in all environments (E.g. Test, Development, Production, et al). It is not uncommon for compile and link-edit functions to be processed in Development environments only, while only run-time libraries are required for Production. These “what if” scenarios generated by the numerous IBM MLC software pricing mechanisms must be considered, ideally by an internal resource with the requisite skills and experience.
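As a simple illustration of this “large container” tiering effect, consider the Python sketch below; the tier boundaries and per-MSU prices are invented purely to show the shape of the curve and are not actual IBM AWLC price points:

# Invented price tiers, purely illustrative; not actual IBM AWLC price points.
tiers = [(45, 100.0),           # first 45 MSU at the highest unit price
         (130, 70.0),           # next 130 MSU at a lower unit price
         (260, 50.0),           # next 260 MSU lower still
         (float("inf"), 35.0)]  # everything above that at the lowest unit price

def monthly_cost(msu):
    """Accumulate cost across the tiers consumed by the given MSU figure."""
    cost, remaining = 0.0, msu
    for band, price in tiers:
        used = min(remaining, band)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Two 200 MSU footprints priced separately versus one aggregated 400 MSU footprint
split = 2 * monthly_cost(200)
combined = monthly_cost(400)
print(f"Split: {split:,.0f} vs combined: {combined:,.0f} "
      f"({combined / 400:.1f} per MSU when aggregated)")

The aggregated footprint attracts a lower unit cost per MSU, which is precisely why larger MSU capacity LPARs and CPCs are encouraged by these pricing mechanisms.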

Today, who is performing this Mainframe Software Cost Control activity in your organization? Is it an internal resource with the requisite skills, an independent 3rd party, IBM or nobody? One must draw one’s own conclusions as to whether any of the parties that could perform this vital activity has a vested interest, and thus a potential conflict of interest…