IBM Mainframe Capacity Planning & Software Cost Control Interaction?

The cost of IBM Mainframe software is an extensive, multi-faceted subject that can generate much discussion. The importance of optimizing Mainframe software costs is without doubt, as it is the most significant Mainframe TCO component, having increased from ~25% to 50%+ of overall expenditure over the last decade or so. Conversely, Mainframe server hardware costs have largely stabilized at ~15-25% of TCO in the same time period. However, Mainframe Capacity Planning activities evolved over several decades during which hardware costs were the primary concern and the number of IBM Mainframe software pricing mechanisms was limited. Of course, in the last decade or so, IBM Mainframe software pricing mechanisms have evolved, with a plethora of acronyms, ESSO, ELA, IPLA, OIO, PSLC, WLC, VWLC, AWLC, IWP, naming but a few!

Can each and every IBM Mainframe user clearly articulate their Mainframe Capacity Planning and Software Cost Control policies, and which person in their organization performs these very important roles? Put another way, not forgetting Software Asset Management (SAM), should there be a Software Cost Control specialist for IBM Mainframe Data Centres…

If we consider the traditional Mainframe Capacity Planning role, put very simply, this process typically produces a 3-5 year rolling plan, based upon historical data and future capacity requirements. These requirements can then be modelled against the underlying hardware server (E.g. z10, z114/z196, zEC12), identifying resource requirements accordingly, namely the number of General Processors (GPs), Specialty Engines (E.g. zIIP, zAAP, IFL), Memory, Channels, et al. Previously, up until ~2005, customer requirements would be articulated to IBM, cross-referenced with the LSPR (Large System Performance Reference) and an optimum hardware configuration derived. Since ~2005, IBM has made their zPCR (Processor Capacity Reference) tool Generally Available, allowing the Mainframe customer to “more accurately” capacity plan for IBM zSeries servers.
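Stripped of the tooling, the core of that traditional process can be reduced to a simple projection of future peak capacity against historical growth plus known business events. The sketch below is purely illustrative; the starting capacity, growth rate and new workload figures are hypothetical assumptions, not taken from any real site:

```python
# Illustrative only: project peak MSU demand over a rolling plan horizon.
# The starting capacity, growth rate and new workloads below are hypothetical.

BASE_MSU = 400               # current peak utilization (MSU), assumed
ORGANIC_GROWTH = 0.08        # ~8% per annum historical trend, assumed
NEW_WORKLOAD_MSU = {2: 50}   # e.g. a new application landing in plan year 2, assumed
PLAN_YEARS = 5

def project_demand(base, growth, new_workloads, years):
    """Return (year, projected peak MSU) for each year of the rolling plan."""
    demand, msu = [], base
    for year in range(1, years + 1):
        msu *= (1 + growth)                  # organic growth on last year's peak
        msu += new_workloads.get(year, 0)    # step change for new business
        demand.append((year, round(msu)))
    return demand

for year, msu in project_demand(BASE_MSU, ORGANIC_GROWTH, NEW_WORKLOAD_MSU, PLAN_YEARS):
    print(f"Year {year}: ~{msu} MSU required")
```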

Other enhancements for more accurately determining the ideal zSeries server include sizing based upon actual customer usage data generated by the CPU MF facility, introduced with the z10 server. CPU MF refines the zPCR process with real-life customer usage data, as opposed to the standard simulated LSPR workloads.

In summary, the Mainframe Capacity Planning process has evolved to include new tools and data to refine the process, but primarily the process remains the same: size the hardware based upon historical data and future business requirements. However, what about Mainframe software usage and therefore cost interaction?

Each and every IBM Mainframe user relies heavily on the IBM Operating System (I.E. z/OS, z/VM, z/VSE, zLinux, et al) and primary subsystems (I.E. CICS, DB2, MQ, IMS, et al). Some Mainframe users might deploy alternative database and transaction processing (TP) solutions, but a significant amount of Mainframe software cost is for IBM software products. In the late-1990’s, IBM introduced their PSLC (Parallel Sysplex License Charges) mechanism, which offered lower aggregate (MSU) pricing for major IBM software products, based upon an eligible configuration (E.g. Resource Sharing). This pricing mechanism required no software cost control effort; quite the opposite, implementing PSLC delivered a significant cost benefit!

In 2000 IBM announced Workload License Charges (WLC), which allowed users to pay for software based upon the size of the workload, as opposed to the capacity of the machine; the first signs of sub-capacity pricing. In 2001, the ability to deploy eligible IBM software on a “pay for what you use” basis arrived, via the Variable Workload License Charge (VWLC) mechanism. Put very simply, a Rolling 4 Hour Average (R4HA) MSU metric applies for eligible IBM software products, where software is charged based upon the peak MSU usage during a calendar month. The Mainframe user pays for VWLC software based upon the R4HA or the Defined Capacity (Sub-Capacity vis-à-vis Soft Capping), whichever is lower.
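To make the R4HA mechanics concrete, here is a minimal sketch of how the billable MSU figure for a month might be derived, assuming the usual 5-minute utilization samples; the sample data and the 450 MSU Defined Capacity are hypothetical:

```python
# Minimal illustration of the R4HA / Defined Capacity calculation.
# The MSU samples and the defined capacity below are hypothetical.

SAMPLES_PER_4H = 48  # 4 hours of 5-minute utilization samples

def peak_r4ha(msu_samples):
    """Peak rolling 4-hour average MSU across a month of 5-minute samples."""
    peak = 0.0
    for i in range(SAMPLES_PER_4H, len(msu_samples) + 1):
        window = msu_samples[i - SAMPLES_PER_4H:i]
        peak = max(peak, sum(window) / SAMPLES_PER_4H)
    return peak

def billable_msu(msu_samples, defined_capacity=None):
    """VWLC billing basis: the lower of the peak R4HA and any defined capacity."""
    peak = peak_r4ha(msu_samples)
    return min(peak, defined_capacity) if defined_capacity else peak

# Hypothetical month: quiet baseline with a sustained 6-hour batch peak.
samples = [300.0] * 2000 + [550.0] * 72 + [300.0] * 2000
print(f"Peak R4HA: {peak_r4ha(samples):.0f} MSU")
print(f"Billable (Defined Capacity 450 MSU): {billable_msu(samples, 450):.0f} MSU")
```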

From this point forward, and for the avoidance of doubt, for the last 10 years or so there has been a mandatory requirement to consider the impact of IBM WLC software costs when performing the Mainframe Capacity Planning activity. One must draw one’s own conclusions as to whether each and every Mainframe user has the skills to know the intricacies of the various software (E.g. IPLA, OIO, PSLC, WLC, et al) pricing models when upgrading their zSeries server.

With the IBM Mainframe Charter in 2003, IBM stated that they would deliver a ~10% technology dividend benefit, loosely meaning that each new Mainframe technology model (I.E. z9, z10) would carry an MSU rating ~10% lower for the same system capacity level, when compared with the previous technology. Put another way, a potential ~10% software cost reduction for executing the same workload on a newer technology IBM Mainframe; so encouraging users to upgrade. However, the ~10% software cost reduction is subjective, because a higher installed MSU capacity dictates lower per MSU software costs…

With the introduction of the z196 and z114 Mainframe servers, the technology dividend was delivered in the form of a new software license charge, AWLC (Advanced Workload License Charges), where the lower software costs only applied if this new pricing model was deployed. A similar story applies for the zEC12 server: the AWLC pricing model is required to benefit from the lower software costs! If these software pricing evolutions were not enough, in 2011 IBM introduced the Integrated Workload Pricing (IWP) mechanism, offering the potential for lower software pricing based upon workload type, namely WebSphere eligible workloads. Finally, and as previously alluded to, as MSU capacity increases, the related cost per MSU for software decreases, so there are many IBM software pricing mechanisms to consider when adding Mainframe CPU capacity. So once again, who is the IBM Mainframe Software Cost Control specialist in your organization?
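The “cost per MSU decreases as capacity increases” point is easiest to see with a tiered price calculation. The bands and £ per MSU figures below are entirely invented, purely to illustrate the shape of MLC-style pricing, and are not actual IBM price points:

```python
# Hypothetical tiered MLC-style pricing: the bands and £/MSU figures are
# invented for illustration only, not IBM's actual price points.

PRICE_BANDS = [              # (MSU up to, price per MSU per month)
    (45, 1000.0),
    (175, 600.0),
    (315, 400.0),
    (float("inf"), 250.0),
]

def monthly_charge(msu):
    """Total monthly charge for a product billed at `msu`, using tiered bands."""
    charge, prev_limit = 0.0, 0
    for limit, price in PRICE_BANDS:
        band = min(msu, limit) - prev_limit
        if band <= 0:
            break
        charge += band * price
        prev_limit = limit
    return charge

for msu in (100, 300, 600):
    total = monthly_charge(msu)
    print(f"{msu:>4} MSU: £{total:,.0f}/month, £{total / msu:,.0f} per MSU")
```

Running the sketch shows the average unit cost falling from ~£780 per MSU at 100 MSU to ~£417 per MSU at 600 MSU, which is the behaviour that encourages larger aggregated capacity.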

For sure, each and every IBM Mainframe user will engage their IBM account team as and when they plan a Mainframe upgrade, but how much “customer thinking is outsourced to IBM” during this process? Wouldn’t it be good if there were an internal “checks & balances” or due diligence activity that could verify and refine the Mainframe Capacity Plan with IBM software cost control intelligence?

Having travelled and worked in Europe for 20+ years, I know that the peers, colleagues and friends I have encountered will concur with my next observation. The English and Americans might come up with a good idea and perhaps a product, the French are most likely to test that product to destruction and identify numerous new features, while the Germans will write the ultimate technical manual…

zCost Management is a French company that specializes in cost optimization services and solutions for the IBM Mainframe. From an IBM Mainframe Capacity Planning & Software Cost Control Interaction viewpoint, they have developed their CCP-Tool (Capacity and Cost Planning) software solution. This software product bridges the gap between Mainframe Capacity Planning for hardware and the impact on associated IBM software (E.g. WLC, IPLA, et al) costs.

CCP-Tool facilitates medium-term (E.g. 3-5 year) Mainframe Capacity Planning by controlling Monthly License Charges (MLC) evolution, generating cost control policies, optimizing zSeries (E.g. PR/SM) resource sharing and delivering financial management via IBM Mainframe software cost control activity. CCP-Tool integrates with existing data and activities, using SMF Type 70 & 89 records, defining events (I.E. Capacity Requirements, Workload Moves) in the plan, simulating many options, delivering your final capacity plan and periodically (I.E. Quarterly) reviewing and revising that plan. Most importantly, CCP-Tool deploys many algorithms and techniques aligned to IBM software pricing mechanisms, especially WLC and R4HA related.
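Purely as a thought experiment, and emphatically not CCP-Tool’s actual (proprietary) algorithms, the class of question such a tool answers can be sketched as: apply planned events to a baseline MSU profile and re-estimate the MLC exposure for each resulting plan year. All figures and the two-band cost function below are invented:

```python
# Generic illustration of plan-event simulation; not CCP-Tool's algorithms.
# The baseline, events and the two-band tariff are invented for illustration.

def estimate_mlc(msu):
    """Crude monthly MLC estimate (invented two-band tariff): the first 45 MSU
    cost more per MSU, so the average unit cost falls as billable MSU rises."""
    return 45 * 1000.0 + max(0.0, msu - 45) * 550.0

def simulate(baseline_msu, events):
    """Apply planned events (plan year -> MSU delta) and cost each plan year."""
    msu, plan = baseline_msu, []
    for year in sorted(events):
        msu += events[year]
        plan.append((year, msu, estimate_mlc(msu)))
    return plan

# Scenario A: organic growth only; Scenario B: a workload consolidation in year 2.
scenarios = {
    "Growth only": {1: 30, 2: 35, 3: 40},
    "With consolidation": {1: 30, 2: 35 + 60, 3: 40},
}

for name, events in scenarios.items():
    for year, msu, cost in simulate(500, events):
        print(f"{name:>18} | Year {year}: {msu} MSU, ~£{cost:,.0f}/month")
```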

Therefore CCP-Tool delivers a financial management framework via a medium-term Capacity Plan with associated software cost control and zSeries (E.g. PR/SM) resource policies. This enables a balanced view of future Data Centre cost configurations, from both a hardware and a related IBM Mainframe software perspective. Moreover, for those IBM Mainframe users that don’t necessarily have the skills to perform this level of Mainframe cost control, CCP-Tool delivers a low-cost solution to empower the Mainframe customer to engage IBM on an equal footing, at least from a reporting viewpoint. Similarly, for those Mainframe users with good IBM Mainframe software cost control skills, CCP-Tool offers a “checks & balances” capability, delivering that all-important due diligence sanity check! Quite simply, CCP-Tool simplifies the process of reconciling the optimal configuration from both an IBM Mainframe hardware and a related software viewpoint.

Without doubt, if a Mainframe user still deploys a hardware-centric view of the capacity planning activity, without considering the numerous intricacies of IBM Mainframe software pricing, in most cases this could be a significant cost oversight. Put very simply, a low-end IBM Mainframe user of ~150 MSU (1,000 MIPS) might spend ~£1,000,000 per annum just for a minimal configuration of z/OS, CICS, COBOL and DB2 software, so one must draw one’s own conclusions regarding the potential cost savings when deploying the optimal zSeries hardware and associated IBM software configuration. I paraphrase Oscar Wilde:

“The definition of a cynic is someone that knows the price of everything, and the value of nothing!”

So, let’s reprise. You have performed your Mainframe Capacity Planning activity, considered historical SMF data for CPU usage, maybe including the R4HA metric, factored in additional new and growth business requirements, refined the capacity plan by using the zPCR tool, perhaps with data input from CPU MF, and you have now identified your optimum zSeries Mainframe server?

Maybe you should think again, because the numerous IBM MLC software pricing mechanisms could impact your tried and tested Mainframe CPU hardware planning process. Firstly, for MLC software, the unit cost per MSU reduces as the installed MSU capacity increases. In simple terms, this encourages the use of “large container” processing entities, LPARs and CPCs. Other AWLC and IWP related considerations further encourage the use of major subsystems (E.g. CICS, DB2, WebSphere, IMS) in larger MSU capacity LPARs and CPCs, to benefit from the lowest unit cost per MSU. Additionally, do you really need to run all software on all processing entities? For example, programming languages (E.g. COBOL, PL/I, HLASM, et al) are not necessarily required in all environments (E.g. Test, Development, Production, et al). It is not uncommon for compile and link-edit functions to be processed in Development environments only, while only run-time libraries are required for Production. These “what if” scenarios, generated by the numerous IBM MLC software pricing mechanisms, must be considered, ideally by an internal resource with the requisite skills and experience.
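A simple “what if” sketch illustrates the point, using hypothetical LPAR profiles and a deliberately simplified view of sub-capacity charging, in which an MLC product is billed on the combined R4HA of only the LPARs where it actually executes; restricting a compiler to the Development LPAR dramatically reduces its billable MSU:

```python
# "What if" sketch: a sub-capacity MLC product is billed on the combined R4HA
# of the LPARs where it executes. The hourly LPAR profiles are hypothetical.

LPAR_R4HA = {                       # hourly R4HA values (MSU) for a peak day
    "PROD": [420, 450, 480, 460, 430],
    "DEV":  [ 80,  90,  85,  70,  60],
    "TEST": [ 60,  55,  50,  65,  70],
}

def billable_msu(product_lpars):
    """Peak of the combined R4HA across the LPARs where the product executes."""
    hours = len(next(iter(LPAR_R4HA.values())))
    return max(sum(LPAR_R4HA[lpar][h] for lpar in product_lpars) for h in range(hours))

print(f"Compiler deployed on all LPARs : {billable_msu(['PROD', 'DEV', 'TEST'])} MSU billable")
print(f"Compiler restricted to DEV only: {billable_msu(['DEV'])} MSU billable")
```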

Today, who is performing this Mainframe Software Cost Control in your organization? Is it an internal resource with the requisite skills, an independent 3rd party, IBM or nobody? One must draw one’s own conclusions as to whether any of the parties who could perform this vital activity has a vested interest, and thus a potential conflict of interest…

What is my RACF technical policy? Could it be NIST DISA STIG based?

In the late 1980’s I was lucky enough to work at a Mainframe site that was performing early testing for MVS/ESA and DFP Version 3, namely DFSMS. So what, you say! The main ethos of DFSMS was System Managed Storage: the ability to define policies to manage data, which the system (DFSMS) would then implement. Up until this point, there was no easy way of controlling data set allocation and managing storage space.

Conversely, the three mainstream Mainframe security subsystems, in no particular (and so alphabetical) order, ACF2, RACF (Security Server) and Top Secret, have always had the ability to define a security policy and for said policy to be processed as and when the associated resource is accessed. So why is it that so many security risk registers are full of “things to do” from a security viewpoint? Where did it all go wrong?

In the late 1980’s, Guide and SHARE, in Europe anyway, were separate entities, and these organizations had some influence with IBM regarding the direction of various IBM Mainframe technologies. Not much has changed in that regard, although the two organizations have since merged and, for Europe, we now have GSE. From a DFSMS viewpoint, there was a significant amount of user input regarding how DFSMS might be shaped, and the “System Managed Storage” ethos. I wonder whether such user input, or indeed focussed collaboration from Mainframe security gurus, might help with the RACF or Mainframe security technical policy challenge?

Having worked with multiple IT security focussed organizations (E.g. NIST, DoD) over the last few decades, Vanguard Integrity Professionals has been actively involved in creating and evolving the NIST DISA STIG Checklists. These checklists (currently 300+ checks and growing steadily) provide a comprehensive grounding for z/OS RACF policy checking, and seemingly are gaining momentum in being accepted as a good starting point to assist organizations in defining and monitoring their z/OS RACF (ACF2 & Top Secret in the near future) policy.

These DISA STIG checklists contain step-by-step instructions for customer usage, to ensure secure, efficient and cost-effective information security that is fully compliant with recognized security standards, and therefore Mainframe security standards. Being fully compliant with the DoD DISA STIG for IBM z/OS Mainframes, these checklists provide organizations with the necessary procedures for conducting a Security Readiness Review (SRR) prior to, or as part of, a formal security audit.

To increase automation and thus reduce cost, Vanguard has optimized the DISA STIG checking process with their Configuration Manager solution. Configuration Manager can perform Intrusion Detection DISA STIG checks and report findings in just a few hours, instead of the hundreds or thousands of hours it may take using standard methods. Potentially, Configuration Manager enables organizations to easily evolve from periodic compliance reporting to continuous monitoring.
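By way of illustration only (this is not Vanguard’s Configuration Manager, nor real STIG values), automated policy checking essentially amounts to comparing captured security settings against a machine-readable baseline and reporting the deviations. The settings dictionary and baseline rules below are hypothetical examples:

```python
# Simplified illustration of automated policy checking; not Vanguard's product.
# The captured settings and the baseline rules are hypothetical examples.

captured = {                      # hypothetical snapshot of SETROPTS-style values
    "PASSWORD_INTERVAL_DAYS": 90,
    "PASSWORD_HISTORY": 4,
    "INACTIVE_DAYS": 120,
    "PROTECTALL": "FAIL",
}

baseline = {                      # hypothetical STIG-style rules: (operator, required value)
    "PASSWORD_INTERVAL_DAYS": ("<=", 60),
    "PASSWORD_HISTORY": (">=", 10),
    "INACTIVE_DAYS": ("<=", 35),
    "PROTECTALL": ("==", "FAIL"),
}

OPERATORS = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b, "==": lambda a, b: a == b}

def check(captured, baseline):
    """Yield (setting, PASS/FAIL, detail) for each baseline rule."""
    for setting, (op, required) in baseline.items():
        actual = captured.get(setting)
        ok = actual is not None and OPERATORS[op](actual, required)
        yield setting, "PASS" if ok else "FAIL", f"actual={actual}, required {op} {required}"

for setting, status, detail in check(captured, baseline):
    print(f"{status}: {setting} ({detail})")
```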

Maintaining tight control over the security audit and compliance process is a critical imperative for today’s enterprises. To comply, enterprises must show that they have implemented procedures to prevent unauthorized users from accessing corporate and personal data. Even if enterprises have the means to efficiently conduct audits, they often lack the tools necessary to prevent policy and compliance violations from recurring. As a result, security vulnerabilities remain a constant threat, exposing companies to potential sanctions and eroding the confidence of investors and customers.

As a result, the process of meeting compliance standards represents one of IT’s most critical investments. For UK enterprises alone, these standards include the Combined Code issued by the London Stock Exchange (LSE) and the Turnbull Guidance (the Sarbanes-Oxley equivalent for publicly traded companies in the UK), the Data Protection Act 1998 (and, for the public sector, the Freedom of Information Act 2000), the regulations promulgated by the Financial Services Authority (FSA), which has oversight over the various entities that make up the financial services industry, standards set by Basel II, the Privacy and Electronic Communications Regulations 2003, the HMG (UK Government) Security Policy Framework, the Payment Card Industry Data Security Standard (PCI DSS) and various UK criminal and civil laws.

As a consequence, managing security in the Mainframe environment is becoming an increasingly difficult task as the list of challenges grows longer every day. Even the most experienced Security Administrators can labour under the workload as security systems increase in size and networks grow in density.

So where does the Mainframe Security Administrator start to make sense of how to achieve security compliance for their particular business?

Although there are many security and compliance regulatory requirements with supporting policy frameworks, none of these high-level mandates actually drills down to the technical level to provide RACF or equivalent Mainframe (ACF2, Top Secret) policy guidelines!

There are synergies between various global organizations that define security standards. This is certainly true for NIST and ISO/IEC. Page viii of the NIST SP-800-53 policy states:

NIST is also working with public and private sector entities to establish specific mappings and relationships between the security standards and guidelines developed by NIST and the International Organization for Standardization and International Electrotechnical Commission (ISO/IEC) 27001, Information Security Management System (ISMS).

Furthermore, the seemingly ubiquitous Annex A of ISO 27001 is also cross-referenced by this NIST SP-800-53 policy, and the various controls and monitoring points are cross-referenced, where, largely, the requirements of the NIST standard are mapped to the ISO/IEC standard, and vice versa. Please refer to Appendix H of the NIST SP-800-53 policy, which cross-references NIST SP-800-53 with ISO/IEC 27001 (Annex A) controls.

Put simply, one must draw one’s own conclusions as to the robustness of NIST SP-800-53 vs. ISO/IEC 27001, but both seem to share common ground as robust and acceptable security standards. So maybe Mainframe users all over the world can define and deploy a generic and robust baseline Mainframe technical security policy, vis-à-vis the NIST DISA STIG checklists…

We must also recognize the IBM Health Checker for z/OS, which also has the ability to perform automated RACF policy checks. This facility includes some ready-to-go checks for standard system services, in conjunction with a facility that allows the user to define their own policy checking rules. Without doubt, the IBM Health Checker for z/OS is a worthy resource that should be leveraged, but for RACF, if each Mainframe user defines their own policy checking rules, there is the possibility of a significant duplication of effort. For the avoidance of doubt, although RACF resource naming standards might be unique to each and every Mainframe user, there is a commonality in the ISV (E.g. ASG, BMC, CA, Compuware, IBM, SAS, et al) software subsystems and products they deploy. If only we could all benefit from previous lessons learned by “standing on the shoulders of giants”!

Perhaps a realm of opportunity exists. There are many prominent Mainframe security giants actively involved today, including the authors of ACF2, Vanguard Software and Consul (Tivoli zSecure), naming but a few. Is it possible that there could be one common standard that might be used as a technical policy template, based upon the ubiquitous 80/20 rule? Deploying this baseline would deliver 80% of the work required for 20% of the effort, with each Mainframe customer simply customizing this policy as per the resource naming standards in their Mainframe Data Centre. Equally, the user would have the ability to contribute to this template, perhaps for a niche software product requiring security policy checks that is deployed in maybe only tens of Mainframe customer sites globally.

Vanguard has clearly put a lot of effort into evolving the DISA STIG resource for the Mainframe, and IBM has their RACF Health Checker, but what about one overseeing independent organization, which could benefit from the experience of Mainframe security specialists and, moreover, the real-life field experience of Mainframe users globally, implementing and refining these standards? Wasn’t that the essence and spirit of Guide and SHARE several decades ago, listening to Mainframe users and evolving Mainframe technology accordingly? Of course SHARE in the USA, Guide Share in Europe, and IUGC and APUGC in the APAC region still perform this function admirably, but seemingly, with the NIST DISA STIG resource, we already have a great baseline to leverage.

What is the size and shape of this potential task, ideally to identify each z/OS software product that has specific interaction with the security subsystem (I.E. ACF2, RACF, Top Secret), typically via resource profiles? For a z/OS software product to be developed, the ISV will have interacted with IBM, initially via their PartnerWorld resource for product development, and eventually via the IBM Global Solutions Directory from a Marketing viewpoint. As of Q4 2012, the IBM Global Solutions Directory contains ~1,800 ISVs with z/OS based software products.

However, recognizing there are already good security resource checklist templates in existence, vis-à-vis the solid foundation primarily provided by Vanguard via the DISA STIG checklists, the best organizations to add to these DISA STIG checklists are the ISVs themselves. Each ISV knows its products best, having written the code and the supporting documentation for security-related controls; so a modicum of effort from each ISV whose products perform specific security resource checking seems the best way forward.

In 2003 IBM launched the Mainframe Charter initiative, demonstrating their commitment to the Mainframe platform and adopting nine principles organized under the pillars of innovation, value and community. Although this was an IBM initiative, wouldn’t it be great for Mainframe ISVs to proactively be part of this global Mainframe community, assisting their Mainframe customers in simplifying the activity of implementing and monitoring their Mainframe security technical policy? Not every ISV will have software products with specific security resource interaction, and therefore not every product in the ISV software portfolio will require a security checklist. The amount of work per software product to create a template might only be several hours, so could the ISV produce these checklists as part of their day-to-day customer support activities?

Is it possible that globally we can all participate and collaborate in a focussed and Mainframe security centric group, to define a technical policy template that will assist all Mainframe customers in satisfying regulatory compliance mandates? No one of us is as good as all of us…