In the last few weeks I have encountered a couple of scenarios regarding Mainframe server upgrades that have surprised me somewhat. The first was at the annual UK GSE conference during November 2013, where one of the largest UK Mainframe customers stated “we had problems regarding the capacity sizing of the IBM Mainframe server installed and our vendor was not very helpful in resolving this challenge with us”. The second was a European customer with 2 aging z9 BC servers deployed, who had asked their IBM Mainframe server vendor for an upgrade quotation. The vendor duly replied with a like-for-like quotation, 2 new zBC12 servers, which at first glance seemed to be a valid configuration.
The one thing these 2 vastly different Mainframe customers, the first very large, the second quite small, had in common is that inadvertently they didn’t engage their respective vendors with the best set of questions or indeed terms of reference; meanwhile the vendors might say “ask me no questions and I’ll tell you no lies”…
For the 2nd scenario, I was asked to quickly review the configuration provided. My first observation was that both workloads could be consolidated onto 1 server. The customer confirmed there was no business reason to have 2 servers; the arrangement was historic, and there wasn’t even a SYSPLEX between the 2 z9 BC servers. The historic reason for the 2 z9 BC servers was the number of General Purpose (GP) engines supported. My second observation was that software licensing could be simplified and optimized with aggregated MSU and use of the AEWLC pricing model. So within ~1 hour, the customer had identified significant potential to dramatically reduce costs.
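The licensing benefit of aggregation comes from tiered sub-capacity pricing, where the marginal price per MSU falls as the footprint grows. The sketch below illustrates the principle only; the tier boundaries and rates are purely hypothetical examples, not actual AEWLC price points.

```python
# Hedged sketch: why aggregating MSUs under a tiered (sub-capacity style)
# pricing model can cost less than licensing two separate footprints.
# The tier table below is a hypothetical example, not real IBM pricing.

TIERS = [(45, 10.0), (130, 7.0), (float("inf"), 4.0)]  # (MSUs in band, price per MSU)

def monthly_charge(msus: float) -> float:
    """Price a workload against the hypothetical tier table."""
    charge, remaining = 0.0, msus
    for band, rate in TIERS:
        used = min(remaining, band)
        charge += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return charge

separate = monthly_charge(100) + monthly_charge(100)   # 2 servers, priced independently
aggregated = monthly_charge(200)                       # 1 consolidated server

print(f"separate:   {separate:.0f}")    # both 100 MSU workloads pay full top-tier rates
print(f"aggregated: {aggregated:.0f}")  # the second 100 MSUs land in cheaper bands
```

With these example rates, 2 separate 100 MSU footprints each pay the expensive first tier in full, whereas the aggregated 200 MSU footprint pays it only once, so the consolidated charge is lower.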
We then suggested an analysis of their configuration with 2 software products, PerfTechPro for z/OS and zDynaCap. They already had the SMF data, so using the simulation abilities of these products, the customer quickly confirmed they could consolidate their workloads onto 1 zBC12, deploy zIIP processors to offload ~15% CPU usage from GP, and control MSU allocation with zDynaCap, saving another ~10% of CPU. For this customer, a small investment in software products reduced their server upgrade costs by ~€400,000 in year 1, with similar software savings, each and every year forever more. Although they didn’t have the skills in-house from a Mainframe Capacity Planning and software licensing viewpoint, this customer did eventually ask the right questions, and the rest as they say is history!
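The ~15% zIIP offload and ~10% zDynaCap figures above are the customer's simulation estimates; treating them as sequential multiplicative reductions, which is my simplifying assumption, the compound effect on GP capacity can be sketched as:

```python
# Hedged sketch of the compound capacity effect described above. The ~15%
# zIIP offload and ~10% zDynaCap saving come from the article; applying
# them as sequential multiplicative reductions is my assumption, and the
# 100 MSU baseline is a hypothetical round number.

baseline_msu = 100.0                       # hypothetical GP capacity before tuning

after_ziip = baseline_msu * (1 - 0.15)     # ~15% of GP work offloaded to zIIP
after_dynacap = after_ziip * (1 - 0.10)    # zDynaCap trims another ~10%

total_reduction = 1 - after_dynacap / baseline_msu
print(f"GP capacity needed: {after_dynacap:.1f} MSU")
print(f"total GP reduction: {total_reduction:.1%}")   # ~23.5%, not a simple 25%
```

Note the two savings compound rather than add: 0.85 × 0.90 leaves 76.5% of the baseline, a ~23.5% reduction rather than 25%.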
No man or indeed Mainframe customer is an island, so don’t be afraid to ask questions of your vendors or business partners!
From a cost viewpoint, both long-term (TCO) and day 1 (TCA), the importance of deploying the optimum Mainframe server configuration from a capacity viewpoint cannot be overstated, both in terms of hardware costs and, more importantly, associated software costs. It therefore follows that Mainframe Capacity Planning and Mainframe Software Licensing knowledge is imperative, but I’m not so sure there are that many Mainframe customers that have clearly defined job roles for such disciplines.
To generalize, always a dangerous thing, typically the larger Mainframe customer does have skilled and seasoned personnel for the Capacity Planning discipline, while the smaller Mainframe user might rely on a generic Systems Programmer or maybe even rely on their vendor to size their Mainframe servers. From a Mainframe software licensing viewpoint, there seems to be no general rule-of-thumb, as sometimes the smaller customer has significant knowledge and experience, whereas the larger Mainframe customer might not. Bottom Line: If the Mainframe customer doesn’t allocate the optimum capacity and associated software licensing metrics for their installation, problems will arise, probably for several years or more!
Are there any simple solutions or processes that can assist Mainframe customers?
The first and most simple observation is to engage your vendor and ensure that they generate the final Mainframe server configuration that is used for Purchase Order activities. For sure, the customer will have their capacity plan and perhaps a “draft” server configuration, but even in these instances, the vendor should QA this data, refining the bill of materials (E.g. Hardware) accordingly. Therefore an iterative process occurs between customer and vendor, but the vendor is the one that confirms the agreed configuration is fit for purpose. In the unlikely event there are challenges in the future, the customer can work with their vendor to find a solution, as opposed to the example stated above where the vendor left their customer somewhat isolated.
The second observation is to leverage the tools and processes that are available, both generally available and internal to vendor pre-sales personnel. Seemingly everybody likes something for nothing, and so the ability to deploy “free” tools will appeal to most.
For Mainframe Capacity Planning, in addition to the standard in-house processes, whether bespoke (E.g. SAS, MXG, MICS based) or a packaged product, there are other additional tools available, primarily from IBM:
zPCR (Processor Capacity Reference) is a generally available Windows PC based tool, designed to provide capacity planning insight for IBM System z processors running various z/OS, z/VM, z/VSE, Linux, zAware, and CFCC workload environments on partitioned hardware. Capacity results are based on IBM’s most recently published LSPR data for z/OS. Capacity is presented relative to a user-selected Reference-CPU, which may be assigned any capacity scaling-factor and metric.
zCP3000 (Performance Analysis and Capacity Planning) is an IBM internal tool, Windows PC based, designed for performance analysis and capacity planning simulations for IBM System z processors, running various SCP and workload environments. It can also be used to graphically analyse logically partitioned processors and DASD configurations. Input normally comes from the customer’s system logs via a separate tool (I.E. z/OS SMF via CP2KEXTR, VM Monitor via CP3KVMXT, VSE CPUMON via VSE2EDF).
zPSG (Processor Selection Guide) is an IBM internal tool, Windows PC based, designed to provide sizing approximations for IBM System z processors intended to host a new application, implemented using popular, commercially available software products (E.g. WebSphere, DB2, ODM, Linux Apache Server).
zSoftCap (Software Migration Capacity Planning Aid) is a generally available Windows PC based tool, designed to assess the effect on IBM System z processor capacity, when planning to upgrade to a more current operating system version and/or major subsystems versions (E.g. Batch, CICS, DB2, IMS, Web and System). zSoftCap assumes that the hardware configuration remains constant while the software version or release changes. The capacity implication of an upgrade for the software components can be assessed independently or in any combination.
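When several software components are upgraded at once, their capacity deltas are often combined multiplicatively rather than by simple addition. The sketch below illustrates that idea; the delta values are invented examples, and the multiplicative combination is my simplifying assumption, not zSoftCap’s documented algorithm.

```python
# Hedged sketch: combining per-component software-upgrade capacity deltas.
# The component names and delta values are hypothetical examples, and the
# multiplicative combination is an assumption for illustration only.

upgrade_deltas = {           # fractional extra CPU per component upgrade
    "z/OS": 0.03,            # E.g. a new operating system release costs ~3%
    "CICS": 0.02,            # subsystem upgrade costs ~2%
    "DB2":  0.01,            # subsystem upgrade costs ~1%
}

combined_factor = 1.0
for component, delta in upgrade_deltas.items():
    combined_factor *= (1 + delta)       # each assessed independently, then compounded

print(f"combined capacity factor: {combined_factor:.4f}")
```

This is why assessing components "independently or in any combination" matters: three small deltas of 3%, 2% and 1% compound to slightly more than their 6% sum.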
zBNA (System z Batch Network Analysis) is a generally available Windows PC based tool, designed to understand the batch window, for example:
- Perform “what if” analysis and estimate the CPU upgrade effect on batch window
- Identify job time sequences based on a graphical view
- Filter jobs by attributes like CPU time / intensity, job class, service class, et al
- Review the resource consumption of all the batch jobs
- Drill down to the individual steps to see the resource usage
- Identify candidate jobs for running on different processors
- Identify jobs with speed of engine concerns (top tasks %)
BWATOOL (Batch Workload Analysis Tool) is an IBM internal tool, Windows PC based, designed to analyse SMF type 30 and 70 data, producing a report showing how long batch jobs run on the currently installed processor. Both CPU time and elapsed time are reported. Similar results can then be projected for any IBM System z processor model. Basic questions that can be answered by BWATOOL include:
- What jobs are good candidates for running on any given processor?
- How much would jobs benefit from running on a faster processor?
- For jobs within a critical path (batch window), what overall change in elapsed time might occur with a new processor?
zMCAT (Migration Capacity Analysis Tool) is an IBM internal tool, Windows PC based, designed to compare the performance of production workloads before and after migration of the system image to a new processor, even if the number of engines on the processor has changed. Workloads for which performance is to be analysed must be carefully chosen because the power comparison may vary considerably due to differing use of system services, I/O rate, instruction mix, storage reference patterns, et al. This is why customer experiences can differ from an internal throughput ratio (ITRR) based on LSPR benchmark data.
zTPM (Tivoli Performance Modeler) is an IBM internal tool, Windows PC based, designed to let you build a model of a z/OS based IBM System z processor, and then run various “what if” scenarios. zTPM uses simulation techniques to let you model the impact of changes on individual workload performance. zTPM uses RMF or CMF reports as input. Based on these reports, zTPM can create summary charts showing LPAR as well as workload utilization. An automated Build function lets you build a model that represents the system for any reporting interval. Once the model is built, you can make changes to see the impact on workload performance. zTPM is also available as an IBM software product offering.
Therefore there are numerous tools available from IBM to assist their customers in determining optimum Mainframe server capacity requirements. Some of these tools are generally available without engaging the IBM account team, but others are internal to IBM, and for that reason alone, Mainframe customers must engage their IBM Mainframe account team to participate in their capacity planning activities. Additionally, as the only supplier of Mainframe servers, IBM have a wealth of knowledge, and indeed a responsibility and generally a willingness, to assist their customers in deploying the right Mainframe server configuration from day 1.
As a customer, don’t be afraid to engage external 3rd parties to perform a sanity check of your thinking and activities, clearly including IBM, as they will be fulfilling your IBM Mainframe server order. However, consider engaging other capacity/performance and software licensing specialists, as their experience incorporates many customers, as opposed to an insular view. Moreover, such 3rd parties probably utilize their own software tools or products to assist in this most important of disciplines.
In conclusion, as always, the worst question is the one not asked, and for this most fundamental of processes, not collaborating with your vendor and the wider community might leave you as an individual exposed and isolated, and your company exposed to the consequences of an undersized or oversized Mainframe server configuration…