IBM Z Mainframe Pre-Production Testing: Spring Into Stress Testing via zBuRST

For those of us in the Northern Hemisphere it’s been another long & cold Winter & for many, a time of pandemic lockdown.  As we enter Spring, we often associate this annual season with hope, new life & perhaps opportunity.  Henry Wadsworth Longfellow once wrote “If Spring came but once in a century, instead of once a year, or burst forth with the sound of an earthquake, and not in silence, what wonder and expectation would there be in all hearts to behold the miraculous change”!  Let’s not get carried away, but I have recently worked with an IBM Z customer to finally perform a Pre-Production full workload test via the IBM Z Business Resiliency Stress Test (zBuRST) solution…

In an ideal world, zBuRST would offer a much-needed solution for all IBM Z Mainframe users with limited resource or budget to perform Pre-Production full workload testing activities.  However, in reality, there are some significant qualification caveats, primarily a minimum of 10,000 MIPS of installed workload capacity & the need for latest generation z14 or newer Mainframe servers.  As with anything in business or indeed life, if you don’t ask, you will never know & your local IBM account team may offer some flexibility from an installed MIPS viewpoint.

IBM Z Business Resiliency Stress Test (zBuRST) is a solution that enables the use of spare IBM Z server physical resources to stress test changes at Production workload scale, allowing qualitative & quantitative validation of any Production change to safeguard the performance & resilience profile of IBM Z mission critical workloads.  For the avoidance of doubt, a Pre-Production test can be verified with a minimal data subset for qualitative purposes, but only a 100%+ data quantitative stress test will verify the SLA & KPI metrics required for a mission critical workload.  zBuRST only supports Pre-Production (DevTest) environments, which could include a GDPS internal environment, or a 3rd party DR supplier.  However, zBuRST cannot be used for any DR activity, testing or real-life invocation.  Hopefully most IBM Z Mainframe users are savvy & have included some flexibility in their 3rd party DR provision contracts, allowing for periodic use of such facilities, not solely for DR purposes.  This is not an unusual requirement & if you rely upon a 3rd party provider for IBM Z resilience, work with them to evolve your IBM Z resource provision service contracts accordingly.

From a big picture viewpoint, zBuRST reduces change risk, safeguarding business resiliency by enabling the detection and resolution of abnormalities and defects in a Pre-Production environment, before they inevitably manifest as business outages, disruptions or slowdowns:

  • For IBM Z users with matching (identical) hardware in a standalone test or DR environment, zBuRST provides the ability to perform load or stress testing of new IBM Z hardware features & upgraded functions.
  • For IBM Z users whose DR sites do not match their Production environment, the zBuRST objective is to enable critical workload testing (E.g. using all available resources to test the mission critical workloads).

From an eligibility viewpoint, if your organization is currently testing with constrained IBM Z resources, preventing adequate Production-scale workload testing, zBuRST improves workload resiliency:

  • Can your business scale reliably & conform to SLA & KPI Metrics during seasonal or ad-hoc peak processing demands (E.g. Year End, Black Friday, Cyber Monday, et al)?
  • Is your business mission critical application impacted by change aversion, with fear of disrupting Production stability?
  • Are your agile DevOps aspirations hampered by a legacy waterfall application development approach, taking too long to adequately test changes, or to introduce new features & functions for Production workloads?
  • Do elongated Production outages (I.E. Downtime) come at an excessive or prohibitive business cost?
  • Is it too complex to provision adequate local or 3rd party IBM Z resources for large scale volume or integration tests?

The zBuRST solution has a number of prerequisites & the primary considerations are:

  • zBuRST is an extension of the IBM Z Application Development and Test Solution (DevTest Solution).
  • zBuRST Tokens are discounted at 80% from the cost of On/Off CoD capacity.
  • zBuRST can be purchased for systems with a minimum of 10,000 installed MIPS, for up to 50-100% of Production capacity.  All MIPS capacity must reside in the same country.
  • zBuRST pre-paid tokens can be purchased up to 100% of the additional capacity needed to support Production scale stress testing.
  • zBuRST tokens allow for up to 15 days of testing; tokens can be activated for any 15 calendar days, whether consecutive or not (E.g. perform n stress tests of n days duration).
  • zBuRST tokens expire 5 years from the IBM Z server LICC “Withdrawal from Marketing” date.
  • For DevTest Solutions, zBuRST capacity can be purchased to increase the size of the DevTest environment up to the equivalent number of Production MIPS.
  • For DR machine usage, zBuRST tokens can be purchased up to the equivalent number of Production MIPS.
  • zBuRST tokens can only be installed & exclusively used on IBM Z hardware owned by the IBM Z user (customer); zBuRST is not available to 3rd party IBM Z resource service providers.
  • zBuRST tokens are pre-paid On/Off CoD LIC records.  There can only be one On/Off CoD record active at a time.  Post-paid On/Off CoD LIC records & zBuRST tokens cannot be active at the same time on the same machine.  There cannot be mixing of pre-paid & post-paid On/Off CoD LIC records.

zBuRST can deliver greater certainty & benefit for an IBM Z organization via:

  • Change risk reduction via Production workload stress testing, increasing business resiliency, customer satisfaction & operational efficiency.
  • Faster delivery of new business features & functions at reduced risk, enabling an agile DevOps application change environment.
  • Empowering IT personnel to safely test changes, at Production workload scale, in a DevTest environment, identifying problems or anomalies that typically only occur at scale.
  • Higher ROI for DR resource usage (E.g. Use for stress testing, not just for DR testing).
  • Increased & comprehensive application testing capabilities for a lower cost.

When working with my customer over the last few months, the real-life lessons learned were:

  • Collaborate with the 3rd party IBM Z resource supplier, to safeguard the use of their IBM Z server on a usage-days basis, as opposed to a DR-testing-only basis.  For the avoidance of doubt, contract for n days, where those n days could be used for any combination of Pre-Production testing & DR usage.
  • Engage with all ISV organizations from an FYI viewpoint, informing them of this DevTest approach, where their software will be used for Pre-Production testing purposes, allowing them to safely generate temporary software license codes accordingly, as & if required.
  • Work really closely with your IBM account team to find a win-win situation for all; this customer was a ~9,000 MIPS user.  That win-win could be the provision of anticipated White Space CPU capacity by IBM, or a recognition that for a committed IBM Z Mainframe user, the 10,000 MIPS watermark is just too high.
  • Educate your Operations, Applications & Business units on the zBuRST option.  Some IBM Z users might have been restricted for years if not decades, not being able to perform a 100% data & CPU resource Pre-Production workload test.  The brainstorming, collaboration & goodwill that manifests itself is one of those few occasions in IT where the users of your IT services are happy to be an integral part of the change process!

My final observation is a reflection on the last few months of my day-to-day activities.  For 2-3 days per week, I have been combining IT work with being “Captain Clipboard” at a local UK COVID-19 vaccination centre, which in itself, has been so rewarding.  Seeing the relief of people, especially those of a mature age, perhaps infirm, feeling they can be a part of the wider community again, is priceless.  The parallels are obvious; zBuRST can allow those IBM Z users previously prohibited from performing 100% data & CPU Pre-Production testing activities the opportunity to advance their business.  However, unlike the COVID-19 vaccination, which for the fortunate developed countries is available to all citizens, zBuRST does have some usage restrictions.  Perhaps it’s up to the wider IBM Z user community to encourage IBM to revisit & modify their approach, perhaps reducing the MIPS capacity requirement to 5,000 MIPS.  Wherever you’re based globally, if you’re a member of SHARE (USA) or GSE (Europe), et al, maybe reach out to your Large Systems representatives & see if the global collective of IBM Z user organizations can encourage IBM to evolve this opportunity, enabling zBuRST solution usage for a larger majority if not all IBM Z Mainframe users.

Simplifying Db2 for z/OS CPU Optimization: Eradicating Inefficient SQL Processing

Without doubt the IBM Z Mainframe server is recognised as the de facto choice for storing mission critical System of Record (SOR) data in database repositories for 92 of the top 100 global banks, 23 of the top 25 global airlines, the top 10 global insurers & ~70% of all Fortune 500 companies. ~80% of mission critical data is hosted by IBM Z Mainframe servers, processing 30+ Billion transactions per day, including ~90% of all credit card transactions. This data is accessed by ~1.3 Million CICS transactions per second, compared with a Google (mostly search) processing rate of ~70,000 transactions per second. Interestingly enough, despite processing so many mission critical transactions, the IBM Z Mainframe server platform is only accountable for ~6.2% of global IT spend. One must draw one’s own conclusions as to why some IT professionals perceive the IBM Z Mainframe server as being a legacy platform, not worthy of consideration as a strategic IT server platform…

The digital transformation has delivered an exponential growth of data, typically classified as Cloud, Mobile & Social based. This current & ever-growing data source requires intelligent analytics to deliver meaningful business decisions & agile application software delivery to gain competitive edge. This digital approach can sometimes deliver a myriad of micro business application changes, personalised for each & every customer, often delivering “pop-up” applications…

IBM Z Mainframe software costs are often criticized as being a major barrier to maintaining or indeed commissioning the platform. IBM have tried to minimize these costs with numerous sub-capacity pricing options over the last 30 years or so, but this is perceived by many as being overly complicated; although with a modicum of knowledge, a specialized personnel resource can easily control software costs. All that said, IBM have introduced Tailored Fit Pricing for IBM Z, in an attempt to simplify software cost management. A recent blog reviewed the Tailored Fit Pricing for IBM Z offering & whether or not you decide this IBM Z pricing mechanism is suitable for your organization, optimizing IBM Z CPU MSU/MIPS usage is mandatory. Recognizing that the IBM Z Mainframe server is the de facto database server for System of Record data, primarily via the Db2 subsystem, clearly optimizing Db2 CPU usage, whether for OLTP transactions, typically via CICS, or the batch window, has been & always will be, worthwhile…

All too often, many IT disciplines can be classified with a generic 80/20 rule & typically data can be classified accordingly, where 80% of data is accessed 20% of the time & 20% of data is accessed 80% of the time. The challenge with such a blunt Rule of Thumb (ROT) is that it’s static, but it’s a good starting point. Ideally for any large data source, there would be a dynamic sampling mechanism that would identify the most active data, loading this into the highest speed memory resource to reduce I/O access times & therefore CPU usage. Dynamic management of such a data buffer would render the 80/20 rule extraneous to requirements, as each & every business has their own data access profile. However, a simple cost benefit & therefore Proof of Value (POV) analysis could ensue.
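As a purely conceptual illustration of such a dynamic sampling mechanism (rather than a static 80/20 split), consider the following minimal Python sketch; the access identifiers, sample stream & cache size are hypothetical & this does not represent any specific Db2 facility:

```python
# Minimal sketch of dynamic "hot data" sampling; illustrative only, with
# hypothetical key names & thresholds, not any specific Db2 mechanism.
from collections import Counter

def identify_hot_keys(access_log, cache_slots):
    """Return the most frequently accessed keys from a sampled access stream."""
    frequency = Counter(access_log)          # dynamic sampling, not a static 80/20 assumption
    return [key for key, _ in frequency.most_common(cache_slots)]

# Usage: a sampled stream of data identifiers observed over an interval
sampled_accesses = ["CUST#0042", "RATE#GBP", "CUST#0042", "RATE#GBP", "CUST#9901", "RATE#GBP"]
print(identify_hot_keys(sampled_accesses, cache_slots=2))   # ['RATE#GBP', 'CUST#0042']
```

Each business then derives its own data access profile from its own sampled activity, rather than relying on a generic Rule of Thumb.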

From a Db2 viewpoint, pre-defined structures such as buffer pools offer some relief in storing highly referenced data in a high-speed server memory resource, but this has a finite capacity versus performance benefit, not necessarily using the fastest memory structures available nor dynamically caching the most accessed data. The business considerations of not optimizing Db2 data access are:

  • Elongated Batch Processing: With ever increasing amounts of data to process & greater demands for 24*7*365 availability & real-time access, data access optimization is fundamental for optimized service delivery, often measured by mission critical SLA & KPI metrics. Optimized batch processing is a fundamental requirement for acceptable customer facing business service delivery.
  • Slow Transaction Response Times: As the nature of customer requirements changes, with mobile device applications exponentially increasing the number of daily transactions, overall system resource capacity is often stressed during peak hours. Optimized transaction response time is a fundamental requirement, being the most transparent service delivered to each & every end customer.

An easy but very expensive solution to remediate batch processing & transaction response issues is to provide more resources via a CPU server upgrade activity. A more sensible approach is to optimize the currently deployed resources, safeguarding that frequently accessed data is mostly if not always high speed cache resident, reducing the I/O processing overhead, reducing CPU usage, which in turn will optimize batch processing & transaction response times, while controlling associated IBM Z Mainframe server hardware & software costs.

The ubiquitous Db2 data access method is Structured Query Language (SQL) based, where IBM has their own implementation, SQL for Db2 for z/OS, which could be via the commonly used COBOL (EXEC SQL) programming language or a Db2 Connect API (E.g. ADO.NET, CLI, Embedded SQL, JDBC, ODBC, OLE DB, Perl, PHP, pureQuery, Python, Ruby, SQLJ). For Db2 Connect, there are 2 types of embedded SQL processing, static & dynamic SQL. Static SQL minimizes execution time because its processing is performed in advance. Dynamic SQL is processed when the SQL statement is submitted to the IBM Z Db2 server, although some relief is provided by the Dynamic Statement Cache. Dynamic SQL is more flexible, but potentially slower. The decision to use static or dynamic SQL is typically made by the application programmer. There is a danger that the Dynamic Statement Cache might be considered as a panacea for SQL CPU performance optimization, but as per any other performance activity, reviewing any historical changes is a good idea. The realm of possibility exists for the Db2 Subject Matter Expert (SME) to be pleasantly surprised that more often than not, there are still significant SQL CPU optimization opportunities…

From a generic Db2 viewpoint, with static SQL, you cannot change the form of SQL statements unless you make changes to the program. However, you can increase the flexibility of static statements by using host variables. Obviously, application program changes are not always desirable.

Dynamic SQL provides flexibility; if an application program needs to process many data types & structures, such that the program cannot define a model for each one, dynamic SQL overcomes this challenge. Dynamic SQL processing is facilitated by Query Management Facility (QMF), SQL Processing Using File Input (SPUFI) or the UNIX System Services (USS) Command Line Processor (CLP). Not all SQL statements are supported when using dynamic SQL. A Db2 application program that processes dynamic SQL accepts as input, or generates, an SQL statement in the form of a character string. Programming is simplified when you can structure programs not to use SELECT statements, or to use only those that return a known number of values of known types.

For Db2 data access, SQL statement processing requires an access path. The major SQL statement performance factors to consider are the amount of time that Db2 uses to determine the access path at run time & whether the access path is efficient. Db2 determines the SQL statement access path either when you bind the plan or package that contains the SQL statement or when the SQL statement executes. The repeating cost of preparing a dynamic SQL statement can make the performance worse when compared with static SQL statements. However, if you execute the same SQL statement often, using the dynamic SQL statement cache decreases the number of times dynamic statements must be prepared.
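As a simple illustration of this prepare cost & statement reuse trade-off, the following Python sketch uses the ibm_db Db2 Connect driver; the connection string, table & column names are hypothetical.  Dynamic SQL built from literal values generates a different statement text for every value, whereas parameter markers keep the statement text constant, allowing repeated executions to benefit from the dynamic SQL statement cache:

```python
# Sketch of dynamic SQL via the Python Db2 Connect driver (ibm_db);
# the connection string, table & column names are hypothetical.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=DSNDB;HOSTNAME=zhost;PORT=446;PROTOCOL=TCPIP;UID=user;PWD=secret", "", "")

# Literal-laden dynamic SQL: every distinct account number produces a distinct
# statement text, so each execution is prepared at run time.
literal_stmt = ibm_db.exec_immediate(
    conn, "SELECT BALANCE FROM ACCOUNTS WHERE ACCT_NO = '0042'")
print(ibm_db.fetch_assoc(literal_stmt))

# Parameter markers keep the statement text constant, so repeated executions
# reuse the prepared statement & can be satisfied from the statement cache.
prepared = ibm_db.prepare(conn, "SELECT BALANCE FROM ACCOUNTS WHERE ACCT_NO = ?")
for acct_no in ("0042", "9901", "0042"):
    ibm_db.execute(prepared, (acct_no,))
    row = ibm_db.fetch_assoc(prepared)
    print(acct_no, row["BALANCE"] if row else None)
```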

Typically, organizations have embraced static SQL over dynamic because static is more predictable, showing little or no change, while dynamic implies ever-changing & unpredictable behaviour. Db2 performance optimization functions have been incorporated into base Db2 (E.g. Buffer Pools) & software products (E.g. IBM Db2 AI for z/OS, IBM Db2 for z/OS Optimizer, IBM Db2 Analytics Accelerator, IBM Z Table Accelerator, IZTA), with varying levels of benefit & cost. Ultimately IBM Z Mainframe customers need simple cost-efficient off-the-shelf solutions of a plug & play variety & without doubt, optimizing static SQL data processing is a pragmatic option for reducing Db2 subsystem CPU usage.

In Db2 Version 10, support for 64-bit run time was introduced, providing Virtual Storage Constraint Relief (VSCR), improving the vertical scalability of Db2 subsystems. With Db2 Version 11, the key z/Architecture benefit of 64-bit virtual addressing support was finally introduced, increasing capacity of central memory & virtual address spaces from 2 GB to 16 EB (Exabytes), eliminating most storage constraints. It therefore follows that any Db2 CPU performance optimization solution should also exploit the z/Architecture 64-bit feature, to support the ever-increasing data storage requirements of today’s digital workloads.

As we have identified, Db2 can consume significant amounts of z/OS CPU accessing & retrieving the same static frequently used data elements repetitively. Upon analysis, these static frequently used data elements are typically identified as originating from a small percentage of Db2 tablespaces. Typically, at first glance these simple SQL programs are considered as low risk, but are repeatedly processed, often in peak processing times, consuming excessive CPU & increasing processing cost accordingly, typically z/OS Monthly Licence Charges (MLC) related. Db2 optimization tools for access path or buffer pool management provide some benefit, but this is not always significant & may require application changes. Patently there is a clear & present requirement for a simple plug & play solution, transparent to Db2 processing, maintaining an optimized high-performance in-memory cache of frequently used Db2 data, safeguarding data integrity in various environments, including SYSPLEX, Data Sharing, et al…

QuickSelect is a plug-in solution dynamically activated in a batch or OLTP environment (I.E. CICS, IMS/TM) intercepting repetitive SQL statements from Db2 application programs, storing the most active result set, not necessarily the entire tablespace, in a high-performance in-memory cache, returning to applications the same result set as per Db2, but much faster & using less CPU accordingly. QuickSelect is completely transparent to z/OS applications, eliminating any requirement to change/recompile/relink application source or rebind packages. QuickSelect processing can be switched on or off using a single keystroke, either defaulting to standard Db2 SQL processing or to benefit from the QuickSelect high-speed cache for optimized CPU resource usage.

The 64-bit QuickSelect server is implemented as a started task, intelligently caching data in self-managed memory above the bar, supporting up to 16 EB of memory, eliminating concerns of using any other commonly used storage areas (E.g. ECSA).  The intelligent caching mechanism safeguards that only highly active data is retained, optimizing the associated cache memory size required.

QuickSelect caches frequently requested Db2 SQL result sets, returning these results to the application from QuickSelect cache, when a repetition of the same SQL is encountered. For data integrity purposes, QuickSelect immediately invalidates result sets upon detection of changes to underlying tables, implicitly validating each cache resident SQL result set. Changes to Db2 data by application programs are captured by a standard Db2 VALIDPROC process, attached to the typically small subset of frequently accessed tables of interest to QuickSelect. Db2 automatically activates the VALIDPROC routine whenever the table contents are changed by INSERT, DELETE, UPDATE or TRUNCATE statements, invalidating cached data from the updated tables automatically. For standard Db2 utilities such as LOAD/REPLACE, REORG/DISCARD & RECOVER, table-level changes are identified by a QuickSelect utility-trap, invalidating cached data from the updated tables automatically. QuickSelect also supports SYSPLEX & Data Sharing environments, supporting update activity via the same XCF functions & processes used by Db2.
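As a conceptual sketch of the general technique only, not a representation of the QuickSelect implementation, the following Python example caches SQL result sets keyed by statement text & parameters, invalidating them at table level whenever the underlying table changes; all names & data are hypothetical:

```python
# Conceptual sketch of SQL result-set caching with table-level invalidation;
# illustrative only, NOT the QuickSelect implementation.
class ResultSetCache:
    def __init__(self):
        self.cache = {}        # (sql_text, params) -> cached result set
        self.by_table = {}     # table name -> set of cache keys derived from it

    def get(self, sql_text, params):
        return self.cache.get((sql_text, params))

    def put(self, sql_text, params, tables, result):
        key = (sql_text, params)
        self.cache[key] = result
        for table in tables:
            self.by_table.setdefault(table, set()).add(key)

    def invalidate_table(self, table):
        """Called when INSERT/UPDATE/DELETE/TRUNCATE (or a utility) touches the table."""
        for key in self.by_table.pop(table, set()):
            self.cache.pop(key, None)

# Usage: a repeated SELECT is served from cache until its underlying table changes
cache = ResultSetCache()
cache.put("SELECT RATE FROM FXRATES WHERE CCY = ?", ("GBP",), ["FXRATES"], [("GBP", 1.17)])
print(cache.get("SELECT RATE FROM FXRATES WHERE CCY = ?", ("GBP",)))   # cache hit
cache.invalidate_table("FXRATES")                                      # E.g. after an UPDATE
print(cache.get("SELECT RATE FROM FXRATES WHERE CCY = ?", ("GBP",)))   # None -> re-drive Db2
```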

QuickSelect delivers the following benefits:

  • CPU Savings: Meaningful reduction (E.g. 20%) in the Db2 SQL direct processing; 10%+ peak time CPU reduction is not uncommon.
  • Faster Processing: Optimized CPU usage delivers shorter batch processing & OLTP transaction response times, for related SLA & KPI objective compliance.
  • Transparent Implementation: No application changes required, source code, load module or Db2 package.
  • Survey Mode: Unobtrusive & minimal Db2 workload overhead data sampling to identify potential CPU savings from repetitive SQL & tables of interest, before implementation.
  • Staggered Deployment: Granular criteria (E.g. Job, Program, Table, Transaction, Etc.) implementation ability.
  • Reporting & Analytics: Extensive information detailing cache usage for Db2 programs & tables.

Since 1993 Db2 has evolved dramatically, in line with the evolution of the IBM Z Mainframe server. When considering today’s requirement for a digital world, processing ever increasing amounts of mission critical data, a base requirement to optimize CPU processing for Db2 SQL data access is mandatory. In a hybrid support environment where today’s IBM Z Mainframe support resource requires an even blend of technical & business skills, plug & play, easy-to-use & results driven solutions are required to optimize CPU usage, transparent to the subsystem & related application programs. QuickSelect is such a solution, fully exploiting 64-bit z/Architecture for ultimate scalability, identifying & resolving a common CPU consuming data access problem, for a mission critical resource, namely the Db2 subsystem, maintaining mission-critical System of Record data.

z/OS CPU optimization is a mandatory requirement for every organization, to reduce associated software & hardware costs & in theory, as a mandatory prerequisite for deploying the Tailored Fit Pricing for IBM Z pricing mechanism. Tailored Fit Pricing uses the previous 12 Months SCRT submissions to establish a baseline for MSU charging over a contracted period, typically 3 years. If there are any unused MSU resources, these are carried forward to the next year, but if those MSU resources remain unused at the end of the contracted period, they are lost, meaning the organization has paid too much. If the MSU resource exceeds the agreed Tailored Fit Pricing, excess MSU resources are charged at a discounted rate. Clearly achieving an optimal MSU baseline before embarking on a Tailored Fit Pricing contract is arguably mandatory & it therefore follows that optimizing CPU forever more, safeguards optimal z/OS MLC charging during the Tailored Fit Pricing contract. QuickSelect for Db2 is a seamless CPU optimization product that will perpetually deliver benefit, assisting organizations to minimize their z/OS MLC costs, whether they continue to proactively manage the R4HA, submitting monthly SCRT reports or they embark on a Tailored Fit Pricing contract…
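As a purely illustrative example of the carry-forward behaviour described above, the following Python sketch uses hypothetical MSU figures; actual Tailored Fit Pricing contract terms should be confirmed with IBM:

```python
# Illustrative arithmetic only: hypothetical MSU figures showing how unused
# baseline MSUs carry forward within a Tailored Fit Pricing style contract,
# as described above, & are lost if still unused at contract end.
def tailored_fit_usage(annual_baseline_msu, actual_msu_per_year):
    carry_forward = 0
    for year, actual in enumerate(actual_msu_per_year, start=1):
        entitlement = annual_baseline_msu + carry_forward
        carry_forward = max(entitlement - actual, 0)    # unused MSUs roll into next year
        excess = max(actual - entitlement, 0)           # chargeable at the discounted rate
        print(f"Year {year}: used {actual}, excess {excess}, carried forward {carry_forward}")
    print(f"Unused at contract end (paid for, but lost): {carry_forward}")

tailored_fit_usage(annual_baseline_msu=12000, actual_msu_per_year=[11000, 12500, 11800])
```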

Optimize Your System z ROI with z Operational Insights (zOI)

Hopefully all System z users are aware of the Monthly Licence Charge (MLC) pricing mechanisms, where a recurring charge applies each month.  This charge includes product usage rights and IBM product support.  If only it was that simple!  We then encounter the “Alphabet Soup” of acronyms, related to the various and arguably too numerous MLC pricing mechanism options.  Some might say that 13 is an unlucky number and in this case, a System z pricing specialist would need to know and understand each of the 13 pricing mechanisms in depth, safeguarding the lowest software pricing for their organization!  Perhaps we could apply the unlucky word to such a resource.  In alphabetical order, the 13 MLC pricing options are AWLC, AEWLC, CMLC, EWLC, MWLC, MzNALC, PSLC, SALC, S/390 Usage Pricing, ULC, WLC, zELC and zNALC!  These mechanisms are commercial considerations, but what about the technical perspective?

Of course, System z Mainframe CPU resource usage is measured in MSU metrics, where the usage of Sub-Capacity allows System z Mainframe users to submit SCRT reports, incorporating Monthly License Charges (MLC) and IPLA software maintenance, namely Subscription and Support (S&S).  We then must consider the Rolling 4-Hour Average (R4HA) and how best to optimize MSU accordingly.  At this juncture, we then need to consider how we measure the R4HA itself, in terms of performance tuning, so we can minimize the R4HA MSU usage, to optimize cost, without impacting Production if not overall system performance.
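As a minimal sketch of the R4HA calculation, assuming hypothetical 5-minute interval MSU samples (48 samples span 4 hours), the following Python example shows how the rolling average smooths a short CPU spike and why the monthly peak R4HA is the value worth optimizing:

```python
# Minimal sketch of a Rolling 4-Hour Average (R4HA) calculation from
# hypothetical 5-minute MSU samples; partial windows at start-up are
# simply averaged over the samples available, a deliberate simplification.
from collections import deque

def peak_r4ha(msu_samples, samples_per_4_hours=48):
    window = deque(maxlen=samples_per_4_hours)
    peak = 0.0
    for msu in msu_samples:
        window.append(msu)
        rolling_average = sum(window) / len(window)
        peak = max(peak, rolling_average)
    return peak

# Usage: a flat 400 MSU load with a short 700 MSU spike barely moves the peak R4HA
samples = [400] * 200 + [700] * 6 + [400] * 82     # 288 x 5-minute samples = 1 day
print(round(peak_r4ha(samples), 1))                # ~437.5, not 700
```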

Finally, we then have to consider that WLC has a ~17-year longevity, having been announced in October 2000 and in that time IBM have also introduced hardware features to assist in MSU optimization.  These hardware features include zIIP, zAAP, IFL, while there are other influencing factors, such as HyperDispatch, WLM, Relative Nest Intensity (RNI), naming but a few!  The Alphabet Soup continues…

In summary, since the introduction of WLC in Q4 2000, the challenge for the System z user is significant.  They must collect the requisite instrumentation data, perform predictive modelling and fully comprehend the impact of the current 13 MLC pricing mechanisms and their interaction with the ever-evolving System z CPU chip!  In the absence of such a simple to use reporting capability from IBM, there are a plethora of 3rd party ISV solutions, which generally are overly complex and require numerous products, more often than not, from several ISVs.  These software solutions process the instrumentation data, generating the requisite metrics that allow an informed decision-making process.

Bottom Line: This is way too complex; are there any Green Shoots of an alternative option?  Are there any easy-to-use data analytics based options for reducing MSU usage and optimizing CPU resources, which can then be incorporated into any WLC/MLC pricing considerations?

In February 2016 IBM launched their z Operational Insights (zOI) offering, as a new open beta cloud-based service that analyses your System z monitoring data.  The zOI objective is to simplify the identification of System z inefficiencies, while identifying savings options with associated implementation recommendations. At this juncture, zOI still has a free edition available, but as of September 2016, it also has a full paid version with additional functionality.

Currently zOI is limited to the CICS subsystem, incorporating the following functions:

  • CICS Abend Analysis Report: Highlights the top 10 types of abend and the top 10 most frequently abending transactions for your CICS workload. The resulting output classifies which CICS transactions might abend and as a consequence, waste processor time.  Of course, the System z Mainframe user will have to fix the underlying reason for the CICS abend!
  • CICS Java Offload Report: Highlights any transaction processing workload eligible for IBM z Systems Integrated Information Processor (zIIP) offload. The resulting output delivers three categories for consideration.  #1; % of existing workload that is eligible for offload, but ran on a General Purpose CP.  #2; % of workload being offloaded to zIIP.  #3; % of workload that cannot be transferred to a zIIP.
  • CICS Threadsafe Report: Highlights threadsafe eligible CICS transactions, calculating the switch count from the CICS Quasi Reentrant Task Control Block (QR TCB) per transaction and associated CPU cost. The resulting output identifies potential CPU savings by making programs threadsafe, with the associated CICS subsystem changes.
  • CICS Region CPU Constraint: Highlights CPU constrained regions. CPU constrained CICS regions have reduced performance, lower throughput and slower transaction response, impacting business performance (I.E. SLA, KPI).  From a high-level viewpoint, the resulting output classifies CICS Region performance to identify whether they’re LPAR or QR constrained, while suggesting possible remedial actions.

Clearly the potential of zOI is encouraging, being an easy-to-use solution that analyses instrumentation data, classifies the best options from a quick win basis, while providing recommendations for implementation.  Having been a recent user of this new technology myself, I would encourage each and every System z Mainframe user to try this no risk IBM z Operational Insights (zOI) software offering.

The evolution for all System z performance analysis software solutions is to build on the comprehensive analysis solutions that have evolved in the last ~20+ years, while incorporating intelligent analytics, to classify data in terms of “Biggest Impact”, identifying “Potential Savings”, evolving MIPS measurement, to BIPS (Biggest Impact Potential Savings) improvements!

IBM have also introduced a framework of IT Operations Analytics Solutions for z Systems.  This suite of interconnected products includes zOI, IBM Operations Analytics for z Systems, IBM Common Data Provider for z/OS and IBM Advanced Workload Analysis Reporter (IBM zAware).  Of course, if we lived in a perfect world, without a ~20 year MLC and WLC longevity, this might be the foundation for all of our System z CPU resource usage analysis.  Clearly this is not the case for the majority of System z Mainframe customers, but zOI does offer something different, with zero impact, both from a system impact and existing software interoperability viewpoint.

Bottom Line: Optimize Your System z ROI via zOI, Evolving From MIPS Measurement to BIPS Improvements!

Mainframe Server Planning: Vendor Interaction

In the last few weeks I have encountered a couple of scenarios regarding Mainframe Server upgrades that have surprised me somewhat.  The first was at the annual UK GSE conference during November 2013, where one of the largest UK Mainframe customers stated “we had problems regarding the capacity sizing of the IBM Mainframe server installed and our vendor was not very helpful in resolving this challenge with us”.  The second was a European customer with 2 aging z9 BC servers deployed, who had asked their IBM Mainframe server vendor to provide an upgrade quotation.  The server vendor duly replied, providing a like-for-like upgrade quotation for 2 new zBC12 servers, which at first glance seemed to be a valid configuration.

The one thing in common for these 2 vastly different Mainframe customers, the first very large, the second quite small, is that inadvertently they didn’t necessarily engage their respective vendors with the best set of questions or indeed terms of reference; while the vendors might say “ask me no questions and I’ll tell you no lies”…

For the 2nd scenario, I was asked to quickly review the configuration provided.  My first observation was to consolidate both workloads on 1 server.  The customer confirmed, there was no business reason to have 2 servers, it was historic, and there wasn’t even a SYSPLEX between the 2 z9 BC servers.  The historic reason for the 2 z9 BC servers was the number of General Purpose (GP) engines supported.  My second observation was that software licensing could be simplified and optimized with aggregated MSU and use of the AEWLC pricing model.  So within ~1 hour, the customer had a significant potential to dramatically reduce costs.

We then suggested an analysis of their configuration with 2 software products, PerfTechPro for z/OS and zDynaCap.  They already had the SMF data, so using the simulation abilities of these products, the customer quickly confirmed they could consolidate their workloads onto 1 zBC12, deploy zIIP processors to offload ~15% CPU usage from GP, and control MSU allocation with zDynaCap, saving another ~10% of CPU.  For this customer, a small investment in software products reduced their server upgrade costs by ~€400,000 in year 1, with similar software savings, each and every year forever more.  Although they didn’t have the skills in-house from a Mainframe Capacity Planning and software licensing viewpoint, this customer did eventually ask the right questions, and the rest as they say is history!
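As a purely illustrative piece of arithmetic, using the approximate percentages quoted above and assuming the two savings apply multiplicatively to General Purpose CP usage, the combined effect is roughly a 23% GP reduction:

```python
# Illustrative arithmetic only, using the approximate percentages quoted above;
# assumes the two reductions apply multiplicatively to General Purpose CP usage.
baseline_gp_msu = 100.0                               # hypothetical GP usage, normalized to 100
after_ziip_offload = baseline_gp_msu * (1 - 0.15)     # ~15% offloaded to zIIP
after_zdynacap = after_ziip_offload * (1 - 0.10)      # ~10% further saving via MSU capping
print(f"GP usage after both steps: {after_zdynacap:.1f} (~{100 - after_zdynacap:.1f}% reduction)")
```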

No man or indeed Mainframe customer is an island, so don’t be afraid to ask questions of your vendors or business partners!

From a cost viewpoint, both long-term (TCO) and day 1 (TCA), the requirement to deploy the optimum Mainframe server configuration from a capacity viewpoint cannot be underestimated, both in terms of hardware costs, but more importantly, associated software costs.  It therefore follows that Mainframe Capacity Planning and Mainframe Software Licensing knowledge is imperative, but I’m not so sure there are that many Mainframe customers that have clearly defined job roles for such disciplines.

To generalize, always a dangerous thing, typically the larger Mainframe customer does have skilled and seasoned personnel for the Capacity Planning discipline, while the smaller Mainframe user might rely on a generic Systems Programmer or maybe even rely on their vendor to size their Mainframe servers.  From a Mainframe software licensing viewpoint, there seems to be no general rule-of-thumb, as sometimes the smaller customer has significant knowledge and experience, whereas the larger Mainframe customer might not.  Bottom Line: If the Mainframe customer doesn’t allocate the optimum capacity and associated software licensing metrics for their installation, problems will arise, probably for several years or more!

Are there any simple solutions or processes that can assist Mainframe customers?

The first and most simple observation is to engage your vendor and safeguard that they generate the final Mainframe server configuration that is used for Purchase Order activities.  For sure, the customer will have their capacity plan and perhaps a “draft” server configuration, but even in these instances, the vendor should QA this data, refining the bill of materials (E.g. Hardware) accordingly.  Therefore an iterative process occurs between customer and vendor, but the vendor is the one that confirms the agreed configuration is fit for purpose.  In the unlikely event there are challenges in the future, the customer can work with their vendor to find a solution, as opposed to the example stated above where the vendor left their customer somewhat isolated.

The second observation is to leverage the tools and processes that are available, both generally available and internal to vendor pre-sales personnel.  Seemingly everybody likes something for nothing and so the ability to deploy “free” tools will appeal to most.

For Mainframe Capacity Planning, in addition to the standard in-house processes, whether bespoke (E.g. SAS, MXG, MICS based) or a packaged product, there are other additional tools available, primarily from IBM:

zPCR (Processor Capacity Reference) is a generally available Windows PC based tool, designed to provide capacity planning insight for IBM System z processors running various z/OS, z/VM, z/VSE, Linux, zAware, and CFCC workload environments on partitioned hardware.  Capacity results are based on IBM’s most recently published LSPR data for z/OS.  Capacity is presented relative to a user-selected Reference-CPU, which may be assigned any capacity scaling-factor and metric.

zCP3000 (Performance Analysis and Capacity Planning) is an IBM internal tool, Windows PC based, designed for performance analysis and capacity planning simulations for IBM System z processors, running various SCP and workload environments.  It can also be used to graphically analyse logically partitioned processors and DASD configurations.  Input normally comes from the customer’s system logs via a separate tool (I.E. z/OS SMF via CP2KEXTR, VM Monitor via CP3KVMXT, VSE CPUMON via VSE2EDF).

zPSG (Processor Selection Guide) is an IBM internal tool, Windows PC based, designed to provide sizing approximations for IBM System z processors intended to host a new application, implemented using popular, commercially available software products (E.g. WebSphere, DB2, ODM, Linux Apache Server).

zSoftCap (Software Migration Capacity Planning Aid) is a generally available Windows PC based tool, designed to assess the effect on IBM System z processor capacity, when planning to upgrade to a more current operating system version and/or major subsystems versions (E.g. Batch, CICS, DB2, IMS, Web and System).  zSoftCap assumes that the hardware configuration remains constant while the software version or release changes.  The capacity implication of an upgrade for the software components can be assessed independently or in any combination.

zBNA (System z Batch Network Analysis) is a generally available Windows PC based tool, designed to understand the batch window, for example:

  • Perform “what if” analysis and estimate the CPU upgrade effect on batch window
  • Identify job time sequences based on a graphical view
  • Filter jobs by attributes like CPU time / intensity, job class, service class, et al
  • Review the resource consumption of all the batch jobs
  • Drill down to the individual steps to see the resource usage
  • Identify candidate jobs for running on different processors
  • Identify jobs with speed of engine concerns (top tasks %)

BWATOOL (Batch Workload Analysis Tool) is an IBM internal tool, Windows PC based, designed to analyse SMF type 30 and 70 data, producing a report showing how long batch jobs run on the currently installed processor.  Both CPU time and elapsed time are reported. Similar results can then be projected for any IBM System z processor model. Basic questions that can be answered by BWATOOL include:

  • What jobs are good candidates for running on any given processor?
  • How much would jobs benefit from running on a faster processor?
  • For jobs within a critical path (batch window), what overall change in elapsed time might occur with a new processor?

zMCAT (Migration Capacity Analysis Tool) is an IBM internal tool, Windows PC based, designed to compare the performance of production workloads before and after migration of the system image to a new processor, even if the number of engines on the processor has changed.  Workloads for which performance is to be analysed must be carefully chosen because the power comparison may vary considerably due to differing use of system services, I/O rate, instruction mix, storage reference patterns, et al.  This is why customer experiences are unique from an internal throughput ratio (ITRR) based on LSPR benchmark data.

zTPM (Tivoli Performance Modeler) is an IBM internal tool, Windows PC based designed to let you build a model of a z/OS based IBM System z processor, and then run various “what if scenarios”.  zTPM uses simulation techniques to let you model the impact of changes on individual workload performance.  zTPM uses RMF or CMF reports as input.  Based on these reports, zTPM can create summary charts showing LPAR as well as workload utilization.  An automated Build function lets you build a model that represents the system for any reporting interval.  Once the model is built, you can make changes to see the impact on workload performance.  zTPM is also available as an IBM software product offering.

Therefore there are numerous tools available from IBM to assist their customers in determining optimum Mainframe server capacity requirements.  Some of these tools are generally available without engaging the IBM account team, but others are internal to IBM, and for that reason alone, Mainframe customers must engage their IBM Mainframe account team to participate in their capacity planning activities.  Additionally, as the only supplier of Mainframe servers, IBM have a wealth of knowledge and indeed a responsibility and generally a willingness to assist their customers deploy the right Mainframe server configuration from day 1.

As a customer, don’t be afraid to engage external 3rd parties to perform a sanity check of your thinking and activities, clearly including IBM, as they will be fulfilling your IBM Mainframe server order.  However, consider engaging other capacity/performance and software licensing specialists, as their experience incorporates many customers, as opposed to an insular view.  Moreover, such 3rd parties probably utilize their own software tools or products to assist in this most important of disciplines.

In conclusion, as always, the worst question is the one not asked, and for this most fundamental of processes, not collaborating with your vendor and the wider community, might leave you as an individual exposed and isolated, and your company exposed to the consequences of an undersized or oversized Mainframe server configuration…

Application Performance Tuning – Why Bother?

With older generations of Mainframe Operating Systems, certainly MVS/XA and perhaps MVS/ESA, application performance tuning was a necessity, not an afterthought.  Quite simply, the cost of Mainframe resources, namely CPU, memory and disk, dictated that your mission critical business application might not perform to business requirements, unless you tuned your programming code.  Programmers, both of the system and application variety, understood the bits and bytes of available programming languages (E.g. ASM, COBOL, PL/I) and the Operating System (I.E. MVS), collaborating either via proactive process, or reactive problem solving.  With the continuing reduction of IT hardware component costs, the improvement in Operating Systems (E.g. 64-bit architecture) and newer programming languages (E.g. C, C++), it seems that application performance tuning is somewhat of an afterthought, but at what cost?

We all know that the cost of a Mainframe MIPS is significant, and although it might have reduced dramatically from a hardware viewpoint, from a software viewpoint, the cost remains largely static at ~£1,500-£3,500 per MIPS, per year, depending on your configuration.  So if your applications are burning several hundred if not several thousand extra MIPS unnecessarily, that’s very expensive indeed!  Additionally and just as importantly, a badly tuned system will manifest itself in slower transaction response times and longer batch jobs, if applicable, which could impact service availability.  So why is there a seeming reluctance to tune business applications, Mainframe resident or not?
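As a simple worked example using the cost figures quoted above, where the quantity of avoidable MIPS is hypothetical:

```python
# Simple arithmetic using the article's own cost range: even a few hundred MIPS
# of avoidable usage, at ~£1,500-£3,500 per MIPS per year, is a significant sum.
wasted_mips = 500                                 # hypothetical avoidable MIPS
cost_per_mips_per_year = (1500, 3500)             # lower & upper bounds quoted above
low, high = (wasted_mips * cost for cost in cost_per_mips_per_year)
print(f"Annual cost of {wasted_mips} avoidable MIPS: £{low:,} - £{high:,}")
```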

If ever there was a functional IT area where the skills gap has never been wider, then application performance tuning is said skill, when comparing the salty old sea dog Mainframe dinosaur, with the newer Mainframe technician!

From an application development process viewpoint, where does the application performance tuning task live; before or after implementation?  The cynical amongst us will know; if it’s after implementation, there’s a strong likelihood said activity will never be performed!  If it’s before implementation, how many projects incorporate a meaningful stress test, or measure transaction response times versus an SLA or KPI metric?  Additionally, if the project is high-priority and/or running behind schedule, then performance testing is an activity that is easily removed…

Back in the good old days, the late 1980’s to early 1990’s, some application performance tuning tools did start to emerge, most notably Strobe.  Strobe was useful to even the most accomplished of system and application programmer personnel, and invaluable to less experienced personnel, and so arguably Strobe became the de facto software tool for tuning Mainframe applications.  However, later releases of MVS (E.g. OS/390 and z/OS) and the non-event that was the Year 2000 (Y2K) seemed to remove the focus on and importance of application tuning.

Arguably most important of all, the software MIPS cost item, where Strobe and its competitors (E.g. ASG/BMC TriTune, CA Application Tuner, IBM APA, Macro4 ExpeTune, et al) utilize even more CPU to capture diagnostic trace information, contributed to the demise of application performance tuning.  However, those companies that have undertaken such application tuning activities in the last decade or so are sitting pretty, having reduced the CPU (MIPS) resource consumed, lowering TCO and optimizing performance accordingly.  In the 21st Century, these software solutions are classified as Application Performance Management (APM) solutions.

Is there a better and easier way to stimulate an interest in the application performance tuning discipline?  If the desire exists to tune an application, lowering CPU MIPS usage, optimizing service performance, then the traditional tools and methods mentioned previously exist, but perhaps a new (or not so new) CPU performance data source exists…

With the introduction of the z10 server, a new function CPU MF (CPU Measurement Facility) was incorporated.  Let’s not forget, z10 is now an n-2 technology, having been superseded by the z196/z114 and the latest zBC12/zEC12 generation of servers.  So each and every committed Mainframe customer should be positioned to benefit from the CPU MF function.

CPU MF provides optional hardware assisted collections of information about logical CPU activity executed over a specified interval in selected Logical Partitions (LPARs).  The CPU MF counters function is intended to be run on a constant basis to collect long-term performance data (I.E. SMF Record 113), in a similar manner to how you collect other performance data.  I have previously briefly discussed how CPU MF SMF data can be used to increase Mainframe Server Capacity Planning efficiencies. 

The CPU MF sampling function is a short duration, precise function that identifies where CPU resources are being used, to help you improve application efficiency.  Put very simply, CPU MF sampling data has minimal CPU overhead (E.g. ~0.1-1.0%) when collecting data (I.E. z/OS Hardware Instrumentation Services – HIS), but this data can then be used to identify CPU “hot spots”, which can then be further analysed to identify the “areas of code” generating the high CPU usage.  However, it was forever thus, whether an APM tool, or CPU MF sampling data, high CPU usage can be identified, but the application programmer must undertake the task of optimizing the application code!

IBM have done a great job in providing CPU MF counters data, optimizing the Capacity Planning process with the SMF 113 record, and the realm of possibility exists with the sample data, but a software solution is required to analyse and summarize this data.

Currently there are very few, perhaps only one, software solutions that analyse CPU MF sample data, namely zHISR from Phoenix Software International.  zHISR interfaces directly with z/OS Hardware Instrumentation Services to collect data for hotspot analysis of customer, vendor, or operating system program execution.  zHISR features include:

  • Support for up to 128 simultaneous data collection events.  zHISR collections do not interfere with any HIS functions, including sample or counter collection.
  • System console commands for many zHISR functions.
  • An Application Programming Interface to COBOL and Assembler for starting and stopping data collections. Collection lengths for API generated collections have a time range of one second or more.
  • Ability to schedule a collection with JCL so that collection starts when a given job or step begins.
  • Ability to store data collections as z/OS data sets or UNIX files.
  • Support for collections against CICS/TS transactions.
  • Analysis based on a time range within the collected data for a narrower spotlight on problem code.

An intuitive ISPF dialog allows the user to easily produce a CPU hot spots analysis, which can then be used for identifying the offending code sections.  The user can then drill down and highlight the high CPU CSECT and program offset (instruction), comparing with their Associated Data (ADATA), and thus the source programming instruction.  Therefore the skill required to perform analysis is minimal, as is the CPU overhead in collecting analysis data, and so eradicating the potential barriers when embarking on an application tuning initiative.  Furthermore, the actual cost of deploying the zHISR software is not onerous and so perhaps each and every committed Mainframe user can easily include application performance tuning into their application development lifecycle processes. 

zHISR has a UNIX file system interface that lets you navigate the system and browse or delete files.  With zHISR, users can start and stop hardware event data collections and view the status of the current or prior HIS run.  zHISR also includes a memory display/alter utility that lets you view main storage in the CPU you are logged on to.  If zIIPs are present and zHISR is defined as an authorized subsystem, nearly all of the CPU processing used by zHISR is redirected to a zIIP.

There are also instances, however few and far between, where Mainframe customers have written their own proprietary in-house OLTP (On-Line Transaction Processor) and Relational Database Management Subsystem (RDBMS), where traditional APM software tools can’t provide a solution, only interfacing with underlying subsystems (E.g. Adabas, CICS, DB2, IDMS, WebSphere, et al).  In these instances, CPU MF and zHISR offer a solution to help such customers, who probably face challenges when they upgrade their Mainframe servers, safeguarding software and application code is compatible with the new hardware, and ideally, exploits the latest functionality.

In conclusion, application performance tuning has to be a very important if not mandatory activity for the Mainframe Data Centre.  Whether via CPU MF or traditional APM software solutions, the cost reduction and performance improvement benefits of tuning should be compelling reasons to proactively engage in application tuning activities.  From a skills viewpoint, maybe the KISS (Keep It Simple Stupid) principle can apply, where CPU MF collects the data very simply and efficiently, complemented by zHISR, analysing the data in an intuitive and cost optimized manner.

So turning the subject matter on its head, Application Performance Tuning – Why Bother?  Why not!

Further information can be found from my z/OS Application Performance Tuning presentation, delivered at UK GSE in November 2012.