Simplifying Db2 for z/OS CPU Optimization: Eradicating Inefficient SQL Processing

Without doubt the IBM Z Mainframe server is recognised as the de facto choice for storing mission critical System of Record (SOR) data in database repositories for 92 of the top 100 global banks, 23 of the top 25 global airlines & ~70% of all Fortune 500 companies. ~80% of mission critical data is hosted by IBM Z Mainframe servers, processing 30+ Billion transactions per day, including ~90% of all credit card transactions. This data is accessed by ~1.3 Million CICS transactions per second, compared with a Google (mostly search) processing rate of ~70,000 transactions per second. Interestingly enough, despite processing so many mission critical transactions, the IBM Z Mainframe server platform only accounts for ~6.2% of global IT spend. One must draw one’s own conclusions as to why some IT professionals perceive the IBM Z Mainframe server as being a legacy platform, not worthy of consideration as a strategic IT server platform…

The digital transformation has delivered an exponential growth of data, typically classified as Cloud, Mobile & Social based. This current & ever-growing data source requires intelligent analytics to deliver meaningful business decisions & agile application software delivery to gain competitive edge. This digital approach can sometimes deliver a myriad of micro business application changes, personalised for each & every customer, often delivering “pop-up” applications…

IBM Z Mainframe software costs are often criticized as being a major barrier to maintaining or indeed commissioning the platform. IBM have tried to minimize these costs with numerous sub-capacity pricing options over the last 30 years or so, but this is perceived by many as being overly complicated; although with a modicum of knowledge, a specialized personnel resource can easily control software costs. All that said, IBM have introduced Tailored Fit Pricing for IBM Z, in an attempt to simplify software cost management. A recent blog entry reviewed the Tailored Fit Pricing for IBM Z offering & whether or not you decide this IBM Z pricing mechanism is suitable for your organization, optimizing IBM Z CPU MSU/MIPS usage is mandatory. Recognizing that the IBM Z Mainframe server is the de facto database server for System of Record data, primarily via the Db2 subsystem, clearly optimizing Db2 CPU usage, whether for OLTP transactions, typically via CICS, or the batch window, has been & always will be worthwhile…

All too often, many IT disciplines can be classified with a generic 80/20 rule & typically data can be classified accordingly, where 80% of data is accessed 20% of the time & 20% of data is accessed 80% of the time. The challenge with such a blunt Rule of Thumb (ROT) is that it’s static, but it’s a good starting point. Ideally for any large data source, there would be a dynamic sampling mechanism that would identify the most active data, loading this into the highest speed memory resource to reduce I/O access times & therefore CPU usage. Dynamic management of such a data buffer would render the 80/20 rule extraneous to requirements, as each & every business has their own data access profile. However, a simple cost benefit & therefore Proof of Value (POV) analysis could ensue.
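As a simple illustration of this dynamic sampling principle, consider the following minimal Python sketch; the capacity & threshold values are arbitrary assumptions & the sketch is purely conceptual, not a description of any product implementation:

    from collections import Counter

    class HotDataCache:
        # Conceptual sketch: dynamically sample data access frequency &
        # cache the hottest items, rendering a static 80/20 rule unnecessary
        def __init__(self, capacity=1000, sample_threshold=5):
            self.capacity = capacity                   # maximum cached items
            self.sample_threshold = sample_threshold   # accesses before caching
            self.access_counts = Counter()             # dynamic access sampling
            self.cache = {}                            # high-speed memory resource

        def get(self, key, fetch_from_db):
            self.access_counts[key] += 1
            if key in self.cache:
                return self.cache[key]                 # cache hit: no I/O, less CPU
            value = fetch_from_db(key)                 # cache miss: normal I/O path
            if self.access_counts[key] >= self.sample_threshold:
                if len(self.cache) >= self.capacity:
                    # evict the least accessed resident item
                    coldest = min(self.cache, key=lambda k: self.access_counts[k])
                    del self.cache[coldest]
                self.cache[key] = value
            return value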

From a Db2 viewpoint, pre-defined structures such as buffer pools offer some relief in storing highly referenced data in a high-speed server memory resource, but this has a finite capacity versus performance benefit, not necessarily using the fastest memory structures available nor dynamically caching the most accessed data. The business considerations of not optimizing Db2 data access are:

  • Elongated Batch Processing: With ever increasing amounts of data to process & greater demands for 24/7/365 availability & real-time access, data access optimization is fundamental for optimized service delivery, often measured by mission critical SLA & KPI metrics. Optimized batch processing is a fundamental requirement for acceptable customer facing business service delivery.
  • Slow Transaction Response Times: As the nature of customer requirements changes, with mobile device applications exponentially increasing the number of daily transactions, overall system resource capacity is often stressed during peak hours. Optimized transaction response time is a fundamental requirement, being the most transparent service delivered to each & every end customer.

An easy but very expensive solution to remediate batch processing & transaction response issues is to provide more resources via a CPU server upgrade activity. A more sensible approach is to optimize the currently deployed resources, safeguarding that frequently accessed data is mostly if not always high speed cache resident, reducing the I/O processing overhead, reducing CPU usage, which in turn will optimize batch processing & transaction response times, while controlling associated IBM Z Mainframe server hardware & software costs.

The ubiquitous Db2 data access method is Structured Query Language (SQL) based, where IBM have their own implementation, SQL for Db2 for z/OS, which could be via the commonly used COBOL (EXEC SQL) programming language or a Db2 Connect API (E.g. ADO.NET, CLI, Embedded SQL, JDBC, ODBC, OLE DB, Perl, PHP, pureQuery, Python, Ruby, SQLJ). For Db2 Connect, there are 2 types of embedded SQL processing, static & dynamic SQL. Static SQL minimizes execution time by processing in advance. Dynamic SQL is processed when the SQL statement is submitted to the IBM Z Db2 server, although some relief is provided by the Dynamic Statement Cache. Dynamic SQL is more flexible, but potentially slower. The decision to use static or dynamic SQL is typically made by the application programmer. There is a danger that the Dynamic Statement Cache might be considered as a panacea for SQL CPU performance optimization, but as per any other performance activity, reviewing any historical changes is a good idea. The realm of possibility exists for the Db2 Subject Matter Expert (SME) to be pleasantly surprised that more often than not, there are still significant SQL CPU optimization opportunities…

From a generic Db2 viewpoint, with static SQL, you cannot change the form of SQL statements unless you make changes to the program. However, you can increase the flexibility of static statements by using host variables. Obviously, application program changes are not always desirable.
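As a hedged illustration of the host variable concept, the following sketch uses the ibm_db Python driver, where the parameter marker (?) plays the role of a COBOL host variable; the connection string, table & column names are hypothetical placeholders:

    import ibm_db

    # Hypothetical connection string; all credentials & names are placeholders
    conn = ibm_db.connect("DATABASE=SAMPLE;HOSTNAME=zhost;PORT=446;"
                          "PROTOCOL=TCPIP;UID=dbuser;PWD=secret", "", "")

    # The statement text is fixed, as per static SQL, but the parameter
    # marker (?) allows one statement to serve many different input values
    stmt = ibm_db.prepare(conn, "SELECT NAME, BALANCE FROM ACCOUNTS WHERE ID = ?")
    ibm_db.execute(stmt, (12345,))
    row = ibm_db.fetch_assoc(stmt)    # returns a column name keyed dictionary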

Dynamic SQL provides flexibility; if an application program needs to process many data types & structures, dictating that the program cannot define a model for each one, dynamic SQL overcomes this challenge. Dynamic SQL processing is facilitated by Query Management Facility (QMF), SQL Processing Using File Input (SPUFI) or the UNIX System Services (USS) Command Line Processor (CLP). Not all SQL statements are supported when using dynamic SQL. A Db2 application program that processes dynamic SQL accepts as input, or generates, an SQL statement in the form of a character string. Programming is simplified when you can structure programs not to use SELECT statements, or to use only those that return a known number of values of known types.

For Db2 data access, SQL statement processing requires an access path. The major SQL statement performance factors to consider are the amount of time that Db2 uses to determine the access path at run time & whether the access path is efficient. Db2 determines the SQL statement access path either when you bind the plan or package that contains the SQL statement or when the SQL statement executes. The repeating cost of preparing a dynamic SQL statement can make performance worse when compared with static SQL statements. However, if you execute the same SQL statement often, using the dynamic SQL statement cache decreases the number of times dynamic statements must be prepared.
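To illustrate the prepare cost consideration, a minimal sketch follows, again assuming the ibm_db Python driver & hypothetical table names; preparing once & executing many times is, in effect, what the dynamic statement cache achieves transparently for repeated identical statements:

    import ibm_db

    conn = ibm_db.connect("DATABASE=SAMPLE;HOSTNAME=zhost;PORT=446;"
                          "PROTOCOL=TCPIP;UID=dbuser;PWD=secret", "", "")

    # Costly path: each exec_immediate() call prepares the statement again,
    # repaying the access path determination cost on every execution
    for account_id in (1, 2, 3):
        ibm_db.exec_immediate(
            conn, "SELECT BALANCE FROM ACCOUNTS WHERE ID = %d" % account_id)

    # Cheaper path: prepare once, execute many, amortizing the prepare cost
    stmt = ibm_db.prepare(conn, "SELECT BALANCE FROM ACCOUNTS WHERE ID = ?")
    for account_id in (1, 2, 3):
        ibm_db.execute(stmt, (account_id,))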

Typically, organizations have embraced static SQL over dynamic because static is more predictable, showing little or no change, while dynamic implies ever changing & unpredictable. Db2 performance optimization functions have been incorporated into base Db2 (E.g. Buffer Pools) & software products (E.g. IBM Db2 AI for z/OS, IBM Db2 for z/OS Optimizer, IBM Db2 Analytics Accelerator, IBM Z Table Accelerator, IZTA), with varying levels of benefit & cost. Ultimately IBM Z Mainframe customers need simple cost-efficient off-the-shelf solutions of a plug & play variety & without doubt, optimizing static SQL data processing is a pragmatic option for reducing Db2 subsystem CPU usage.

In Db2 Version 10, support for 64-bit run time was introduced, providing Virtual Storage Constraint Relief (VSCR), improving the vertical scalability of Db2 subsystems. With Db2 Version 11, the key z/Architecture benefit of 64-bit virtual addressing support was finally introduced, increasing capacity of central memory & virtual address spaces from 2 GB to 16 EB (Exabytes), eliminating most storage constraints. It therefore follows that any Db2 CPU performance optimization solution should also exploit the z/Architecture 64-bit feature, to support the ever-increasing data storage requirements of today’s digital workloads.

As we have identified, Db2 can consume significant amounts of z/OS CPU accessing & retrieving the same static frequently used data elements repetitively. Upon analysis, these static frequently used data elements are typically identified as originating from a small percentage of Db2 tablespaces. Typically, at first glance these simple SQL programs are considered as low risk, but are repeatedly processed, often in peak processing times, consuming excessive CPU & increasing processing cost accordingly, typically z/OS Monthly Licence Charges (MLC) related. Db2 optimization tools for access path or buffer pool management provide some benefit, but this is not always significant & may require application changes. Patently there is a clear & present requirement for a simple plug & play solution, transparent to Db2 processing, maintaining an optimized high-performance in-memory cache of frequently used Db2 data, safeguarding data integrity in various environments, including SYSPLEX, Data Sharing, et al…

QuickSelect is a plug-in solution dynamically activated in a batch or OLTP environment (I.E. CICS, IMS/TM) intercepting repetitive SQL statements from Db2 application programs, storing the most active result set, not necessarily the entire tablespace, in a high-performance in-memory cache, returning to applications the same result set as per Db2, but much faster & using less CPU accordingly. QuickSelect is completely transparent to z/OS applications, eliminating any requirement to change/recompile/relink application source or rebind packages. QuickSelect processing can be switched on or off using a single keystroke, either defaulting to standard Db2 SQL processing or to benefit from the QuickSelect high-speed cache for optimized CPU resource usage.

The 64-bit QuickSelect server, implemented as a started task, intelligently caches data in self-managed memory above the bar, supporting up to 16 EB of memory & eliminating concerns of using any other commonly used storage areas (E.g. ECSA). The intelligent caching mechanism safeguards that only highly active data is retained, optimizing the associated cache memory size required.

QuickSelect caches frequently requested Db2 SQL result sets, returning these results to the application from QuickSelect cache, when a repetition of the same SQL is encountered. For data integrity purposes, QuickSelect immediately invalidates result sets upon detection of changes to underlying tables, implicitly validating each cache resident SQL result set. Changes to Db2 data by application programs are captured by a standard Db2 VALIDPROC process, attached to the typically small subset of frequently accessed tables of interest to QuickSelect. Db2 automatically activates the VALIDPROC routine whenever the table contents are changed by INSERT, DELETE, UPDATE or TRUNCATE statements, invalidating cached data from the updated tables automatically. For standard Db2 utilities such as LOAD/REPLACE, REORG/DISCARD & RECOVER, table-level changes are identified by a QuickSelect utility-trap, invalidating cached data from the updated tables automatically. QuickSelect also supports SYSPLEX & Data Sharing environments, supporting update activity via the same XCF functions & processes used by Db2.
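The invalidation principle can be expressed as a minimal conceptual sketch, namely a result set cache indexed by SQL statement, with a per-table index enabling immediate invalidation when an underlying table changes; this illustrates the mechanism described above & is not a representation of QuickSelect internals:

    class ResultSetCache:
        # Conceptual sketch: cache SQL result sets & invalidate by table
        def __init__(self):
            self.results = {}        # (sql, params) -> cached result set
            self.table_index = {}    # table name -> set of cache keys

        def put(self, sql, params, tables, result_set):
            key = (sql, params)
            self.results[key] = result_set
            for table in tables:
                self.table_index.setdefault(table, set()).add(key)

        def get(self, sql, params):
            return self.results.get((sql, params))    # None on a cache miss

        def invalidate_table(self, table):
            # Invoked when a change to the table is detected, E.g. via a
            # VALIDPROC style hook or a utility trap, as described above
            for key in self.table_index.pop(table, set()):
                self.results.pop(key, None)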

QuickSelect delivers the following benefits:

  • CPU Savings: Meaningful reduction (E.g. 20%) in Db2 SQL direct processing CPU; 10%+ peak time CPU reduction is not uncommon.
  • Faster Processing: Optimized CPU usage delivers shorter batch processing & OLTP transaction response times, for related SLA & KPI objective compliance.
  • Transparent Implementation: No application changes required, whether source code, load module or Db2 package.
  • Survey Mode: Unobtrusive & minimal Db2 workload overhead data sampling to identify potential CPU savings from repetitive SQL & tables of interest, before implementation.
  • Staggered Deployment: The ability to implement using granular criteria (E.g. Job, Program, Table, Transaction, Etc.).
  • Reporting & Analytics: Extensive information detailing cache usage for Db2 programs & tables.

Since 1993 Db2 has evolved dramatically, in line with the evolution of the IBM Z Mainframe server. When considering today’s requirement for a digital world, processing ever increasing amounts of mission critical data, a base requirement to optimize CPU processing for Db2 SQL data access is mandatory. In a hybrid support environment where today’s IBM Z Mainframe support resource requires an even blend of technical & business skills, plug & play, easy-to-use & results driven solutions are required to optimize CPU usage, transparent to the subsystem & related application programs. QuickSelect is such a solution, fully exploiting 64-bit z/Architecture for ultimate scalability, identifying & resolving a common CPU consuming data access problem, for a mission critical resource, namely the Db2 subsystem, maintaining mission-critical System of Record data.

z/OS CPU optimization is a mandatory requirement for every organization, to reduce associated software & hardware costs & in theory, as a mandatory prerequisite for deploying the Tailored Fit Pricing for IBM Z pricing mechanism. Tailored Fit Pricing uses the previous 12 Months of SCRT submissions to establish a baseline for MSU charging over a contracted period, typically 3 years. If there are any unused MSU resources, these are carried forward to the next year, but if those MSU resources remain unused at the end of the contracted period, they are lost, meaning the organization has paid too much. If the MSU resource usage exceeds the agreed Tailored Fit Pricing baseline, excess MSU resources are charged at a discounted rate. Clearly achieving an optimal MSU baseline before embarking on a Tailored Fit Pricing contract is arguably mandatory & it therefore follows that optimizing CPU forever more safeguards optimal z/OS MLC charging during the Tailored Fit Pricing contract. QuickSelect for Db2 is a seamless CPU optimization product that will perpetually deliver benefit, assisting organizations to minimize their z/OS MLC costs, whether they continue to proactively manage the R4HA, submitting monthly SCRT reports, or they embark on a Tailored Fit Pricing contract…

Tailored Fit Pricing for IBM Z: A Viable R4HA Alternative?

In a previous blog entry, I discussed the pros and cons of IBM Z Solution Consumption License Charges (SCLC): A Viable R4HA Alternative.  Recently, on 14 May 2019, IBM announced Tailored Fit Pricing for IBM Z, introducing two comprehensive alternatives to the Rolling 4 Hour Average (R4HA) based pricing model, for both new and existing workloads, with a General Availability (GA) date of 21 June 2019.

To digress a little, for those of us in the Northern Hemisphere, June 21 is considered as the Summer Solstice, where the date might vary, one day before or after, namely June 20-22.  You can then further complicate things with confusing Midsummer’s Day with the Summer Solstice and Astronomical versus Meteorological seasons, but whatever, it’s a significant timeframe, with many traditions throughout Europe.  Once again, Midsummer’s Day can be any date between June 19 and June 24.  Having considered my previous review of SCLC and now the Tailored Fit Pricing announcement, I was reminded of a quotation from A Midsummer Night’s Dream by William Shakespeare, “so quick bright things come to confusion”…

The primary driver for Tailored Fit Pricing for IBM Z is to help mitigate unpredictable costs whilst continuing to deliver optimal business outcomes in the world of Digital Transformation & Hybrid Cloud.  Depending on the type of workload activity in your organisation, a tailored pricing model may be far more competitive when compared to pay-as-you-go schemes that have been typical on many x86 based cloud implementations.  Combining technology with cost competitive commercial models delivered through Tailored Fit Pricing strongly challenges the mindset that IT growth must be done on a public cloud in order to make economic sense.  Put another way, this is the IBM Marketing stance to compete with the ever-growing presence of the major 3 Public Cloud providers, namely Amazon Web Services (AWS), Microsoft Azure and Google Cloud, totalling ~60% of Public Cloud customer spend.

In essence a significant portion of the Tailored Fit Pricing for IBM Z announcement is a brand renaming activity, where the Container Pricing for IBM Z name changes to Tailored Fit Pricing for IBM Z.  The IBM Application Development and Test Solution and the IBM New Application Solution that were previously introduced under the Container Pricing for IBM Z name, are now offered under the Tailored Fit Pricing for IBM Z name.  Tailored Fit Pricing for IBM Z introduces two new pricing solutions for IBM Z software running on the z/OS platform.  The Enterprise Consumption and Enterprise Capacity Solutions are both tailored to your environment and offer flexible deployment options:

  • Enterprise Consumption Solution: a tailored usage-based pricing model where compute power is measured on a per MSU basis.  MSU consumption is aggregated hourly, providing a measurement system better aligned with actual system utilization, when compared with R4HA, as illustrated by the sketch following this list.  Software charges are based on the total annual MSU usage, assisting users with seasonal workload pattern variations.  A total MSU used charging mechanism is designed to remove MSU capping, optimizing SLA and response time metrics accordingly.
  • Enterprise Capacity Solution: a tailored full-capacity licensing model, offering the maximum level of cost predictability.  Charges are based on the overall size of the physical hardware environment.  Charges are calculated based on the estimated mix of workloads running, while providing the flexibility to vary actual usage across workloads. Charges include increased capacity for development and test environments and reduced pricing for all types of workload growth.  An overall size charging mechanism is designed to remove MSU capping, optimizing SLA and response time metrics accordingly.
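The following minimal Python sketch contrasts the two measurement approaches for the same illustrative hourly utilization profile, namely total MSU consumption, the Enterprise Consumption charging basis, versus the peak R4HA; all MSU figures are invented for illustration purposes:

    # Illustrative hourly MSU utilization for one day (invented values)
    hourly_msu = [40, 42, 45, 50, 120, 130, 125, 60, 55, 50, 48, 45,
                  44, 46, 90, 95, 92, 88, 60, 55, 50, 48, 45, 42]

    # Enterprise Consumption basis: MSU consumption aggregated hourly & totalled
    total_consumption = sum(hourly_msu)

    # Traditional basis: the peak Rolling 4 Hour Average (R4HA)
    r4ha = [sum(hourly_msu[i - 3:i + 1]) / 4 for i in range(3, len(hourly_msu))]
    peak_r4ha = max(r4ha)

    print("Total MSU consumed: %d" % total_consumption)   # consumption charging
    print("Peak R4HA: %.1f MSU" % peak_r4ha)              # R4HA based charging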

The high-level benefits associated with the Enterprise Consumption and Enterprise Capacity solutions can be summarized as:

  • Licensing models that eradicate cost control capping activities, enabling clients to fully exploit the CPU capacity installed
  • Increased CPU capacity for Development and Test (DevTest) environments, enabling clients to dramatically increase DevTest activities, without cost consideration
  • Optimized and potential lower pricing for all types of workload growth, without requiring additional IBM approvals, or additional tagging and tracking

Enterprise Solution License Charges (ESLC) are a new type of Monthly License Charge (MLC) pricing methodology for Enterprise Solutions, tailored for each individual and specific client environment and related requirements.  It was forever thus, whatever the pricing mechanism, the ubiquitous z/OS, CICS, Db2, MQ, IMS and WAS software products are the major considerations for MLC pricing mechanisms.  The key prerequisites for Tailored Fit Pricing for IBM Z are IBM z14 Models M01-M05 or the z14 Model ZR1, running the z/OS 2.2 and higher Operating System.

For new Mission Critical workloads and existing or new Development and Test (DevTest) workloads, Tailored Fit Pricing for IBM Z is clearly a great fit.  The restriction of z14 hardware is a little disappointing, where Solution Consumption License Charges (SCLC) included support for the z13 and z13s server.  I’m guessing that IBM are relying upon a significant z14 field upgrade programme in the next few years, largely based upon the Pervasive Encryption (PE) functionality.  However, for those customers that have run the IBM Z platform for decades and might have invested in cost optimization activities, including but not limited to capping, the jump to these new Enterprise Solution License Charges (ESLC) might take a while…

We could review this isolated announcement to the nth degree, but I’m not sure how productive that might be.  For sure, there is always devil in the detail, but sometimes we need to consider the big picture…

As a baby boomer myself, I see my role as passing on my knowledge to the next generations, although still wanting and striving to learn each and every day.  At this time of year, where the weather is better and roads drier, I drive my classic car a lot more and I enjoy the ability to tune the engine with my ears, hands, eyes and a strobe; getting my hands dirty!  I wonder whether the future of the IBM Z platform ecosystem is somewhat analogous to that of the combustion engine.  Several decades ago, electronics and Engine Management Systems became commonplace for combustion engines and now the ubiquitous laptop is plugged into the engine bay, to retrieve codes to diagnose and in theory repair faults.  For the consumer, arguably a good thing from a vehicle reliability viewpoint, but from a mechanical engineer viewpoint, have these folks become deskilled?  If you truly want your modern vehicle fixed, you will probably need a baby boomer to do this, one that doesn’t rely on a laptop, but their experience.  Although a sweeping generalisation, as there are always exceptions to any rule, the same applies to the IBM Z environment, where it was forever thus, compute power (MSU/MIPS) optimization relies upon a tune, tune, tune approach.

Whether R4HA or Full Capacity based, software cost charges will only be truly optimized if the system and ultimately application code is tuned.  A potential downside of not paying close attention to MSU usage, especially when considering these Enterprise Solution License Charges, is an isolated activity to “fix” IBM Z software costs forevermore, based upon a high MSU baseline.  Just as combustion engine management systems simplify fault or diagnostic data collection, they don’t necessarily highlight that the vehicle owner left their cargo carrier on the vehicle roof, harming fuel efficiency.  A crude analogy for sure, but experience counts for a lot.  We have all probably encountered the Old Engineer & The Hammer story before and ultimately it’s incumbent upon us all, to safeguard that we don’t enable a rapid “death of expertise”.  Once the skills are lost, they’re lost.  Whether iStrobe from Compuware, TurboTune from Critical Path Software Inc. or the myriad of other System Monitor options, engage the experienced engineer and safeguard MSU optimization.  At this point, deploy the latest IBM Z pricing mechanism, namely Tailored Fit Pricing for IBM Z, and you will have truly optimized software costs…

IBM Z Solution Consumption License Charges (SCLC): A Viable R4HA Alternative?

In the same timeframe as the recent IBM z14 and LinuxONE Enhanced Driver Maintenance (GA2) hardware announcements, there were modifications to the Container Pricing for IBM Z mechanism, namely Solution Consumption License Charges (SCLC) and the Application Development and Test Solution.  Neither of these new pricing models are dependent on the IBM z14 GA2 hardware announcement, but do require the latest IBM z13, IBM z13s, IBM z14 or IBM z14 ZR1 servers and z/OS V2.2 and upwards for collocated workloads and z/OS V2.1 and upwards for separate LPAR workloads.

For many years, IBM themselves have attempted to introduce new sub-capacity software pricing models to encourage new workloads to the IBM Z server and associated z/OS operating system.  Some iterations include z Systems New Application License Charges (zNALC), Integrated Workload Pricing (IWP) and z Systems Collocated Application Pricing (zCAP), naming but a few.  The latest iteration appears to be Container Pricing for IBM Z, announced in July 2017, with three options, namely the aforementioned Application Development and Test Solution, the New Application Solution and Payments Pricing Solution.  This recent October 2018 announcement adapts the New Application Solution option, classifying it as the Solution Consumption License Charges (SCLC) mechanism.  For the purposes of this blog, we will concentrate on the SCLC mechanism, although the potential benefits of the Application Development and Test Solution for non-Production workloads should not be underestimated…

From a big picture viewpoint, z/OS, CICS, Db2, IMS and MQ are the most expensive IBM Z software products and of course, IBM Mainframe users have designed their environments to reduce software costs accordingly, initially with sub-capacity and then Workload Licence Charging (WLC) and the associated Rolling 4 Hour Average (R4HA).  Arguably CPU MSU management is a specialized capacity and performance management discipline in itself, with several 3rd party ISV options for optimized soft-capping (I.E. AutoSoftCapping, iCap, zDynaCap/Dynamic Capacity Intelligence).  IBM thinks that this MSU management discipline has thwarted new workloads being added to the IBM Z ecosystem, unless there was a mandatory requirement for CICS, Db2, IMS or MQ.  Hence this recent approach of adding new and qualified workloads, outside of the traditional R4HA mechanism.  These things take time and with a few tweaks and repairs, maybe the realm of possibility exists and perhaps the Solution Consumption License Charges (SCLC) is a viable and eminently usable option?

SCLC offers a new pricing metric when calculating MLC software costs for qualified Container Pricing workloads.  SCLC is based on actual MSU consumption, as opposed to the traditional R4HA WLC metric.  SCLC delivers a pure and consistent metered usage model, where the MSU resource used is charged at the same flat rate, regardless of hourly workload peaks, delivering pricing predictability.  Therefore, SCLC directly reflects the total workload cost, based upon actual consumption, on a predictable “pay for what you use” basis.  This is particularly beneficial for volatile workloads, which can significantly impact WLC costs associated with the R4HA.  There are two variations of SCLC for qualified and IBM verified New Applications (NewApp):

  • The SCLC pay-as-you-go option offers a low priced, per-MSU model for software programs within the NewApp Solution, with no minimum financial commitment.
  • The SCLC-committed MSU option offers a saving of 20% over the pay-as-you-go price points, with a monthly minimum MSU commitment of just 25,000 MSUs.

SCLC costs are calculated and charged per MSU on an hourly basis, aggregated over an entire (SCRT) month.  For example, if a NewApp solution utilized 50 MSU in hour #1, 100 MSU in hour #2 and 50 MSU in hour #3, the total chargeable MSU for the 3-hour period would be 200 MSU.  Hourly periods continue to be calculated this way over the entire month, providing a true, usage-based cost model.  We reviewed Container Pricing in a previous blog entry from August 2017.  At first glance, the opportunity for a predictable workload cost seems evident, but what about the monthly MSU commitment of 25,000 MSU?

Let’s try and break this down at the simplest level, using the SCLC hourly MSU base metric.  In a fixed 24-hour day and an arbitrary 30-day month, there would be 720 hourly periods.  To qualify for the 25,000 MSU commitment, the hourly workload would need to average ~35 MSU (~300 MIPS) in size.  For the medium and large sized business, generating a 35 MSU workload isn’t a consideration, but probably is for the smaller IBM Mainframe user.  The monthly commitment also becomes somewhat of a challenge, as a calendar month is 28/29 days, once per year, 30 days, four times per year and 31 days, seven times per year.  This doesn’t really impact the R4HA, but for a pay per MSU usage model, the number of MSU hours per month does matter.  One must draw one’s own conclusions, but it’s clearly easier to exceed the 25,000 MSU threshold in a 31-day month, when compared with a 30, 29 or 28 day month!  From a dispassionate viewpoint, I can’t see any reason why the 20% discount can’t be applied when the 25,000 MSU threshold is exceeded, without a financial commitment from the customer.  This would be a truly win-win situation for the customer and IBM, as the customer doesn’t have to concern themselves about exceeding the arbitrary 25,000 MSU threshold and IBM have delivered a usable and attractive pricing mechanism for the desired New Application workload.
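A minimal calculation demonstrates the month-length sensitivity of the 25,000 MSU monthly commitment discussed above; the arithmetic is illustrative only:

    # Average hourly MSU required to reach the 25,000 MSU monthly commitment
    COMMITMENT_MSU = 25000

    for days in (28, 29, 30, 31):
        hours = days * 24
        print("%d-day month: %d hourly periods, average %.1f MSU/hour required"
              % (days, hours, COMMITMENT_MSU / hours))

    # A 30-day month yields 720 hourly periods, an average of ~34.7 MSU/hour;
    # a 31-day month requires only ~33.6 MSU/hour to reach the same threshold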

The definition of a New Application workload is forever thus, based upon a qualified and verified workload by IBM, assigned a Solution ID for SCRT classification purposes, integrating CICS, Db2, MQ, IMS or z/OS software.  Therefore existing workloads, potentially classified as legacy, will not qualify for this New Application status, but any application re-engineering activities should consider this lower price per MSU approach.  New technologies such as blockchain could easily transform a legacy application and benefit from New Application pricing, while the implementation of DevOps could easily transform non-Production workloads into benefiting from the Application Development and Test Solution Container Pricing mechanism.

In conclusion, MSU management is a very important discipline for any IBM Z user and any lower cost MSU that can be eliminated from the R4HA metric delivers improved TCO.  As always, the actual IBM Z Mainframe user themselves are ideally placed to interact and collaborate with IBM and perhaps tweak these Container Pricing models to make them eminently viable for all parties concerned, strengthening the IBM Z ecosystem and value proposition accordingly.

IBM z14: Pervasive Encryption & Container Pricing

On 17 July 2017 IBM announced the z14 server as “the next generation of the world’s most powerful transaction system, capable of running more than 12 billion encrypted transactions per day.  The new system also introduces a breakthrough encryption engine that, for the first time, makes it possible to pervasively encrypt data associated with any application, cloud service or database all the time”.

At first glance, a cursory review of the z14 announcement might just appear as another server upgrade release, but that could be a costly mistake by the reader.  There are always subtle nuances in any technology announcement, while finding them and applying them to your own business can sometimes be a challenge.  In this particular instance, perhaps one might consider “Persuasive Encryption & Contained Pricing”…

When IBM releases a new generation of z Systems server, many of us look to the “feeds and speeds” data and ponder how that might influence our performance and capacity profiles.  IBM state that average z14 performance, when compared with the z13, increases by ~10% for 6-way servers and larger.  As per usual, there are software Technology Transition Offering (TTO) discounts ranging from 6% to 21% for z14 only sites.  However, in these times where workload profiles are rapidly changing and evolving, it’s sometimes easy to overlook that IBM have to consider the holistic position of the IBM Z world.  Quite simply, IBM has many divisions, Hardware, Software, Services, et al.  Therefore there has to be interaction between the hardware and software divisions and in this instance, IBM have delivered a z14 server that is security focussed, with their Pervasive Encryption functionality.

Pervasive Encryption provides a simple and transparent approach for z Systems security, enabling the highest levels of data encryption for all data usage scenarios, for example:

  • Processing: When retrieved from files and processed by applications
  • In Flight: When being transmitted over internal and external networks
  • At Rest: When stored in database structures or files
  • In Store: When stored in magnetic storage media

Pervasive Encryption simplifies and reduces the costs associated with protecting data by policy (I.E. Subset) or En Masse (I.E. All Of The Data, All Of the Time), achieving compliance mandates.  When considering the EU GDPR (European Union General Data Protection Regulation) compliance mandate, companies must notify relevant parties within 72 hours of first having become aware of a personal data breach.  Additionally organizations can be fined up to 4% of annual global turnover or €20 Million (whichever is greater), for any GDPR breach unless they can demonstrate that data was encrypted and keys were protected.

To facilitate this new approach for encryption, the IBM z14 infrastructure incorporates several new capabilities integrated throughout the technology stack, including Hardware, Operating System and Middleware.  Integrated CPU chip cryptographic acceleration is enhanced, delivering ~600% increased performance when compared with its z13 predecessor and ~20 times faster performance than competitive server platforms.  File and data set encryption is optimized within the Operating Systems (I.E. z/OS), safeguarding transparent and optimized encryption, not impacting application functionality or performance.  Middleware software subsystems including DB2 and IMS leverage these Pervasive Encryption techniques, safeguarding that High Availability databases can be transitioned to full encryption without stopping the database, application or subsystem.

Arguably IBM had to deliver this type of security functionality for its top tier z Systems customers, as inevitably they would be impacted by compliance mandates such as GDPR.  Conversely, the opportunity to address the majority of external hacking scenarios with one common approach is an attractive proposition.  However, as always, the devil is always in the detail, and given an impending deadline date of May 2018 for GDPR compliance, I wonder how many z Systems customers could implement the requisite z14 hardware and related Operating System (I.E. z/OS) and Subsystem (I.E. CICS, DB2, IMS, MQ, et al) upgrades before this date?  From a bigger picture viewpoint, Pervasive Encryption does offer the requisite functionality to apply a generic end-to-end process for securing all data, especially Mission Critical data…

Previously we have considered the complexity of IBM z Systems pricing mechanisms and in theory, the z14 announcement tried to simplify some of these challenges by building upon and formalizing Container Pricing.  Container Pricing is intended to greatly simplify software pricing for qualified collocated workloads, whether collocated with other existing workloads on the same LPAR, deployed in a separate LPAR or across multiple LPARs.  Container pricing allows the specified workload to be separately priced based on a variety of metrics.  New approved z/OS workloads can be deployed collocated with other sub-capacity products (I.E. CICS, DB2, IMS, MQ, z/OS) without impacting cost profiles of existing workloads.

As per most new IBM z Systems pricing mechanisms of late, there is a commercial collaboration and exchange required between IBM and their customer.  Once a Container Pricing solution is agreed between IBM and their customer, for an agreed price, an IBM Sales order is initiated, triggering the creation of an Approved Solution ID.  The IBM provided solution ID is a 64-character string representing an approved workload with an entitled MSU capacity, representing a Full Capacity Pricing Container used for billing purposes.

Previously we considered the importance of WLM for managing z/OS workloads and its interaction with soft-capping, and this is reinforced with this latest IBM Container Pricing mechanism.  The z/OS Workload Manager (WLM) enables Container Pricing using a resource classified as the Tenant Resource Group (TRG), defining the workload in terms of address spaces and independent enclaves.  The TRG, combined with a unique Approved Solution ID, represents the IBM approved solution.  As per standard SCRT processing, workload instrumentation data is collected, safeguarding that this workload profile does not directly impact the traditional peak LPAR Rolling Four-Hour Average (R4HA).  The TRG also allows the workload to be metered and optionally capped, independent of other workloads that are running collocated in the LPAR.

MSU utilization of the defined workload is recorded by WLM and RMF, subsequently processed by SCRT to subtract the solution MSU capacity from the LPAR R4HA.  The solution can then be priced independently, based on MSU resource consumed by the workload, or based upon other non-MSU values, specifically a Business Value Metric (E.g. Number of Payments).  Therefore Container Pricing is much simpler and much more flexible than previous IBM collocated workload mechanisms, namely IWP and zCAP.

Container Pricing eliminates the requirement to commission specific new environments to optimize MLC pricing.  By deploying a standard IBM process framework, new workloads can be commissioned without impacting the R4HA of collocated workloads, being deployed as per business requirements, whether on the same LPAR, a separate LPAR, or dispersed across multiple LPAR structures.  Quite simply, the standard IBM process framework is the Approved Solution ID, associating the client based z/OS system environment to the associated IBM sales contract.

In this first iteration release associated with the z14 announcement, Container Pricing can be deployed in the following three solution based scenarios:

  • Application Development and Test Solution: Add up to 3 times more capacity to existing Development and Test environments without any additional monthly licensing costs; or create new LPAR environments with competitive pricing.
  • New Application Solution: Add new z/OS microservices or applications, priced individually without impacting the cost of other workloads on the same system.
  • Payments Pricing Solution: A single agreed value based price for software plus hardware or just software, via a number of payments processed metric, based on IBM Financial Transaction Manager (FTM) software.

IBM state z14 support for a maximum 2 million Docker containers in an associated maximum 32 TB memory configuration.  In conjunction with other I/O enhancements, IBM state a z14 performance increase of ~300%, when compared with its z13 predecessor.  Historically the IBM Z platform was never envisaged as being the ideal container platform.  However, its ability to seamlessly support z/OS and Linux, while the majority of mission critical Systems Of Record (SOR) data resides on IBM Z platforms, might just be a compelling case for microservices to be processed on the IBM Z platform, minimizing any data latency transfer.

Container Pricing for z/OS is somewhat analogous to the IBM Cloud Managed Services on z Systems pricing model (I.E. CPU consumption based).  Therefore, if monthly R4HA peak processing is driven by an OLTP application, or any other workload for that matter, any additional unused capacity in that specific SCRT reporting month can be allocated for no cost to other workloads.  Therefore z/OS customers will be able to take advantage of this approach, processing collocated microservices or applications for a zero or nominal cost.

Country Multiplex Pricing (CMP) Observation: The z14 is the first new generation of IBM Z hardware since the introduction of the CMP pricing mechanism.  When a client first implements a Multiplex, IBM Z server eligibility cannot be older than two generations (I.E. N-2) prior to the most recently available server (I.E. N).  Therefore the General Availability (GA) of z14 classifies the z114 and z196 servers as previously eligible CMP machines.  IBM will provide a 3 Month grace period for CMP transition activities for these N-3 servers, namely z114 and z196.  Quite simply, the first client CMP invoice must be submitted within 90 days of the z14 GA date, namely 13 September 2017, no later than 1 January 2018.

In conclusion, Pervasive Encryption is an omnipresent z14 function integrated into every data lifecycle stage, which could easily be classified as Persuasive Encryption, simplifying the sometimes arduous process of classifying and managing mission-critical data.  As cybersecurity becomes an omnipresent clear and present danger, associated with impending and increasingly punitive compliance mandates such as GDPR, the realm of possibility exists to resolve this high profile corporate challenge once and for all.

Likewise, Container Pricing provides a much needed simple-to-use framework to drive MSU cost optimization for new workloads and could easily be classified as Contained Pricing.  The committed IBM Mainframe customer will upgrade their z13 server environment to z14, as part of their periodic technology refresh approach.  Arguably, those Mainframe customers who have been somewhat hesitant in upgrading from older technology Mainframe servers, might just have a compelling reason to upgrade their environments to z14, safeguarding cybersecurity challenges and evolving processes to contain z/OS MLC costs.

Are You Ready For z Systems Workload Pricing for Cloud (zWPC) for z/OS?

Recently IBM announced the z Systems Workload Pricing for Cloud (zWPC) for z/OS pricing mechanism, which can minimize the impact of new Public Cloud workload transactions on Sub-Capacity license charges.  Such benefits will be delivered where higher Public Cloud workload transaction volumes may cause a spike in machine utilization.  Of course, if this looks familiar and you have that feeling of déjà vu, this is a very similar mechanism to Mobile Workload Pricing (MWP)…

Put simply, zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms, for the usual MLC software suspects, namely z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public Cloud workloads are defined as transactions processed by named Public Cloud applications, identified as originating from a recognized Public Cloud offering, including but not limited to, Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, et al.

As per MWP, SCRT calculates the R4HA for Public Cloud transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The software program values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  Therefore SCRT determines the billable MSU peak for a given MLC software program on a CPC using the adjusted MSU values.  As per MWP, this will only be of benefit, if the Public Cloud originated transactions generate a spike in the current R4HA.
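A simplified Python sketch of this hourly adjustment follows, operating on illustrative (R4HA, Public Cloud) MSU pairs; the real calculation is performed by SCRT with LPAR granularity and the figures are invented.  Note that the billable peak only reduces because the Public Cloud transactions drive the spike:

    # Illustrative hourly (total R4HA MSU, Public Cloud MSU) pairs (invented)
    hourly = [(500, 0), (520, 30), (610, 150), (650, 200), (560, 80), (510, 10)]

    # zWPC adjustment: subtract 60% of the Public Cloud MSU from each hour
    adjusted = [total - 0.6 * cloud for total, cloud in hourly]

    unadjusted_peak = max(total for total, cloud in hourly)   # 650 MSU
    billable_peak = max(adjusted)                             # 650 - 120 = 530 MSU

    print("Unadjusted R4HA peak: %d MSU" % unadjusted_peak)
    print("zWPC adjusted billable peak: %d MSU" % billable_peak)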

One of the major challenges for implementing MWP was identifying those transactions eligible for consideration.  Very quickly IBM identified this challenge and offered a WorkLoad Manager (WLM) based solution, to simplify reporting for all concerned.  This WLM SPE (OA47042) introduced a new transaction level attribute in WLM classification, allowing for identification of mobile transactions and associated processor consumption.  These Reporting Attributes were classified as NONE, MOBILE, CATEGORYA and CATEGORYB.  Obviously IBM made allowances for future workload classifications, hence it would seem Public Cloud will supplement Mobile transactions.

In a previous z/OS Workload Manager (WLM): Balancing Cost & Performance blog post, we considered the merits of WLM for optimizing z/OS software costs, while maintaining optimal performance.  One must draw one’s own conclusions, but there seemed to be a strong case for WLM reporting to be included in the z/OS MLC Cost Manager toolkit.  The introduction of zWPC, being analogous to MWP, where reporting can be simplified with supplied and supported WLM function, indicates that intelligent and proactive WLM reporting makes sense.  Certainly for 3rd party Soft-Capping solutions, the ability to identify MWP and zWPC eligible transactions in real-time, proactively implementing MSU optimization activities seems mandatory.

The Workload X-Ray (WLXR) solution from zIT Consulting delivers this WLM reporting function, seamlessly integrating with their zDynaCap and zPrice Manager MSU optimization solutions.  Of course, there is always the possibility to create your own bespoke reports to extract the relevant information from SMF records and subsystem diagnostic data, for input to the SCRT process.  However, such a home-grown process will only work on a monthly reporting basis and not integrate with any Soft-Capping MSU management, which will ultimately control z/OS MLC costs.

In conclusion, from a big picture viewpoint, in the last 2 years or so, IBM have introduced several new Sub-Capacity pricing mechanisms to help System z Mainframe users optimize z/OS MLC costs, namely Mobile Workload Pricing (MWP), Country Multiplex Pricing (CMP) and now z Systems Workload Pricing for Cloud (zWPC).  In theory, at least one of these new pricing mechanisms should deliver benefit to the committed System z user, deploying this server for strategic and Mission Critical workloads.  With the undoubted strategic importance associated with Analytics, Blockchain, Cloud, DevOps, Mobile, Social, et al, the landscape for System z workloads is rapidly evolving and potentially impacting those sacrosanct legacy Mission Critical workloads.  Seemingly the realm of possibility exists that Cloud and Mobile originated transactions will dominate access to System z Mainframe System Of Record (SOR) data repositories, which generates a requirement to optimize associated MLC costs accordingly.  Of course, for some System z users, such Cloud and Mobile access might not be on today’s to-do list, but inevitably it’s on the horizon, and so why not implement the instrumentation ability ASAP!

z13 WLC Software Pricing Updates: Are You Ready?

Along with the z13 hardware announcement were several very obvious WLC pricing announcements, but more importantly, two hidden Statements Of Direction (SOD) or pre-announcements.

I guess we can all remember the “zSeries Technology Dividend” where put simply, when upgrading zSeries servers, users would benefit from a ~10%+ software price versus performance benefit.  Does anybody still remember the IBM Mainframe Charter from 2003?  That was the document that first referenced this price/performance benefit, which became known as the “technology dividend”.  Specifically, this document stated:

IBM lowered MSU values incorporated in the z990 microcode by approximately 10 percent, resulting in IBM software savings for IBM zSeries software products with MSU-based pricing.  These reduced MSUs do not indicate a change in machine performance. Superior performance and technology within the z990 has allowed IBM to provide improved software prices for key IBM zSeries operating system and middleware software products.

Put really simply, for z990, z9 and z10 server upgrades, IBM delivered this ~10% benefit with faster CPU chips.  Therefore, no noticeable impact on Software Pricing, Capacity Planning or Performance Measurement processes.  However, with the z196/z114, this ~10% benefit could no longer be delivered by CPU chip hardware speed enhancements.  To compensate, IBM introduced the Advanced Workload License Charges (AWLC) pricing regime.  AWLC is an evolution of the Variable (VWLC) pricing regime, lowering per MSU costs for WLC eligible products (E.g. z/OS, CICS, DB2, IMS, WebSphere/MQ, et al).  Hence delivering the ~10% price/performance benefit when upgrading from a z10 to a z196 or z114 (AEWLC) server.

Of course, when upgrading to the zEC12 or zBC12, further refinement of AWLC pricing was required, to deliver the ~10% price/performance benefit.  Hence, IBM introduced the AWLC Technology Transition Offerings (TTO), lowering AWLC prices for zXC12 and now z13 zSeries servers.

For z13, IBM announced the following z13 AWLC Technology Transition Offerings:

  • Technology Update Pricing for the IBM z13 (TU3): When stand-alone z13 servers are priced with AWLC, or when all the servers in an aggregated Sysplex or Complex are z13 servers priced with AWLC, these servers receive a reduction to AWLC pricing, based upon the quantity of z13 Full Capacity MSUs for a stand-alone server, or the sum of Full Capacity MSUs in an actively coupled Parallel Sysplex or Loosely Coupled Complex made up entirely of z13 servers.  AWLC discounts range from 4% (4-45 MSU) to 14% (5477+ MSU).
  • AWLC Sysplex Transition Charges (TC2): When two or more machines exist in an aggregated Sysplex or Complex, at least one is a z13, zEC12, or zBC12 server and at least one is a z196 or z114 server, with no older technology machines included, they will receive a reduction to AWLC pricing across the aggregated Sysplex or Complex.  This reduction provides a portion of the benefit related to the Technology Update Pricing for AWLC (TU1), based upon the proportion of z13, zEC12 or zBC12 server capacity in the Sysplex or Complex.  AWLC discounts range from 0.5% (0-20% z13/zXC12 MSU) to 4.5% (81%-<100% z13/zXC12 MSU).
  • AWLC Sysplex Transition Charges (TC3): When two or more machines exist in an aggregated Sysplex or Complex & at least one is a z13 server & at least one is a zEC12 or zBC12 server, with no older technology machines included, they will receive a reduction to AWLC pricing across the aggregated Sysplex or Complex. This reduction provides a portion of the benefit related to the IBM z13 TU3 offering, based on the total Full Capacity MSU of all z13, zEC12, & zBC12 Machines in the Sysplex or Complex.  AWLC discounts range from 2.8% (4-45 MSU) to 9.8% (5477+ MSU).

These AWLC software pricing announcements are Business As Usual (BAU) and to be expected, but if we dig slightly deeper into the z13 announcements, we will find two other pre-announcements of interest!

Since introducing sub-capacity and WLC pricing regimes, IBM have continually evolved zSeries software sub-capacity pricing mechanisms, with zNALC, AWLC, IWP and more recently MWP offerings.  From a generic viewpoint, with the exception of zNALC, a niche new workload price offering, these pricing announcements did not challenge the “status quo”, where aggregated MSU and large LPAR structures were the ideal.  So why might the upcoming z13 (E.g. Q2 2015) pricing announcements be of note?  Primarily because they challenge the notion of having separate structural entities (I.E. Sysplex Coupled zSeries Servers & LPARS) for existing and new workloads.

Country Multiplex Pricing (CMP): A major evolution, essentially eliminating prior Sysplex pricing rules, requiring that systems be interconnected and/or sharing the same data in order to be eligible for aggregation of MLC software pricing charges.  A Multiplex is defined as the collection of all z Systems within a country.  Therefore, sub-capacity usage will be measured & reported as a single machine, regardless of the connectivity or data sharing configurations.  A new sub-capacity reporting tool is being implemented & clients should expect a transition period as the new pricing model is implemented.  This should allow flexibility to move & run work anywhere, eradicating multiple workload peaks when workloads move between machines.  Ultimately the cost of growth is reduced with one price per product based on MLC capacity growth anywhere in the country.  CMP should facilitate flexible deployment and movement of business workloads between all zSeries Servers located within a country, without impacting MLC billing.  For the avoidance of doubt, this will assist the customer in safeguarding they don’t encounter duplicate MLC peaks as a result of moving an LPAR workload from one zSeries Server to another.  It also removes all Sysplex aggregation considerations, Single Version Charging (SVC) time limits and Cross Systems Waivers (CSW).  Most notably, the cost per MSU for additional capacity will be optimized, being based upon total Multiplex MSU capacity.
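The duplicate peak avoidance can be illustrated simply; under machine-level reporting the billable MSU is the sum of each machine’s individual peak, whereas under CMP the whole country Multiplex is measured as a single machine, so only the combined peak counts.  The following minimal sketch uses invented figures:

    # Illustrative hourly MSU for two machines; a workload moves from A to B
    machine_a = [400, 420, 450, 200, 150, 140]
    machine_b = [100, 110, 120, 380, 400, 410]

    # Pre-CMP: each machine reports separately, so both peaks are billable
    separate_peaks = max(machine_a) + max(machine_b)    # 450 + 410 = 860 MSU

    # CMP: the Multiplex is measured as a single machine; one combined peak
    multiplex_peak = max(a + b for a, b in zip(machine_a, machine_b))   # 580 MSU

    print("Sum of separate machine peaks: %d MSU" % separate_peaks)
    print("Single Multiplex peak: %d MSU" % multiplex_peak)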

IBM Collocated Application Pricing (ICAP): Previously, new applications (zNALC) required a separate LPAR to avoid increases in other MLC software charges.  ICAP allows new eligible applications to be charged as if they are running in a dedicated environment.  Technically they are integrated with other (non-eligible) workloads.  Software supporting the new application will not impact the charges for other MLC software collocated in the same LPAR.  ICAP appears as an evolution of the Mobile Workload Pricing (MWP) for z/OS pricing mechanism.  ICAP will use an enhanced MWRT, implemented as a z/OS application.  ICAP applies to z13, zXC12 and z196/z114 servers.  IBM anticipates that ICAP will deliver zNALC type price benefit, discounting ~50% of ICAP eligible software MSU.

Seemingly IBM have learned from the lessons of IWP, where at first glance, software discounts were attractive, but not at the cost of a separate LPAR.  From a reporting viewpoint, there are similarities to Mobile Workload Pricing for z/OS (MWP), but most notably, pricing is largely zNALC based.  Therefore ICAP allows the collocation of new workloads in the same LPAR as existing workloads, with the best price performance of any pricing regime, except zNALC, which is a niche and special edition software pricing metric.

In conclusion, CMP and ICAP are notable WLC pricing regime updates, because they do challenge the status quo of MSU aggregation via Sysplex coupled servers and the ability to collocate new and existing workloads in the same LPAR.  On the one hand, simplified pricing considerations from a granular per MSU cost viewpoint.  However, to optimize price versus performance, arguably the savvy Data Centre will now require a higher level of workload management, safeguarding optimum MSU capacity usage and associated performance.

zPrice Manager is an evolution of the typical soft-capping approach, which can be IBM function based, namely Defined Capacity (DC) or Group Capacity Limit (GCL), or ISV product based.  ISV products typically allow MSU management with dynamic MSU capacity resource management between LPAR, LPAR Group & CPC structures, ideally with Workload Manager (WLM) interaction.  If plug & play simple MSU management is required, these traditional IBM or 3rd party ISV approaches will still work with CMP and ICAP, but will they maximize WLC TCO?

The simple answer is no, because CMP allows the movement of workloads between zSeries Servers.  Therefore if WLC product (I.E. z/OS, CICS, DB2, IMS, WebSphere/MQ) pricing is to be country wide, and optimum WLM performance is to be maintained, a low level granularity of MSU management is required.

zPrice Manager from zIT Consulting allows this level of WLC software product management, with a High Level REXX programmatic interface, and the ability to store real life MSU profile data as callable REXX variables.  Similar benefits apply to ICAP workloads, where different WLM policies might be required for the same WLC product, deployed on the same collocated workload LPAR.  Therefore the savvy data centre will safeguard they optimize MSU TCO via CMP and/or ICAP pricing regimes, without impacting business application performance.

In conclusion, the typical z13 AWLC software pricing updates are Business As Usual (BAU) and can be implemented, as and when required, without further consideration.  Conversely, CMP and ICAP can deliver significant future benefit and should be considered in zSeries Server capacity planning forecasts.

Bottom Line Recommendation: Each and every zSeries Server user, whether large or small, should initiate contact with their IBM account teams, for CMP and ICAP briefings, allowing them to consider how they might benefit from these new WLC software pricing regimes.

IFL – A Cost Efficient zSeries Platform?

In September 2000, IBM introduced the Integrated Facility for Linux (IFL) processor, a specialty engine for, and some might say dedicated to, running the Linux Operating System.  At the time of this announcement, companion software named S/390 Virtual Image Facility for Linux was introduced to assist in the rapid deployment of IFL configurations, especially for non-Mainframe personnel.  However, this product was quickly discontinued, in favour of the standard z/VM Operating System, which is not difficult to learn and can accommodate hundreds if not thousands of zLinux images.

Today, the IFL is still a processor dedicated to Linux workloads on IBM System z servers.  The IFL is supported by z/VM virtualization and the Linux operating system.  The IFL cannot run other IBM operating systems.  The competitively priced IFL processor is a CPU capacity enabler, exclusively for Linux workloads.  Linux deployment (I.E. SUSE & Red Hat) on IFL’s can reduce expenses in the areas of operational efforts, energy, floor space and especially software.

The IFL provides the following functions and benefits:

  • The IBM Enterprise Linux Server is a dedicated System z Linux server, comprised of only IFL processors
  • No additional IBM software charges for traditional (E.g. z/OS, CICS, DB2, WebSphere, et al) environment
  • Performance improvement for Linux workloads with each successive generation of IFL and System z technology
  • Linux workload on the IFL does not result in increased IBM software charges for traditional System z operating systems and middleware
  • Same functionality as a General Purpose processor on a System z server
  • HiperSockets can be used for communication between Linux images, or Linux and other operating system images on the same System z system
  • z/VM virtualization and most IBM Linux middleware products, plus most vendor software products are priced per processor (core) according to the System z IBM International Program License Agreement (IPLA).  IPLA products have a one-time-charge (OTC) and an annual (optional) maintenance charge, called Subscription & Support
  • Supported by the current z/VM virtualization and IBM Wave for z/VM software versions
  • Always a full capacity processor, independent of the capacity of the other processors in the server
  • Orderable as a System z hardware feature. The number of orderable IFL features varies by the server model and configuration
  • Designed to operate asynchronously with other General Purpose processors
  • Managed by PR/SM in a logical partition with dedicated or shared processors. The implementation of an IFL requires a Logical Partition (LPAR) definition, following the normal LPAR activation procedure; an LPAR defined with an IFL cannot be shared with a General Purpose processor.

There will always be the debate as to which processor and associated server type (E.g. x86, POWER, SPARC) is the most cost efficient, but there is no doubt that the ability to accommodate hundreds if not thousands of zLinux instances in one zServer environmental (E.g. Power, Cooling, Floor Space, et al) friendly footprint, with software pricing per core is worthy of consideration.

Adoption of zLinux has been steady, especially in the emerging territories where it’s not unusual for zSeries deployments to be totally zLinux (I.E. IBM Enterprise Linux Server) based.  Moreover, the majority of large and traditional IBM Mainframe users (I.E. z/OS) have installed at least one IFL, if only to evaluate the z/VM and zLinux offering.  Many have deployed the IFL and associated zLinux solution for business requirements.

Therefore, if one of the major cost benefit features of IFL is optimized software costs; can the IFL processor be considered for other workloads, originating from the traditional zSeries (I.E. z/OS) environments?

Proximal Systems Corporation (PSC) is a company with a solution that transparently offloads data processing from IBM Mainframes to Distributed Systems, with an objective of reducing software cost, while maintaining or improving performance.  The company name is derived from the concept of bringing disparate computing systems into close proximity, functionally speaking, providing totally seamless and transparent interoperability.  The result is a unified computing complex within which various tasks can be easily migrated between systems to their most cost efficient operating environment, while still being able to interoperate as if they were all hosted together on the same system.

The PSC Proxy Coupling Technology allows a CPU orientated task to be offloaded from one system to another by means of an associated proxy task, which has an identical interface to the task to be offloaded, but delegates the majority of the processing to an offloaded task on another system.  The primary objectives of this function are the cost savings and/or performance improvements that might be delivered by migrating tasks to systems that are able to execute those tasks more efficiently.

The fact that the proxy task maintains the same interface as the application being replaced is crucial; as many past Mainframe migration projects have failed due to insurmountable interoperability problems between the Mainframe and Distributed Systems servers (I.E. Windows, Linux, UNIX, et al).  Proxy Coupling Technology offers a solution to this long-standing challenge.  In theory, this allows for the transparent offload of a traditional z/OS workload (E.g. Sort) from General Purpose (GP) processors, to a less expensive (E.g. IFL) alternative…
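To illustrate the proxy concept, here is a minimal Python sketch; the class and method names are hypothetical and demonstrate the pattern only, not the PSC implementation:

  class SortTask:
      """The interface the existing z/OS workload expects."""
      def sort(self, records):
          return sorted(records)

  class OffloadHost:
      """Stand-in for the offload system (E.g. zLinux on an IFL)."""
      def execute(self, operation, payload):
          if operation == "sort":
              return sorted(payload)
          raise ValueError("unsupported operation: " + operation)

  class ProxySortTask(SortTask):
      """Identical interface to SortTask, but the work is delegated."""
      def __init__(self, offload_host):
          self.offload_host = offload_host
      def sort(self, records):
          # Ship the data (E.g. via HiperSockets) and return the result,
          # transparently to the calling application.
          return self.offload_host.execute("sort", records)

Because ProxySortTask preserves the SortTask interface, the calling application requires no change; only the deployment decision of where the work executes differs.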

In the first instance, the Proxy Coupling Technology offloads General Purpose CPU workload associated with the z/OS sort (I.E. CA Sort, DFSORT, Syncsort) function, to another platform (E.g. IFL).  For IFL based implementations, HiperSockets are utilized to transfer data at memory speeds from the z/OS task to zLinux on the IFL, where the sort operation completes, while the resulting z/OS task and associated data are maintained, as per normal.  From an IFL viewpoint, Ahlsort software performs the sort operation, being a sort solution that maintains compatibility with the majority of z/OS sort function (I.E. Control Card Syntax).  Therefore, this is a transparent implementation, where the only consideration is how much CPU capacity is required for the offload function (E.g. IFL, x86).  The benefits are reduced z/OS MSU usage for the sort function, which can be quite significant, as most business data (E.g. Database Offloads, Customer Orientated, et al) is sorted on a daily if not more frequent basis.

Just as IBM introduced the zAAP on zIIP capability, which allowed some customers to more easily justify a specialty engine (I.E. zIIP), combining workloads to exploit the full capability of the specialty engine; in theory the same ethos applies with the Proxy Coupling Technology.  For the avoidance of doubt, workloads that can be processed on an IFL, such as z/OS sort tasks, can assist in delivering higher Return On Investment (ROI) levels for the IFL, for example:

  • Reduced z/OS WLC MSU usage (I.E. Sort function offload) and associated software costs savings
  • IFL processors run at Full Speed and do not add to traditional workload (I.E. z/OS) software costs
  • Utilize any spare IFL CPU resource, releasing General Purpose CPU resource for other work

In conclusion, the Proxy Coupling Technology offers a proposition that is similar to the IBM philosophy of reducing z/OS software costs via specialty engines.  To date, seemingly only the zIIP and zAAP specialty engines have been available to optimize CPU usage for z/OS workloads.  Offloading CPU cycles and thus MSU workload to the IFL makes sense, utilizing a cost efficient and indeed a full power CPU engine, where for cost reasons, maybe the majority of z/OS customers don’t deploy the “highest” derivative of General Purpose CPU engine available to them.  On the face of it, the realm of possibility exists for other workloads to benefit from z/OS to IFL CPU offload, following sort, which seems to make sense as the first workload to utilize this solution.

Are You Ready For z/OS Mobile Workload Pricing (MWP)?

Recently IBM announced Mobile Workload Pricing (MWP) for z/OS, which can minimize the impact of mobile workloads on Sub-Capacity license charges, delivering optimized pricing for System z environments extending their workloads to incorporate mobile devices.

MWP only applies to Mainframe customers deploying a zEC12 or zBC12 in their enterprise, as per the AWLC or AEWLC (AKA Advanced/Entry Workload License Charges) metric; MWP is also extended if a zEC12 or zBC12 enterprise is deploying a z196 or z114 via the AWLC or AEWLC metric.

The primary consideration for MWP is determining how a Mainframe customer can comply with the tracking requirements for mobile workloads.  On the plus side, MWP does not require isolation of mobile workload transactions in separate LPARs, instead using enhanced reporting for software pricing.  This is a major step forward when compared with Integrated Workload Pricing (IWP), which ideally requires large LPAR container structures to minimize costs for WebSphere workloads, applying only to the CICS, IMS and WebSphere MLC software products.  Conversely, MWP includes DB2 in the list of eligible software products for cost reduction.

If a Mainframe customer is eligible for MWP pricing they will then need to utilize the Mobile Workload Reporting Tool (MWRT), which is analogous to the original Sub-Capacity Reporting Tool (SCRT).  This is an either/or situation; the Mainframe customer only submits MWRT reports to IBM if they’re MWP eligible, otherwise the status quo remains, where non-MWP Mainframe customers continue to submit SCRT reports.

The Mainframe customer must track and report General Purpose (GP) CPU time for mobile transactions, reporting those values in a pre-defined format to IBM each month to benefit from MWP.  MWRT utilizes reported mobile transaction data to adjust the Rolling 4 Hour Average (R4HA) Sub-Capacity software eligible MSUs, with LPAR granularity.  Adjusting for the mobile transaction impact on peak LPAR MSU values delivers the greatest benefit when higher mobile transaction volumes generate MSU resource usage peaks (Workload Spikes).

MWRT calculates the R4HA for mobile transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The software program values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  Therefore MWRT determines the billable MSU peak for a given MLC software program on a CPC using the adjusted MSU values.
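As a hedged illustration of this arithmetic, the following sketch applies the 60% mobile MSU reduction per LPAR per hour, aggregates the same hour across LPARs and derives the billable peak; all MSU values are hypothetical and real reporting is performed by IBM’s MWRT against SMF data:

  def adjusted_hourly_msu(lpar_hours):
      """lpar_hours: {lpar: [(r4ha_msu, mobile_r4ha_msu), ...]}, one
      tuple per reporting hour, per Sub-Capacity eligible LPAR."""
      hours = len(next(iter(lpar_hours.values())))
      totals = []
      for h in range(hours):
          total = 0.0
          for samples in lpar_hours.values():
              r4ha, mobile = samples[h]
              total += r4ha - 0.6 * mobile  # subtract 60% of mobile MSU
          totals.append(total)
      return totals

  hourly = adjusted_hourly_msu({
      "LPAR1": [(500, 200), (650, 400)],  # hypothetical MSU values
      "LPAR2": [(300, 50), (320, 80)],
  })
  billable_peak = max(hourly)  # 682 MSU: (650-240)+(320-48)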

Most committed zSeries Mainframe customers will be deploying CICS, DB2 and WebSphere software, while IT trends dictate that mobile device usage (I.E. Smartphone, Tablet, et al) is increasing, and most z/OS applications that require such mobile access have evolved accordingly over time.  Therefore it seems to be one of those “No Brainer” type scenarios, where the Mainframe user should plan to benefit from MWP, either as they upgrade to the latest zSeries technology, namely zEC12 or zBC12, or immediately if already deploying a zEC12 or zBC12 server.

The only minor consideration is a requirement for the zEC12 or zBC12 customer to engage their local IBM account team, to determine what data they need to report on mobile transactions for MWP consideration.  This one-off task will deliver optimized WLC pricing forever more.

Of course IBM are encouraging customers to consider the Mainframe for new applications, driven by mobile transaction requirements.  Equally, there is no reason why longer term Mainframe customers can’t benefit from MWP, benefitting from reduced MLC costs, a major consideration of Mainframe TCO.

z/OS Soft Capping: Balancing Cost & Performance

Historically each and every LPAR was assigned a Relative Weight value; a more meaningful description would be the initial processing weight. This relative weight value is used to determine which LPAR gains access to resources, where multiple LPARs are competing for the same resource. One minor challenge of the relative weight value is that it is unit-less, meaning that it has no explicit CPU capacity or resource value. Typically installations would use a simple multiple of ten metric, most likely 1000, and allocate weights accordingly (E.g. 600=60%, 300=30%, 100=10%, et al). Therefore during periods of resource contention, PR/SM would allocate resources to the requisite LPAR, based upon its relative weight.
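The arithmetic is simple; with a total weight of 1000, an LPAR’s share during contention is its weight divided by the sum of all competing weights:

  weights = {"PROD": 600, "DEV": 300, "TEST": 100}  # sums to 1000
  total = sum(weights.values())
  shares = {lpar: weight / total for lpar, weight in weights.items()}
  # {'PROD': 0.6, 'DEV': 0.3, 'TEST': 0.1} - PR/SM allocates contended
  # CPU resource to each LPAR in these proportions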

Using relative weight to classify all LPARs as equal, at least from a generic class viewpoint, does have some considerations; primarily differentiating between Production and Non-Production workloads. Restricting a workload to its relative weight share of resources is known as Hard Capping. This setting is typically used to restrict Non-Production (E.g. Test) environments to their allocated resource and is also useful for cost control (E.g. Outsourcers), knowing that the LPAR will never consume more than its allocated relative weight allowance.

Hard Capping behaviour changes dependent on the use of the HiperDispatch setting. When HiperDispatch is not active, capping is performed at the Logical CP level, where the goal is for each logical CP to receive its relative CP share, based on the relative weight setting. When HiperDispatch is active, vertical as opposed to horizontal CPU management applies; a High categorization dictates capping at 100% of the logical CP, whereas a Medium or Low setting allows for resource sharing based on a relative weight per CP basis.

The Intelligent Resource Director (IRD) function provides more advanced relative weight management, automating management of CPU resources and a subset of I/O resources. Workload Manager (WLM) manages physical CPU resource across z/OS images within an LPAR cluster based on service class goals. IRD is implemented as a collaboration between the WLM function and the PR/SM Logical Partitioning (LPAR) hypervisor:

  • Logical CP Management: dynamically allocating logical processors (E.g. Vary On-Line/Off-Line)
  • Relative Weight Management: dynamically redistributing CPU resource as per LPAR weights
  • CHPID Management: dynamically assigning logical channel paths between eligible LPARs

IRD optimizes resource usage, enabling WLM to deliver workload goals.

The use of relative weight in association with Hard Capping and/or IRD/WLM granularity has become somewhat limited for most Mainframe installations with the advent of Sub-Capacity pricing (I.E. MLC via SCRT/R4HA). Primarily because there is no direct correlation to manage CPU resource at a meaningful level, namely the MSU (vis-à-vis CPU MIPS) metric.

Defined Capacity (DC) provides Sub-Capacity CEC pricing by allowing definition of LPAR capacity with a granularity of 1 MSU. In conjunction with the WLM function, the Defined Capacity of an LPAR dictates whether Soft Capping is invoked or not. At this juncture, we should consider how and when WLM measures CPU resource usage and if and when Soft Capping is activated and deactivated:

WLM is responsible for taking MSU utilization samples for each LPAR in 10-second intervals. Every 5 minutes, WLM documents the highest observed MSU sample value from the 10-second interval samples. This process always keeps track of the past 48 updates taken for each LPAR. When the 49th reading is taken, the 1st reading is deleted, and so on. These 48 values continually represent a total of 5 minutes * 48 readings = 240 minutes or the past 4 hours (I.E. R4HA). WLM stores the average of these 48 values in the WLM control block RCT.RCTLACS. Each time RMF (or BMC CMF equivalent) creates a Type 70 record, the SMF70LAC field represents the average of all 48 MSU values for the respective LPAR a particular Type 70 record represents. Hence, we have the “Rolling 4 Hour Average”. RMF gets the value populated in SMF70LAC from RCT.RCTLACS at the time the record is created.
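A minimal Python simulation of this sampling scheme clarifies the mechanics; the class name is illustrative and assumes at least one recorded reading:

  from collections import deque

  class RollingFourHourAverage:
      """Mimics the WLM scheme described above: every 5 minutes the
      highest MSU value from the 10-second samples is recorded and the
      average of the last 48 recordings (4 hours) is the R4HA."""
      def __init__(self):
          self.readings = deque(maxlen=48)  # the 49th evicts the 1st

      def record_interval(self, ten_second_msu_samples):
          self.readings.append(max(ten_second_msu_samples))

      def r4ha(self):
          # Analogous to RCT.RCTLACS; surfaced by RMF in SMF70LAC
          return sum(self.readings) / len(self.readings)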

SCRT also uses the Type 70 field SMF70WLA to ensure that the values recorded in SMF70LAC do not exceed the maximum available MSU capacity assigned to an LPAR. If this ever happens (due to Soft Capping or otherwise) SCRT uses the value in SMF70WLA instead of SMF70LAC. Values in SMF70WLA represent the total capacity available to the LPAR.

We should also consider the two possibilities for MLC software payment (I.E. SCRT) based upon MSU resource usage. Quite simply, the MSU value passed for SCRT invoice consideration is the R4HA or the Defined Capacity, whichever is the lowest. Put another way; if the R4HA exceeds Defined Capacity, Soft Capping applies to the LPAR.
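This decision can be expressed in a couple of lines; a trivial sketch:

  def billable_msu(r4ha, defined_capacity):
      # SCRT considers the lower of the R4HA and the Defined Capacity;
      # if the R4HA exceeds DC, Soft Capping applies to the LPAR
      return min(r4ha, defined_capacity), r4ha > defined_capacity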

The primary disadvantage of Soft Capping is that the Defined Capacity setting is somewhat static; it is manually defined once, maybe several times a day for workloads with distinct characteristics (E.g. On-Line, Batch, et al), but dynamic DC management based upon inter-related LPAR behaviour is at best, evolving. The primary considerations for Soft Capping are:

  • An LPAR can only be managed via Soft Capping or Hard Capping; not both
  • DC rules only apply to General Purpose CP’s (Hard Capping for Specialty Engines is allowed)
  • An LPAR must be defined with shared CP’s (dedicated CP’s not allowed)
  • All LPAR Sub-Capacity eligible products have the same MSU capacity (I.E. DC)

Soft Capping is relatively simple to implement and typically generates MLC software costs savings, with minimal impact.

Group Capacity Limit (GCL) provides an extension to the Defined Capacity (DC) Soft Capping function. GCL allows an MSU limit for total usage of all group LPARs, with a granularity of 1 MSU. The primary considerations for GCL are:

  • Works with DC LPAR capacity settings
  • Target share does not exceed DC
  • Works with IRD
  • Multiple CEC groups allowed; but an LPAR may only be defined to one group
  • An LPAR must be defined with shared CP’s, with WAIT COMPLETION = NO specification

It is possible to combine IRD weight management with the GCL function. Based on installation policy, IRD can modify the relative weight setting to redistribute capacity resource within an LPAR cluster.

However, IRD weight management is suspended when GCL is in effect, because LPAR resource entitlement within a capacity group can be (I.E. pre zxC12) derived from the current weight. Hence the LPAR might be allocated an unacceptably low weight setting, generating a low GCL entitlement.

GCL also allows for MSU to be shared between LPARs in a group, where one LPAR would be a donor and another would be a receiver. Therefore the customer classifies their LPARs accordingly and when a high-priority LPAR requires additional MSU resource, it will be allocated from a lower priority LPAR, if available. This provides a modicum of flexibility, but by definition, peak workloads are not predictable and typically require a significantly higher amount of MSU for a short time period. Typically this requirement will not be satisfied with the GCL function.

Soft Capping techniques, either at the individual (DC) or group (GCL) level deliver cost saving benefit, but a fine granularity of management is required to balance cost saving versus associated performance considerations. The primary challenges associated with Soft Capping are its interactions with workload characteristics and an inability to dynamically manage MSU allocation, in-line with the R4HA. Put another way, the R4HA is derived from 48*5 Minute samples, whereas DC and GCL settings are typically defined on an infrequent (E.g. Monthly or longer) basis.

As z/OS evolves, further in-built function is available to manage MSU capacity. zSeries Capacity Provisioning Manager (CPM) is designed to simplify the management of temporary capacity, defined capacity and group capacity. The scope of z/OS Capacity Provisioning is to address capacity requirements for relatively short term workload fluctuations for which On/Off Capacity on Demand or Soft Capping changes are applicable. CPM is not a replacement for the customer derived Capacity Management process. Capacity Provisioning should not be used for providing additional capacity to systems that have Hard Capping (initial capping or absolute capping) defined.

With the introduction of z/OS 2.1, CPM functionality incorporates Soft Capping support via the DC and GCL functions. CPM functions from a set of installation defined policies and parameters, where the CPM server receives three types of input:

  • Domain Configuration: defines the CPCs and z/OS systems to be managed
  • Policy: contains the information as to which work is eligible, under which conditions and during which timeframes, and what capacity increases apply for constrained workloads
  • Parameter: contains environment descriptors (E.g. UNIX Environment, Installation Options, et al)

From a customer viewpoint, policy definition allows them to define the provision of CPU resource, illustrated by the sketch after this list:

  • Date & Time: When capacity provisioning is allowed
  • Workload: Which service class qualifies for provisioning?
  • CPU Resource: How much additional MSU capacity can be allocated?
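Purely as an illustration of the shape of such a policy (the actual CPM policy format is defined via IBM’s own tooling and differs from this hypothetical sketch):

  provisioning_policy = {  # hypothetical structure, for illustration only
      "time_condition": {"days": "MON-FRI", "start": "08:00", "end": "18:00"},
      "workload_condition": {"service_class": "ONLHIGH",
                             "pi_activation": 1.8,    # PI must exceed this
                             "pi_deactivation": 1.2}, # before capacity returns
      "capacity_scope": {"max_additional_msu": 100},
  }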

CPM provides more function when compared with the Defined Capacity and Group Capacity Limit Soft Capping techniques, allowing for time schedules to be defined, workloads to be categorized and MSU resource to be allocated in a dynamic and granular manner.

A modicum of complexity exists when considering the arguably most important factor for CPM policy definition, namely the Performance Index (PI); the sketch following this list illustrates the activation/deactivation logic:

  • Activation: PI of service class periods must exceed the activation threshold for a specified duration, before the work is considered as eligible.
  • Deactivation: PI of service class periods must fall below the deactivation threshold for a specified duration, before the work is considered as ineligible.
  • Null: If no workload condition is specified a scheduled activation/deactivation is performed; with full capacity as specified in the rule scope, unconditionally at the start and end times of the time condition.
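The activation/deactivation behaviour amounts to threshold logic with hysteresis; a minimal sketch, assuming hypothetical thresholds and a duration expressed as a count of consecutive observations:

  from collections import deque

  class PiProvisioningRule:
      def __init__(self, pi_activate=1.8, pi_deactivate=1.2, duration=3):
          self.pi_activate = pi_activate
          self.pi_deactivate = pi_deactivate
          self.history = deque(maxlen=duration)  # consecutive PI readings
          self.eligible = False

      def observe(self, pi):
          self.history.append(pi)
          if len(self.history) == self.history.maxlen:
              if all(p > self.pi_activate for p in self.history):
                  self.eligible = True   # sustained goal misses: provision
              elif all(p < self.pi_deactivate for p in self.history):
                  self.eligible = False  # sustained relief: deprovision
          return self.eligible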

For workload based provisioning it is a necessary condition that the current system Performance Index exceeds the specified customer policy PI metric. One must draw one’s own conclusions regarding PI criteria settings, but to date, they’re largely based on arguably complex mathematical formulae, which is perhaps not practicable, especially from a simple management viewpoint.

With the requisite hardware (I.E. zxC12+) and Operating System levels (I.E. z/OS 1.13+), CPM provides extra functionality for the customer to implement granular Soft Capping techniques to balance cost and performance. When compared with Defined Capacity and Group Capacity Limit techniques, CPM delivers increased granularity for managing capacity dynamically, based on customer derived policies, recognizing time slots, workloads and MSU resource increases accordingly.

From a big picture viewpoint, without doubt, we must recognize the fundamental role that WLM plays in Soft Capping. Quite simply, the 48*5 Minute MSU resource samples dictate whether a workload will be eligible for Soft Capping or not and from a cumulative viewpoint, these MSU samples dictate the R4HA metric. Based on this observation, efficient and functional Soft Capping must be workload based (I.E. WLM Service Class), dynamic and operational on a 24*7 basis, because workload peaks are never predictable, balancing MSU resource accordingly. Of course, simplicity of implementation and management, supplemented by meaningful reporting, is mandatory.

Once again, observing the 48*5 Minute MSU resource samples from a R4HA viewpoint, if a workload were to increase MSU usage by an average of 50% for 1 Hour (I.E. 12 Samples), and decrease MSU usage by an average of 20% for 2.5 Hours (I.E. 30 Samples), the R4HA would remain static, because the +50% increase over 12 samples exactly offsets the -20% decrease over 30 samples. Therefore an optimum Soft Capping technique needs to recognize WLM service class requirements, reacting in a timely manner, increasing and decreasing MSU usage, to safeguard workload performance for Time Critical workloads, while optimizing SCRT MLC cost.
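A three-line check confirms the arithmetic, assuming a hypothetical 100 MSU baseline:

  baseline = 100.0  # hypothetical MSU baseline
  samples = [baseline * 1.5] * 12 + [baseline * 0.8] * 30 + [baseline] * 6
  r4ha = sum(samples) / 48  # 100.0 - identical to a flat baseline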

zDynaCap delivers automated capacity balancing within CPCs, Capacity Groups or Groups of LPARs. Central to zDynaCap are the predefined balancing policies. Within these balancing policies, users define their MSU ranges of Groups and LPARs and also the priorities of the associated LPAR Workload. zDynaCap continually monitors overall usage and compares this to the available capacity and the user defined MSU balancing policies. For example, should a high priority workload on one LPAR not get enough capacity, while a low priority workload on another within the group gets too much capacity, available MSU capacity is distributed according to customer derived balancing policies. Only if there is no leftover capacity to be rescheduled within the defined Group, and if the high or medium priority workload will be slowed down, will zDynaCap add MSU.
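The kind of balancing pass described can be sketched as follows; zDynaCap’s actual policies and algorithms are the vendor’s own, so the names and structures here are hypothetical:

  def balance_group(lpars):
      """lpars: [{'priority': 1 (high) or 2 (low), 'msu_used': n,
      'msu_cap': n}, ...]; move spare MSU from low priority donors to
      constrained high priority receivers; the group total is unchanged."""
      receivers = [l for l in lpars
                   if l["priority"] == 1 and l["msu_used"] >= l["msu_cap"]]
      donors = [l for l in lpars
                if l["priority"] > 1 and l["msu_cap"] > l["msu_used"]]
      for receiver in receivers:
          for donor in donors:
              spare = donor["msu_cap"] - donor["msu_used"]
              donor["msu_cap"] -= spare     # donate unused headroom
              receiver["msu_cap"] += spare  # receive it within the group
      return lpars

Only when no such redistribution is possible within the group, and a high or medium priority workload would otherwise be slowed down, would additional MSU be added, per the policy.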

With zDynaCap Capacity Balancing, available MSU capacity is balanced within LPAR groups, safeguarding that during peak time the mission critical workload is processed as per business expectations (E.g. SLA/KPI) for the lowest possible MLC cost.

In conclusion, given the significance of IBM MLC software (E.g. z/OS, CICS, DB2, IMS, WebSphere MQ, et al) costs, arguably every Mainframe environment should deploy a capping technique for cost optimization. Hard Capping might work for some, but in all likelihood, Soft Capping is the primary choice for most Mainframe environments. For sure, IBM have delivered several Soft Capping techniques, with varying levels of function and granularity, namely Defined Capacity (DC), Group Capacity Limit (GCL) and the zSeries Capacity Provisioning Manager (CPM). It was forever thus; the ISV community exists because they specialize, architecting and delivering specialized solutions, and zDynaCap is such a solution, recognizing the fundamental rules of IBM Mainframe Soft Capping, namely the underlying WLM and R4HA foundation.

Mainframe ISV Software: Is Continuous Product Improvement Always Evident?

Ken Venturi once said “I don’t believe you have to be better than everybody else.  I believe you have to be better than you ever thought you could be”.

Wouldn’t it be great if every CTO and/or Product manager had this same philosophy for their Mainframe software solution?  One such example I have experienced over the years is (E)JES from Phoenix Software International (PSI).  Of course it’s really important to have Day 1 support for the latest release of Operating System, z/OS 2.1 being the latest example, but what about actually exploiting the latest functionality available with the latest zSeries Mainframe Enterprise Servers and z/OS Operating Systems?

To drive maximum bang from your buck, optimal performance and robust cost optimization can only be possible by recognizing and exploiting the latest Mainframe function ASAP, as and when appropriate.  Furthermore, listening to your customers, analysing their feedback, actively participating in User Organizations such as SHARE, and so on, will all help in continuous product development and innovation.

Here are some of the reasons why (E)JES has succeeded over a 30+ year period, recognizing and exploiting new z/OS function, as and when the updated z/OS is released for General Availability (GA).  Even today, with Version 5.3 supporting z/OS 2.1 as of day 1, (E)JES continues to offer value-added function for the seasoned, inexperienced and in fact, all IBM Mainframe technicians:

  • 64-bit performance optimizations (I.E. MEMLIMIT: above-the-bar) for both (E)JES client and server components, safeguarding minimal z/OS resource usage.
  • Nearly all (E)JES JES subsystem processing routines are eligible for zIIP redirection, delivering software cost savings for all (E)JES users.  Sub-Capacity System z processor users experience improved (E)JES performance because zIIP engines always run at full speed.  This behaviour differs from that of General Purpose CPs, “throttled” with Sub-Capacity deployments.
  • (E)JES code executes faster via its inbuilt High Performance Routine (HPR) facility, specifically developed to accelerate access to data in JES control blocks.  HPRs have a shorter instruction path length than previous coding techniques, avoiding delays in modern zSeries CPU instruction pipelines.
  • If High Performance FICON (zHPF) is available, (E)JES uses Transport Mode channel programs for JES Spool I/O.  When zHPF is not available, or when a CAS server performs I/O against the global data set, (E)JES uses the highest-performing Command Mode channel programs currently available.  These channel programs perform I/O significantly faster than “ordinary” channel programs.
  • The use of 24-bit (captured) UCBs puts a strain on the 24-bit virtual storage resource.  The use of ordinary (non-extended) TIOT entries puts a limit on the total number of allocations that can exist simultaneously in an address space.  (E)JES supports and uses 31-bit (uncaptured) UCBs and the extended TIOT (XTIOT) function (I.E. NON_VSAM_XTIOT=YES in DEVSUPxx PARMLIB).
  • (E)JES supports placement of JES spool data sets in the cylinder-managed area of an Extended Address Volume (EAV).  Of course, as of z/OS 1.12, EAV increases 3390 DASD capacity to ~1 TB.
  • (E)JES Pattern Utility Matching uses the SRST hardware instruction.  Empirical measurements show this technique is far faster on modern System z processors than alternatives such as the TRT instruction or “brute force” matching techniques using CLI/CLC.

One of the primary benefits of upgrading IBM z/OS software is the overall system performance benefit and associated cost reduction, but of course, IBM can only deliver the function and ability, while it’s incumbent upon the ISV community to upgrade their software products accordingly.  A key goal for any good ISV software product is to try to provide a value-add in the area of performance.  This has been one of the primary areas of focus for (E)JES since its introduction in 1978. 

Most spool display and management products tend to rely on the most resource-intensive interface available, namely the JES subsystem provided SSI 80.  (E)JES benchmarking tests against the most readily-available JES SSI 80 exploiters demonstrate significant CPU savings when deploying (E)JES.

Software products also need to deliver continuous improvements with regard to usability, presentation and in-built function, increasing user and system administrator productivity.  Without doubt, optimization encompasses not just hardware, but software, services, systems management disciplines and “best practices” that tie it all together.  Here are some of the usability enhancements that (E)JES has incorporated:

  • ISPF users running a 3270 emulator on a programmable workstation can now search IBM Eclipse-based InfoCenters via (E)JES.  Although (E)JES fully supports BookManager format documentation, BookManager READ/MVS is now obsolete; beginning with z/OS 2.1, BookManager softcopy books are no longer delivered by IBM.  IBM has stated that InfoCenters, and eventually KnowledgeCenters, are their strategic direction for online documentation.
  • (E)JES Web is a new, browser-based interface to (E)JES.  The associated RESTful API delivering this web enabled technology provides a framework for the creation of Eclipse plug-ins, mobile applications, and other web services clients.  This facility will provide a “rapid learning” type capability for (E)JES users, both new and old, who might be uncomfortable navigating traditional 3270 interfaces.
  • (E)JES provides a Java Application Programming Interface (API), complementing other in-built APIs for REXX and procedural languages.  By using an (E)JES API, a user can harness the versatility of their preferred programming language to interface and interact with (E)JES.  This support provides an interface to deliver nearly all of the capabilities available to an interactive (E)JES user.
  • (E)JES incorporates context sensitive help function, with point-and-shoot/pop-up dialogs, helping educate users on (E)JES, JES and z/OS while they work.  Users can get pop-up explanations of columns, input choices for unprotected fields, and a list of line commands.  Smart pop-ups explain the contents of certain columns, such as system abend codes.

The latest (E)JES Release Information Manual eloquently details the product enhancements over the last 5 releases or so, providing a good Product Roadmap reference point.

So, whether the ISV software product you deploy has been available for several years or several decades, do you safeguard maximum business benefit for optimal cost by considering:

  • Does the ISV deploy the latest zSeries server (I.E. zBC12, zEC12) for software interoperability and full hardware function exploitation; or an emulation (I.E. zPDT) technique?
  • Does the ISV deliver value-added z/OS related function on Day 1 or even within a year of the latest z/OS release?
  • Does the ISV deliver meaningful function to assist your users deploy said function, while simplifying environment management for system administrators?
  • Does your ISV product optimize cost, with Sub-Capacity pricing in MSU increments and aggregated MSU costs for your entire zSeries Mainframe environment, as opposed to specific workloads (E.g. CPC’s, LPAR’s, et al)?
  • Does your ISV product optimize cost by offloading the majority of its CPU function to zIIP specialty engines, which run at maximum speed, and where software “runs for free”?

Of course, only you can ask and potentially answer these questions during your day-to-day activities of maintaining currency and optimal performance for your Mainframe software portfolio.

Sometimes the hardest questions anybody can ask are the questions they ask themselves, which are never rhetorical questions!  Extracted verbatim from the latest (E)JES Release Information Manual:

Team (E)JES took advantage of the Phoenix Software International zHISR performance analysis product to discover performance “hot spots” in the (E)JES product.  Sometimes the simplest, least conspicuous piece of code turns out to be a major CPU contributor.  See below for some of the most embarrassing “surprise” hot spots we discovered using zHISR in a z/OS 2.1 LPAR:

  • Over 30% of the CPU used during a Spool Data Browse FIND operation, against a multi-million-line SYSOUT in JES2, turned out to be code that was clearing a record buffer to blanks using MVCL.  This clearing code was eliminated and some minor adjustments were made in other code to compensate for this change.
  • 27% of the CPU used to produce the Activity display in JES2 turned out to be in a routine that manages an internal resource called the “Job Positions Table.”  The algorithm was improved (to work more like its JES3 counterpart) and that routine is no longer a significant CPU contributor.
  • 9% of (E)JES session start-up was a 26-year-old “brute force” prime number generator used to compute the size of a hash table.  That code was totally reworked and now accounts for approximately 0.02% of session start-up CPU.
  • A 6% performance penalty was observed when sorting a tabular display with a moderate number of rows. The hot spot turned out to be the code that cleared the work area for the sort service to zeros (another MVCL). This overhead was reduced to 0.04%.

Mea culpa and humility, never a bad thing, but you have to be honest with yourself and ask yourself the right questions!  So going back full circle and quoting Ken Venturi once again, “I don’t believe you have to be better than everybody else.  I believe you have to be better than you ever thought you could be”.  You must draw your own conclusions as to whether such an observation applies to the (E)JES team at Phoenix Software International (PSI)…

Why not ask them yourself?  Ed Jaffe, the (E)JES CTO will be available at the forthcoming UK GSE Annual Conference, 5-6 November 2013, speaking about (E)JES System Management Software: More With Less For Less, For The z/OS Mainframe and z/OS 2.1 User Experiences.