Optimize Your System z ROI with z Operational Insights (zOI)

Hopefully all System z users are aware of the Monthly License Charge (MLC) pricing mechanisms, where a recurring charge applies each month.  This charge includes product usage rights and IBM product support.  If only it were that simple!  We then encounter the “Alphabet Soup” of acronyms related to the various, and arguably too numerous, MLC pricing mechanism options.  Some might say that 13 is an unlucky number and in this case, a System z pricing specialist would need to know and understand each of the 13 pricing mechanisms in depth to safeguard the lowest software pricing for their organization!  Perhaps we could apply the unlucky word to such a resource.  In alphabetical order, the 13 MLC pricing options are AWLC, AEWLC, CMLC, EWLC, MWLC, MzNALC, PSLC, SALC, S/390 Usage Pricing, ULC, WLC, zELC and zNALC!  These mechanisms are commercial considerations, but what about the technical perspective?

Of course, System z Mainframe CPU resource usage is measured in MSU metrics, where Sub-Capacity pricing allows System z Mainframe users to submit SCRT reports, covering Monthly License Charge (MLC) products and IPLA software maintenance, namely Subscription and Support (S&S).  We then must consider the Rolling 4-Hour Average (R4HA) and how best to optimize MSU usage accordingly.  At this juncture, we also need to consider how we measure the R4HA itself, in terms of performance tuning, so we can minimize R4HA MSU usage and optimize cost, without impacting Production, or indeed overall system, performance.
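
As a simple illustration of the R4HA calculation, here is a minimal Python sketch, assuming evenly spaced interval MSU samples; real SCRT processing works from SMF data and is considerably more involved, so treat the numbers as illustrative placeholders:

```python
from collections import deque

def rolling_4hr_average(msu_samples, interval_minutes=5):
    """Compute the Rolling 4-Hour Average (R4HA) over interval MSU samples.

    msu_samples: MSU consumption observed in each interval (e.g. derived from SMF data).
    interval_minutes: sample interval length; 4 hours / interval gives the window size.
    """
    window_size = (4 * 60) // interval_minutes
    window = deque(maxlen=window_size)
    r4ha = []
    for sample in msu_samples:
        window.append(sample)
        r4ha.append(sum(window) / len(window))
    return r4ha

# The monthly peak R4HA is the figure that drives Sub-Capacity MLC charges,
# which is why shaving MSU inside the peak window matters so much.
samples = [420, 455, 610, 580, 530, 495, 470, 450]  # illustrative MSU values only
peak = max(rolling_4hr_average(samples))
print(f"Peak R4HA: {peak:.0f} MSU")
```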

Finally, we then have to consider that WLC has a ~17-year longevity, having been announced in October 2000, and in that time IBM have also introduced hardware features to assist in MSU optimization.  These hardware features include zIIP, zAAP and IFL, while there are other influencing factors, such as HiperDispatch, WLM and Relative Nest Intensity (RNI), to name but a few!  The Alphabet Soup continues…

In summary, since the introduction of WLC in Q4 2000, the challenge for the System z user is significant.  They must collect the requisite instrumentation data, perform predictive modelling and fully comprehend the impact of the current 13 MLC pricing mechanisms and their interaction with the ever-evolving System z CPU chip!  In the absence of a simple-to-use reporting capability from IBM, there is a plethora of 3rd party ISV solutions, which generally are overly complex and require numerous products, more often than not from several ISVs.  These software solutions process the instrumentation data, generating the requisite metrics that allow an informed decision-making process.

Bottom Line: This is way too complex!  Are there any green shoots of an alternative option?  Are there any easy-to-use, data analytics based options for reducing MSU usage and optimizing CPU resources, which can then be incorporated into any WLC/MLC pricing considerations?

In February 2016 IBM launched their z Operational Insights (zOI) offering, as a new open beta cloud-based service that analyses your System z monitoring data.  The zOI objective is to simplify the identification of System z inefficiencies, while identifying savings options with associated implementation recommendations. At this juncture, zOI still has a free edition available, but as of September 2016, it also has a full paid version with additional functionality.

Currently zOI is limited to the CICS subsystem, incorporating the following functions:

  • CICS Abend Analysis Report: Highlights the top 10 abend types and the top 10 most frequently abending transactions for your CICS workload, from a frequency viewpoint.  The resulting output classifies which CICS transactions might abend and, as a consequence, waste processor time.  Of course, the System z Mainframe user will still have to fix the underlying reason for the CICS abend!
  • CICS Java Offload Report: Highlights any transaction processing workload eligible for IBM z Systems Integrated Information Processor (zIIP) offload.  The resulting output delivers three categories for consideration: #1, the % of existing workload that is eligible for offload but ran on a General Purpose CP; #2, the % of workload already being offloaded to zIIP; #3, the % of workload that cannot be transferred to a zIIP.
  • CICS Threadsafe Report: Highlights threadsafe eligible CICS transactions, calculating the CICS Quasi-Reentrant Task Control Block (QR TCB) switch count per transaction and the associated CPU cost.  The resulting output identifies potential CPU savings from making programs threadsafe, with the associated CICS subsystem changes (a simple savings estimate is sketched after this list).
  • CICS Region CPU Constraint: Highlights CPU constrained regions.  CPU constrained CICS regions suffer reduced performance, lower throughput and slower transaction response, impacting business performance (E.g. SLA, KPI).  From a high-level viewpoint, the resulting output classifies CICS Region performance to identify whether regions are LPAR or QR constrained, while suggesting possible remedial actions.
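
To make the CPU cost arithmetic behind the Threadsafe Report tangible, here is a minimal Python sketch; the per-switch CPU cost and transaction volumes are hypothetical placeholders, whereas the real report derives its figures from CICS monitoring data:

```python
def threadsafe_saving_estimate(txn_per_day, qr_tcb_switches_per_txn,
                               cpu_seconds_per_switch=0.000014):
    """Rough daily CPU saving if a transaction is made threadsafe.

    qr_tcb_switches_per_txn: observed QR TCB switch count per transaction.
    cpu_seconds_per_switch: assumed CPU cost of one TCB switch (illustrative only).
    """
    return txn_per_day * qr_tcb_switches_per_txn * cpu_seconds_per_switch

# Hypothetical example: 2 million transactions/day, 6 avoidable switches each.
saving = threadsafe_saving_estimate(2_000_000, 6)
print(f"Estimated CPU saving: {saving:.1f} CPU seconds/day")
```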

Clearly the potential of zOI is encouraging, being an easy-to-use solution that analyses instrumentation data, classifies the best options on a quick-win basis and provides recommendations for implementation.  Having been a recent user of this new technology myself, I would encourage each and every System z Mainframe user to try this no-risk IBM z Operational Insights (zOI) software offering.

The evolution for all System z performance analysis software solutions is to build on the comprehensive analysis solutions that have evolved over the last ~20 years, while incorporating intelligent analytics to classify data in terms of “Biggest Impact” and identify “Potential Savings”, evolving MIPS measurement into BIPS (Biggest Impact Potential Savings) improvements!

IBM have also introduced a framework of IT Operations Analytics Solutions for z Systems.  This suite of interconnected products includes zOI, IBM Operations Analytics for z Systems, IBM Common Data Provider for z/OS and IBM Advanced Workload Analysis Reporter (IBM zAware).  Of course, if we lived in a perfect world, without a ~20 year MLC and WLC longevity, this might be the foundation for all of our System z CPU resource usage analysis.  Clearly this is not the case for the majority of System z Mainframe customers, but zOI does offer something different, with zero impact, from both a system overhead and an existing software interoperability viewpoint.

Bottom Line: Optimize Your System z ROI via zOI, Evolving From MIPS Measurement to BIPS Improvements!

z/VM: The Most Flexible System z Operating System?

When considering IBM System z Operating Systems, typically z/OS is considered to be the flagship product, delivering best-of-breed features, including but not limited to performance, reliability, availability, security and capacity.  Therefore it is easy to overlook the flexible virtualization capabilities of z/VM, delivering the architectural foundation for the increasingly attractive LinuxONE offering.  Quite simply, the fundamental strength of z/VM is the ability for hundreds if not thousands of virtual machines to share system resources with high levels of resource utilization.  The recent release of z/VM V6.4 provides even greater levels of scalability, security, resource optimization and efficiency to create opportunities for cost savings, while providing a robust foundation for cloud computing on z Systems servers.

Major technical highlights of z/VM 6.4 include:

  • Simultaneous MultiThreading (SMT) technology extends per-processor core capacity growth beyond single-thread performance for Linux on z Systems running on an IBM Integrated Facility for Linux (IFL) specialty engine on a z13, z13s or LinuxONE server.
  • Enhanced Real & Guest Virtual Memory Support.  The maximum amount of real storage supported by z/VM increases from 1 to 2 TB, whereas the maximum supported virtual memory for a single guest remains at 1 TB.  Maintaining the same virtual-to-real memory ratio, doubling the real memory used doubles the active virtual memory that can be used effectively.  This virtual memory can be sourced from an increased number of virtual machines and/or larger virtual machines, delivering greater leverage of white space.
  • Surplus CPU Power Distribution Improvement.  Virtual machines not utilizing all of their entitled CPU power, determined by their share setting, generate “surplus CPU power”.  This surplus CPU resource can be distributed to other virtual machines in proportion to their share settings, managed independently across virtual machines for each processor type, namely General Purpose (GP), zIIP, IFL, et al (see the sketch after this list).
  • Guest Large Page Support.  z/VM 6.4 now includes support for Enhanced Dynamic Address Translation (DAT), allowing a guest machine to exploit large (1 MB) pages.  Larger page sizes decrease the amount of guest memory needed for DAT tables, therefore decreasing the overhead required to perform address translation.  In all cases, guest memory is mapped into 4 KB pages at the host level.
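
To illustrate the share-proportional distribution of surplus CPU power mentioned above, here is a minimal Python sketch, assuming simple relative share values; the actual z/VM scheduler is considerably more sophisticated:

```python
def distribute_surplus(surplus_cpu, consumers):
    """Distribute surplus CPU power to virtual machines in proportion to their
    relative share settings, managed per processor type (GP, zIIP, IFL, ...).

    consumers: {virtual_machine: relative_share} for guests able to use more CPU.
    """
    total_share = sum(consumers.values())
    return {vm: surplus_cpu * share / total_share
            for vm, share in consumers.items()}

# Hypothetical IFL example: 1.5 engines of surplus, three guests wanting more CPU.
print(distribute_surplus(1.5, {"LINUX01": 100, "LINUX02": 200, "LINUX03": 100}))
# -> LINUX02 receives twice the surplus of each of the other two guests.
```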

From a Linux environment viewpoint, z/VM V6.4 is a supported environment using IBM Dynamic Partition Manager for Linux-only systems with SCSI storage.  This simplifies system administration tasks, providing a more positive experience for those with limited System z Mainframe administration skills.  IBM Wave Version 1 Release 2 is now included in z/VM V6.4 as a priced feature, simplifying the task of administering a z/VM environment.  Using Dynamic Partition Manager, an inexperienced z/VM technician can create a z/VM partition in ~10 minutes!

Supporting today’s agile application development and hybrid cloud implementations, z/VM and LinuxONE virtual servers can be natively managed using OpenStack open cloud architecture-based interfaces, via IBM OpenStack for z Systems.  OpenStack is an Infrastructure as a Service (IaaS) cloud computing open source project, managed by the OpenStack Foundation.  With the adoption of OpenStack as part of the IBM cloud strategy, z/VM drivers provide OpenStack enablement for z/VM virtual machines running Linux on z Systems and LinuxONE.  Open standards such as OpenStack enable enterprises to be more agile, resolving potential issues such as vendor lock-in, technical expert recruitment, long application development cycles and security challenges.
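
As a flavour of what OpenStack enablement means in practice, here is a minimal Python sketch using the openstacksdk library; the cloud name, image, flavor and network values are hypothetical placeholders that would, in a real deployment, come from your z/VM OpenStack (CMA) configuration:

```python
import openstack

# Assumes a "zvm" cloud entry in clouds.yaml pointing at the z/VM OpenStack
# enablement endpoints; all resource names below are illustrative only.
conn = openstack.connect(cloud="zvm")

server = conn.create_server(
    name="linux-guest-01",
    image="sles12-s390x",   # hypothetical Linux on z Systems image
    flavor="m1.medium",     # hypothetical flavor mapped to z/VM resources
    network="guest-lan",    # hypothetical guest network
    wait=True,
)
print(server.status)
```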

The next evolution of z/VM cloud enablement technology is the OpenStack Liberty based Cloud Management Appliance (CMA), available for z/VM 6.3 and 6.4.  z/VM installations wanting to deploy cloud-based solutions beyond Cloud Manager with OpenStack for z Systems should utilize the cloud enablement support provided by the z/VM OpenStack Liberty based CMA.  This CMA replaces the IBM Cloud Manager with OpenStack for System z solution, withdrawn from marketing in June 2016.

The z/VM hypervisor extends the capabilities of z Systems and LinuxONE environments from the standpoint of sharing hardware assets, virtualization facilities and communication resources.  In conjunction with IBM Wave, z/VM makes it easier to derive maximum value from large-scale virtual server hosting on z Systems and LinuxONE.  These benefits include software and personnel savings, operational efficiency, power savings and optimal qualities of service.  The z/VM virtualization technology is designed to enable organizations to run hundreds to thousands of Linux servers on a single System z Mainframe footprint, alongside other System z Operating Systems, such as z/OS and z/VSE, or as a large-scale enterprise LinuxONE server solution.

Advanced virtualization features like multisystem virtualization and live guest relocation with z Systems, LinuxONE, z/VM, and Linux on z Systems or LinuxONE help to provide an efficient infrastructure for deploying private clouds to support workloads that scale both horizontally and vertically at a low total cost of ownership.

Although some might consider z/OS to be the flagship IBM System z Mainframe Operating System, arguably z/VM is the industry standard for optimal resource virtualization for numerous Operating System deployments.

z13: A Digital Business Ready Solution?

As per usual for a next generation zSeries Server release, IBM announced their latest evolution on 13 January 2015, namely the z13.  IBM describe this platform as the most powerful and secure system ever built:

  • First system able to process 2.5 billion transactions per day, built for mobile economy
  • Makes possible real-time encryption on all mobile transactions at scale
  • First mainframe system with embedded analytics, providing real-time transaction insights 17X faster than comparable competitive systems, at a fraction of the cost

At first glance, feeds and speeds generally don’t enthuse the audience, but if we dig deeper and acknowledge other recent IBM developments incorporating Apple, Twitter and Data Analytics announcements, we perhaps can draw some better business-facing conclusions. IBM have a clearly defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative, seemingly based upon the IDC 3rd platform defined as Social, Mobile, Analytics & Cloud (SMAC).

Industry analysts predict that by 2017 SMAC (CAMSS) expenditure will account for 25%+ of total enterprise software market revenue, doubling from ~12% in 2012.  In simple terms, this new expenditure opportunity represents $100+ Billion of revenue.  We can imagine that all major ISVs will want their share of this market…

Whichever classification you choose, IBM CAMSS or IDC SMAC, IT infrastructures and associated investment currently are and certainly will be heavily influenced by this new world computing paradigm.  Like it or not, an ability to perform a transaction anywhere (Mobile), keeping everything simple and networked (Social Media), real-time prediction of future customer requirements (Analytics), available anywhere and for an alleged fraction of the cost (Cloud), makes sense for the 21st Century business.  Ignore this new technology evolution at your peril, as it will impact each and every area of the IT enterprise and associated resources, primarily software and supporting hardware.

Did you notice the difference between the IBM classification and IDC? IDC have not considered Security to be a consideration factor worthy of acronym (SMAC) inclusion. In today’s world of cybersecurity, that might be somewhat of an oversight, but we must assume that IDC consider cybersecurity to be a consideration for all of the Analytics, Cloud, Mobile & Social aspects, which of course it is!

If we consider the relative merits of technology platforms from a security viewpoint, the IBM z13 delivers EAL5+ security certification, whereas other non-Mainframe platforms can only currently claim EAL4+ certification.

It is estimated that 55%+ of enterprise (mission critical) transactions are processed by the IBM Mainframe, but this is based on pre-mobile workloads.  It therefore makes commercial sense for IBM to safeguard that their flagship platform not only maintains the existing IBM Mainframe customer base, but also captures new, mobile-centric workloads.

Having considered the business requirements for today’s IT business, let’s now classify the new features of the z13 platform:

  • Up to 40% more total system capacity compared to the zEC12.
  • Up to 10 terabytes (TB) of available Redundant Array of Independent Memory (RAIM) real memory per server.
  • Cryptographic performance improvements with new Crypto Express5S.
  • Economies of scale with simultaneous multithreading delivering more throughput for Linux and zIIP-eligible workloads.
  • Improved performance of complex mathematical models, perfect for analytics processing, with Single Instruction Multiple Data (SIMD).
  • IBM zAware cutting-edge pattern recognition analytics for fast insight into system health extended to Linux on z Systems.
  • A reduction in elapsed time for I/O-bound batch jobs with new FICON Express16S versus FICON Express8S.
  • Support for larger memory configurations planned to be supported on z/OS systems, which can be used to improve transaction response times, lower CPU costs, simplify capacity planning and ease deploying memory-intensive workloads. (The IBM z13 offers up to 10 TB memory.)
  • I/O service time improvement when writing data remotely using the new zHPF Extended Distance II.
  • Support for up to 256 coupling CHPIDs, which provides enhanced connectivity and scalability for a growing number of coupling channel types.
  • IBM Integrated Coupling Adapter (ICA SR), which offers greater short reach coupling connectivity than existing link technologies and enables greater overall coupling connectivity per IBM z13 than prior server generations.
  • Capability to extend z/OS workload management policies into the SAN fabric.
  • New rack-mounted Hardware Management Console (HMC), helping to save space in the data center.
  • Non-raised floor option, offering flexible possibilities for the data center.
  • Optional water cooling, providing the ability to cool systems with user-chilled water.
  • Optional high-voltage dc power, which can help IBM z Systems clients save on their power bills.
  • Optional top exit power and I/O cabling designed to provide increased flexibility.
  • New IBM z BladeCenter Extension (zBX) Model 004 in support of heterogeneous resources managed by IBM z Unified Resource Manager.

As we all know, Moore’s Law had to end sometime soon and this is true for System z CPU chips.  The zEC12 CPU was often claimed to be the fastest commercial processor, with a 32 nm core and a 5.5 GHz rating.  The z13 chip runs a 22 nm core at 5 GHz, at first glance ~10% slower than the zEC12.  Nevertheless, the z13 chip delivers a ~10% performance increase, due to advances in core design, with better branch prediction and pipelining in the core.  Noteworthy is the slightly slower clock speed of the z13 chip, reducing heat output and probably signifying that ~5 GHz is the ceiling for CPU chips in the near future.

However, for the z13, the doubling of performance still applies for many other resources:

  • Cryptographic coprocessors performance (~2*)
  • Channel speed (~2*)
  • I/O bandwidth (~2*)
  • Memory/Cache performance (~2*)
  • Memory capacity (~3*)

Once again, classifying these technological advances in terms of mobile business, the z13 delivers real-time encryption of mobile transactions, protecting transaction data, delivering consistent response times for a quality customer experience. Overall, IBM claims the z13 delivers a potential for ~36% better response time, ~61% better throughput and ~17% lower cost per mobile transaction.

A major and subtle change introduced with the z13 is Simultaneous MultiThreading (SMT). SMT allows 2 active instruction streams per core, each dynamically sharing the core’s execution resources. SMT will be available in IBM z13 for workloads running on the Integrated Facility for Linux (IFL) and the IBM z Integrated Information Processor (zIIP).

Each software Operating System/Hypervisor has the ability to intelligently drive SMT in a way that is best for its unique requirements. z/OS SMT management consistently drives the cores to high thread density, in an effort to reduce SMT variability and deliver repeatable performance across varying CPU utilization, thus providing more predictable SMT capacity. z/VM SMT management optimizes throughput by spreading a workload over the available cores until it demands the additional SMT capacity.

From a capacity planning and performance measurement viewpoint, just a slight note of caution.  Although the z13 CPU chip delivers increased CPU capacity, the raw speed is slower and there are considerations for SMT.  Bob Rogers, a former IBM staffer, has written a great article on this SMT subject matter, which should be on your reading list!
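
As a capacity planning illustration of that caution, here is a minimal Python sketch; the SMT benefit factor is an assumed placeholder, not an IBM-published figure, and real planning should be based on measured SMF/RMF data:

```python
def smt_effective_capacity(cores, single_thread_capacity, mt_benefit=1.25):
    """Estimate effective capacity of zIIP/IFL cores with SMT-2 enabled.

    single_thread_capacity: capacity of one core running a single thread.
    mt_benefit: assumed throughput gain from running two threads per core;
                an illustrative placeholder only.
    Note: each individual thread runs slower than a dedicated core, so
    per-transaction CPU time may rise even though total throughput grows.
    """
    return cores * single_thread_capacity * mt_benefit

print(smt_effective_capacity(cores=4, single_thread_capacity=500))  # -> 2500.0
```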

In conclusion, the z13 announcement is another step forward for zSeries Mainframes. If you consider this announcement as just another next generation zSeries Mainframe announcement, you’re not treating your business or yourself with the respect they deserve. Instead, please consider this z13 announcement as an evolution from an enterprise solution delivery viewpoint. Primarily, consider the 21st century business keywords, in no particular order, of Analytics, Cloud, Mobile, Social & Security.

IFL – A Cost Efficient zSeries Platform?

In September 2000, IBM introduced the Integrated Facility for Linux (IFL) processor, a specialty engine for, and some might say dedicated to, running the Linux Operating System.  At the time of this announcement, companion software named S/390 Virtual Image Facility for Linux was introduced to assist in the rapid deployment of IFL configurations, especially for non-Mainframe personnel.  However, this product was quickly discontinued, in favour of the standard z/VM Operating System, which is not difficult to learn and can accommodate hundreds if not thousands of zLinux images.

Today, the IFL is still a processor dedicated to Linux workloads on IBM System z servers.  The IFL is supported by z/VM virtualization and the Linux operating system.  The IFL cannot run other IBM operating systems.  The competitively priced IFL processor is a CPU capacity enabler, exclusively for Linux workloads.  Linux deployment (E.g. SUSE & Red Hat) on IFLs can reduce expenses in the areas of operational effort, energy, floor space and especially software.

The IFL provides the following functions and benefits:

  • The IBM Enterprise Linux Server is a dedicated System z Linux server, comprised of only IFL processors
  • No additional IBM software charges for traditional (E.g. z/OS, CICS, DB2, WebSphere, et al) environment
  • Performance improvement for Linux workloads with each successive generation of IFL and System z technology
  • Linux workload on the IFL does not result in increased IBM software charges for traditional System z operating systems and middleware
  • Same functionality as a General Purpose processor on a System z server
  • HiperSockets can be used for communication between Linux images, or Linux and other operating system images on the same System z system
  • z/VM virtualization and most IBM Linux middleware products, plus most vendor software products are priced per processor (core) according to the System z IBM International Program License Agreement (IPLA).  IPLA products have a one-time-charge (OTC) and an annual (optional) maintenance charge, called Subscription & Support
  • Supported by the current z/VM virtualization and IBM Wave for z/VM software versions
  • Always a full capacity processor, independent of the capacity of the other processors in the server
  • Orderable as a System z hardware feature. The number of orderable IFL features varies by the server model and configuration
  • Designed to operate asynchronously with other General Purpose processors
  • Managed by PR/SM in a logical partition with dedicated or shared processors. The implementation of an IFL requires a Logical Partition (LPAR) definition, following the normal LPAR activation procedure; an IFL defined to an LPAR cannot be shared with a general purpose processor.

There will always be the debate as to which processor and associated server type (E.g. x86, POWER, SPARC) is the most cost efficient, but there is no doubt that the ability to accommodate hundreds if not thousands of zLinux instances in one environmentally friendly (E.g. power, cooling, floor space) zServer footprint, with software pricing per core, is worthy of consideration.

Adoption of zLinux has been steady, especially in the emerging territories, where it’s not unusual for zSeries deployments to be totally zLinux (I.E. IBM Enterprise Linux Server) based.  Moreover, the majority of large and traditional IBM Mainframe users (I.E. z/OS) have installed at least one IFL, if only to evaluate the z/VM and zLinux offering.  Many have deployed the IFL and associated zLinux solution for business requirements.

Therefore, if one of the major cost benefit features of the IFL is optimized software costs, can the IFL processor be considered for other workloads, originating from the traditional zSeries (I.E. z/OS) environments?

Proximal Systems Corporation (PSC) is a company with a solution that transparently offloads data processing from IBM Mainframes to Distributed Systems, with an objective of reducing software cost, while maintaining or improving performance.  The company name is derived from the concept of bringing disparate computing systems into close proximity, functionally speaking, providing totally seamless and transparent interoperability.  The result is a unified computing complex within which various tasks can be easily migrated between systems to their most cost efficient operating environment, while still being able to interoperate as if they were all hosted together on the same system.

The PSC Proxy Coupling Technology allows a CPU orientated task to be offloaded from one system to another by means of an associated proxy task, which has an identical interface to the task to be offloaded, but delegates the majority of the processing to an offloaded task on another system.  The primary objectives of this function are the cost savings and/or performance improvements that might be delivered by migrating tasks to systems that are able to execute those tasks more efficiently.

The fact that the proxy task maintains the same interface as the application being replaced is crucial; as many past Mainframe migration projects have failed due to insurmountable interoperability problems between the Mainframe and Distributed Systems servers (I.E. Windows, Linux, UNIX, et al).  Proxy Coupling Technology offers a solution to this long-standing challenge.  In theory, this allows for the transparent offload of a traditional z/OS workload (E.g. Sort) from General Purpose (GP) processors, to a less expensive (E.g. IFL) alternative…

In the first instance, the Proxy Coupling Technology offloads General Purpose CPU workload associated with the z/OS sort (I.E. CA Sort, DFSORT, Syncsort) function to another platform (E.g. IFL).  For IFL based implementations, HiperSockets are utilized to transfer data at memory speeds from the z/OS task to zLinux on the IFL, where the sort operation completes, while the resulting z/OS task and associated data are maintained, as per normal.  From an IFL viewpoint, the Ahlsort software performs the sort operation, a sort solution that maintains compatibility with the majority of z/OS sort functionality (I.E. Control Card Syntax).  Therefore, this is a transparent implementation, where the only consideration is how much CPU capacity is required for the offload function (E.g. IFL, x86).  The benefit is reduced z/OS MSU usage for the sort function, which can be quite significant, as most business data (E.g. Database Offloads, Customer Orientated, et al) is sorted on a daily if not more frequent basis.
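
As a rough illustration of the potential saving, here is a minimal Python sketch with hypothetical numbers; a real study would use SMF data for the sort steps that fall inside the peak Rolling 4-Hour Average window and your actual CP MSU ratings:

```python
def sort_offload_msu_saving(sort_cpu_seconds_in_peak_window, cp_msu_rating,
                            window_seconds=4 * 3600):
    """Approximate reduction in peak R4HA if sort CPU is offloaded to an IFL.

    sort_cpu_seconds_in_peak_window: GP CPU seconds consumed by sort steps
        during the peak 4-hour window (e.g. from SMF job/step records).
    cp_msu_rating: MSU rating of one General Purpose CP on this machine.
    Both values in the example below are illustrative placeholders.
    """
    cp_utilisation = sort_cpu_seconds_in_peak_window / window_seconds
    return cp_utilisation * cp_msu_rating

# Hypothetical example: 2,400 CPU seconds of sort in the peak window on a CP
# rated at 120 MSU -> roughly a 20 MSU reduction in the peak R4HA.
print(f"~{sort_offload_msu_saving(2400, 120):.0f} MSU")
```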

Just as IBM introduced the zAAP on zIIP capability, which allowed some customers to more easily justify a specialty engine (I.E. zIIP), combining workloads to exploit the full capability of the specialty engine; in theory the same ethos applies with the Proxy Coupling Technology.  For the avoidance of doubt, workloads that can be processed on an IFL, such as z/OS sort tasks, can assist in delivering higher Return On Investment (ROI) levels for the IFL, for example:

  • Reduced z/OS WLC MSU usage (I.E. Sort function offload) and associated software costs savings
  • IFL processors run at Full Speed and do not add to traditional workload (I.E. z/OS) software costs
  • Utilize any spare IFL CPU resource, releasing General Purpose CPU resource for other work

In conclusion, the Proxy Coupling Technology offers a proposition that is similar to the IBM philosophy of reducing z/OS software costs via specialty engines.  Seemingly to date, primarily only the zIIP and zAAP specialty engines were available to optimize CPU usage for z/OS workloads.  Offloading CPU cycles and thus MSU workload to IFL makes sense, utilizing a cost efficient and indeed a full power CPU engine, where for cost reasons, maybe the majority of z/OS customers don’t deploy the “highest” derivative of General Purpose CPU engine available to them.  On the face of it, the realm of possibility exists for other workloads to benefit from z/OS to IFL CPU offload, following sort, which seems to make sense as the first workload to utilize this solution.

zIIP Into The Future: Mainframe Specialty Engines Evolution

Sometimes we might lose sight that change can be evolutionary as opposed to revolutionary and this certainly applies to IBM Mainframe specialty engines, for example:

  • 1997: Internal Coupling Facility (ICF)
  • 2000: Integrated Facility for Linux (IFL)
  • 2004: System z Application Assist Processor (zAAP)
  • 2006: System z Integrated Information Processor (zIIP)

To assist with lower IBM software pricing, arguably the ICF offering became the de facto standard for a Mainframe user to be considered “actively coupled”; that is, deploying two or more eligible IBM Mainframes, physically attached via coupling links to a common Coupling Facility (I.E. ICF).

The Integrated Facility for Linux (IFL) is a processor dedicated to Linux workloads on IBM System z servers.  The IFL is supported by the z/VM virtualization software and the Linux operating system.  Most customers have at least dabbled with this technology, while some are using it extensively, primarily for distributed server consolidation.

Somehow the zAAP specialty engine has become the “black sheep” of the family; the current zEC12 and zBC12 are planned to be the last System z servers to offer support for zAAP specialty engine processors.

As of z/OS V1.11, functionality was delivered enabling zAAP eligible workloads to run on zIIP engines.  This function allowed both zIIP and zAAP-eligible workloads to process on the zIIP.  This capability was ideal for customers with insufficient zAAP or zIIP eligible workload to justify a specialty engine, whereas the combined eligible workloads increase the ROI metrics for zIIP deployment.  The zAAP specialty engine is primarily targeted at web-based applications and SOA-based technologies, namely Java and XML.

So for z/OS type workloads, we must “zIIP Into The Future”…

Sometimes we need to look at the big picture, where the IBM organization is comprised of many business units, including the Mainframe business unit.  The Mainframe business unit itself contains many groups, including, but not limited to, the Hardware and Software groups.

As we all know, z/OS software TCO is significant and so this translates into higher revenues for the IBM Mainframe software group; but what about the IBM Mainframe hardware group?  Perhaps the specialty engines, primarily in the form of the zIIP, will generate a revenue stream for this business unit.  Along with the introduction of the zBC12 & zEC12 servers, IBM increased the zIIP to General Purpose (CP) engine ratio to 2:1, meaning you can have two zIIP specialty engines for each CP engine, each running at full capacity.  Previously the maximum ratio allowed was 1:1 (Specialty:CP).

What workloads are zIIP eligible?  Over time and since 2006 the amount of workload that is zIIP eligible has increased, primarily due to software development and upgrade efforts of IBM and the 3rd party ISV community:

  • DB2 for z/OS exploits the zIIP capability for portions of eligible data serving, pureXML and utility workloads
  • Other 3rd party DBMS solutions, including ADABAS & IDMS offload workload to zIIP
  • Most Systems Management tools (E.g. OMEGAMON, MAINVIEW, RMF, SYSVIEW, et al)
  • z/OS XML System Services for eligible XML validating and non-validating workloads
  • Other z/OS functions including z/OS Communications Server, Global Mirror, CIM Server, et al

What are the benefits of deploying a zIIP specialty engine?

  • Lower acquisition and maintenance costs, when compared with general CP
  • zIIP engines run at full rated CP speed
  • Offload work (CPU) from General Purpose (CP) engines
  • No cost for Sub-Capacity eligible IBM software (I.E. WLC)

So, one must draw one’s own conclusions, but seemingly the deployment of zIIP engines is a “no brainer”!

Hmmm, once again, evolution is a good thing and the zIIP engine has an 8 year history and its predecessor zAAP, a 10 year history.  This ~10 year period has allowed for user experiences and IBM function developments to evolve a more stable and rounded offering and as previously stated, a product for the IBM Mainframe Hardware group to focus upon.

From a customer viewpoint, zIIP deployment requires a Capacity Planning evolution, which should be reasonably straightforward.  The big difference is the CP to zIIP offload consideration and some of the lessons learned (illustrated in the sketch after this list) include:

  • Software costs – Multiple-Processors; CP to zIIP Offload Rate; zIIP utilization
  • Hardware costs – Installed Books (total MSU/MIPS capacity); Additional LPAR(s)
  • Peak CPU utilization – Safeguard that zIIP exploitation reduces peak CPU usage
  • CPU per Transaction – Slight increase in CPU (not necessarily elapsed time) as workload switches from CP to zIIP
  • zIIP utilization – Early experiences indicate ~50% zIIP engine busy is a good number
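
Pulling those lessons together, here is a minimal Python sketch of the offload arithmetic; all utilization figures are hypothetical placeholders and real planning would use RMF/SMF measurements:

```python
def ziip_offload_projection(cp_msu_peak, eligible_fraction, offload_rate,
                            ziip_capacity_msu, target_ziip_busy=0.5):
    """Project the effect of adding a zIIP on peak General Purpose (CP) usage.

    eligible_fraction: share of the CP peak that is zIIP eligible.
    offload_rate: share of eligible work that actually moves to the zIIP.
    target_ziip_busy: early experience suggests ~50% zIIP busy is a sensible
        planning ceiling (see the list above).
    All numbers in the example are illustrative only.
    """
    offloaded_msu = cp_msu_peak * eligible_fraction * offload_rate
    ziip_busy = offloaded_msu / ziip_capacity_msu
    return {
        "new_cp_peak_msu": cp_msu_peak - offloaded_msu,
        "ziip_busy": round(ziip_busy, 2),
        "within_planning_ceiling": ziip_busy <= target_ziip_busy,
    }

print(ziip_offload_projection(cp_msu_peak=800, eligible_fraction=0.3,
                              offload_rate=0.8, ziip_capacity_msu=400))
```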

In conclusion, zIIP deployment has been gradual and evolutionary, but many factors indicate that zIIP is here to stay and that it is the future.  Seemingly from an IBM viewpoint, there is benefit for the Mainframe Hardware Group in terms of the eradication of the zAAP engine, the increase in the zIIP:CP ratio to 2:1 and the associated customer benefits of Sub-Capacity software pricing.  From a customer viewpoint, ignoring these pointers might not be wise, as z/OS software costs are significant and CPU resource requirements keep increasing.  Adding extra zIIP CPU capacity reduces hardware and associated software costs and so this is the “no brainer” observation that can’t be ignored for much longer…