System z: I/O Interoperability Evolution – From Bus & Tag to FICON

Since the introduction of the S/360 Mainframe in 1964 there has been a gradual evolution of I/O connectivity, taking us from copper Bus & Tag to fibre ESCON and now FICON channels.  Obviously during this ~50 year period there have been many releases of Mainframe servers and indeed Operating Systems.  In this timeframe there have been two significant I/O technology milestones.  Firstly, in 1990, ESCON was part of the significant S/390 announcement (MVS/ESA), where migration to ESCON was a great benefit, if only for replacing the heavy and bulky copper Bus & Tag channels.  Secondly, even though FICON was released in the late 1990’s, in 2009 IBM announced that the z10 would be the last Mainframe server to support greater than 240 native ESCON channels.  Similarly IBM declared that the last zEnterprise servers to support ESCON channels would be the z196 and z114.  Each of these major I/O evolutions required a migration philosophy, as not every I/O device would be upgraded to support either native ESCON or FICON channels.  How did customers achieve these mandatory I/O upgrades to safeguard IBM Mainframe Server and associated Operating System longevity?

In 2009 it was estimated that ~20% of all Mainframe customers were using ESCON-only I/O infrastructures, while only ~20% were deploying FICON-only infrastructures.  Similarly, ~33% of z9 and z10 systems were shipped with ESCON CVC (Block Multiplexor) and CBY (Byte Multiplexor) channels defined, while ~75% of all Mainframe Servers had native ESCON (CNC) capability.  From a dispassionate viewpoint, clearly the migration from ESCON to FICON was going to be a significant challenge, while even in this timeframe, there was still use of Bus & Tag channels…

One of the major strengths of the IBM Mainframe ecosystem is the partner network, primarily software (ISV) based, but with some significant hardware (IHV) providers.  From a channel switch viewpoint, we will all be familiar with Brocade, Cisco and McData, where Brocade acquired McData in 2006.  However, from a channel protocol conversion viewpoint, IBM worked with Optica Technologies to deliver a solution that would allow support for ESCON and Bus & Tag channels on the FICON-only zBC12/zEC12 and future Mainframe servers (I.E. z13, z13s).  Somewhat analogous to the smartphone, where the user doesn’t necessarily know that an ARM processor might be delivering CPU power to their phone, sometimes even seasoned Mainframe professionals might inadvertently overlook that the Optica Technologies Prizm solution has been, or indeed is still, deployed in their System z Data Centre…

When IBM work with a partner from an I/O connectivity viewpoint, clearly IBM have to safeguard that said connectivity has the highest interoperability capability, with bulletproof data exchange attributes.  Sometimes we might take this for granted with the ubiquitous disk and tape subsystem suppliers (I.E. EMC, HDS, IBM, Oracle), but for FICON conversion support, Optica Technologies was a collaborative partner for IBM.  Ultimately the IBM Hardware Systems Assurance labs deploy their proprietary System Assurance Kernel (SAK) processes to safeguard I/O subsystem interoperability for their System z Mainframe servers.  Asking that rhetorical question: when was the last time you asked your IHV for sight of the System Assurance Kernel (SAK) exit report from their collaboration with the IBM Hardware Systems Assurance labs for the I/O subsystem you’re considering or deploying?  In conclusion, the SAK compliant, elegant, simple and competitively priced Prizm solution allowed the migration of tens if not hundreds of thousands of ESCON connections in thousands of Mainframe data centres globally!

With such a rich heritage of providing a valuable solution to the global IBM Mainframe install base, whether the smallest or largest, what would be next for Optica Technologies?  Obviously leveraging their expertise in FICON channel support would be a good way forward.  With the recent acquisition of Bus-Tech by EMC and the eradication of the flexible MDL tapeless virtual tape offering, Optica Technologies are ideally placed to be that small, passionate and eminently qualified IHV to deliver a turnkey virtual tape solution for the smaller and indeed larger System z user.  The Optica Technologies zVT family leverages the robust and heritage-class Prizm technology, delivering an innovative family of virtual tape solutions.  The entry-level “Virtual Tape In A Box” zVT 3000i provides 2 FICON channel interfaces and 4 TB uncompressed internal RAID-5 disk space, seamlessly interfacing with all System z supported tape devices (I.E. 3490, 3590) and processes.  A single enterprise class zVT 5000-iNAS node delivers 2 FICON channel interfaces and NFS storage capacity from 8 TB to 1 PB in a single frame, with standard deduplication, compression, replication and encryption features.  The zVT 5000-iNAS is available with multi-node configuration support for additional scalability and resiliency.  For those customers wishing to deploy their own choice of NFS or FC storage subsystem, the zVT 5000-FLEX allows such connectivity accordingly.

In conclusion, sometimes it’s all too easy to take some solutions for granted, when they actually delivered a tangible and arguably priceless solution in the evolution of your organization’s System z Mainframe server journey from ESCON, if not Bus & Tag, to FICON.  Perhaps the Prizm solution is one of these unsung products?  Therefore, the next time you’re reviewing the virtual tape marketplace, why wouldn’t you seriously consider Optica Technologies, given their rich heritage in FICON channel interoperability?  Given that IBM chose Optica Technologies as their strategic partner for ESCON to FICON migration, seemingly even IBM might have thought “nobody gets fired for choosing…”!

z/OS Workload Manager (WLM): Balancing Cost & Performance

A sophisticated mechanism is required to orchestrate the allocation of System z resources (E.g. CPU, Memory, I/O) to multiple z/OS workloads, each with differing business processing priorities. Put very simply, a mechanism is required to translate business processing requirements (I.E. SLA) into an automated and equitable z/OS performance manager. Such a mechanism will safeguard the highest possible throughput, while delivering the best possible system responsiveness. Ideally, such a mechanism will assist in delivering this optimal performance for the lowest cost; for z/OS, primarily Workload License Charges (WLC) related. Of course, the Workload Manager (WLM) z/OS Operating System component delivers this functionality.

A rhetorical question for all z/OS Performance Managers and z/OS MLC Cost Managers would be “how much importance does your organization place on WLM and how proactively do you manage this seemingly pivotal z/OS component”? In essence, this seems like a ridiculous question, yet there is evidence that suggests many organizations, both customer and ISV alike, don’t necessarily consider WLM to be a fundamental or high priority performance management discipline. Let’s consider several reasons why WLM is a fundamental component in balancing cost and performance for each and every z/OS environment:

  • CPU (MSU) Resource Capping: Whatever the capping method (I.E. Absolute, Hard, Soft), WLM is a controlling mechanism, typically in conjunction with PR/SM, determining when capping is initiated, how it is managed and when it is terminated. Therefore from a dispassionate viewpoint, any 3rd party ISV product that performs MSU optimization via soft capping mechanisms should ideally consider the same CPU (E.g. SMF Type 70, 72, 99) instrumentation data as WLM. Some solutions don’t offer this granularity (E.g. AutoSoftCapping, iCap).
  • MLC R4HA Cost Management: WLM is the fundamental mechanism for controlling this #1 System z software TCO component; namely, WLM collects CPU MSU resource usage every 5 minutes, where the average of the latest 48 consecutive measurements is commonly known as the Rolling 4 Hour Average (R4HA); see the sketch after this list. In an ideal world, an optimally managed workload that generates a “valid monthly peak” will fully utilize this “already paid for” available CPU MSU resource for the remainder of the MLC eligible month (I.E. Start of the 2nd day in a calendar month, to the end of the 1st day in the next calendar month). More recently, Country Multiplex Pricing (CMP) allows an organization to move workloads between System z server (I.E. CPC) structures, without cost consideration for cumulative R4HA peaks. Similarly, Mobile Workload Pricing (MWP) reporting will be simplified with WLM service definitions in z/OS 2.2. Therefore it seems prudent that real-time WLM management, both in terms of real-time reporting and proactive decision making, makes sense.
  • System z Server CPU Management: As System z server CPU chips evolve (E.g. CPU Chip Cache Hierarchy and Relative Nest Intensity), there are complementary changes to the z/OS Operating System management components. For example, HiperDispatch Mode delivers CPU resource usage benefit, considering CPU chip cache resources, intelligently allocating workload to as few logical processors as possible. It therefore follows that prioritization of workloads via WLM policy definitions becomes increasingly important. In this instance one might consider that CPU MF (SMF Type 113) and WLM Topology (SMF Type 99) are complementary reporting techniques for System z server design and management.
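
To make the R4HA and capping mechanics above concrete, here is a minimal sketch, averaging the latest 48 five-minute MSU samples and comparing the result against a soft-cap threshold, much as WLM and PR/SM would when initiating capping.  All names, sample values and the Defined Capacity figure are hypothetical, for illustration only:

  import java.util.ArrayDeque;
  import java.util.Deque;

  // Hypothetical sketch: Rolling 4 Hour Average (R4HA) from 5-minute MSU samples,
  // with a simple soft-cap comparison against a Defined Capacity threshold.
  public class R4haSketch {

      private static final int WINDOW = 48; // 48 x 5 minutes = 4 hours

      private final Deque<Double> samples = new ArrayDeque<>();

      // Record the latest 5-minute MSU consumption sample.
      void addSample(double msu) {
          samples.addLast(msu);
          if (samples.size() > WINDOW) {
              samples.removeFirst(); // retain only the most recent 4 hours
          }
      }

      // Average of the retained samples; the R4HA once 48 samples exist.
      double r4ha() {
          return samples.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
      }

      public static void main(String[] args) {
          R4haSketch lpar = new R4haSketch();
          double definedCapacity = 400.0; // hypothetical soft-cap threshold (MSU)

          // Hypothetical samples: a workload ramping up towards a peak
          for (int i = 0; i < WINDOW; i++) {
              lpar.addSample(380.0 + (i * 3.0));
          }

          System.out.printf("R4HA: %.1f MSU%n", lpar.r4ha());
          if (lpar.r4ha() > definedCapacity) {
              System.out.println("R4HA exceeds Defined Capacity; soft capping would be initiated");
          }
      }
  }

Production soft capping obviously considers far more (SMF Type 70, 72 and 99 instrumentation, WLM policy, et al), but the windowed average is the heart of the R4HA.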

Since its announcement in September 1994 (I.E. MVS/ESA Version 5), WLM has evolved to become a fully-rounded and highly capable z/OS System Resources Manager (SRM), simply translating business prioritization policies into dynamic function, optimizing System z CPU, Memory and I/O resources. More recently, WLM continues to simplify the management of CPU chip cache hierarchy resources, while reporting abilities gain in strength, with topology reporting and the promise of simplified MWP reporting. Moreover, WLM resource management becomes more granular and seemingly the realm of possibility exists to “micro manage” System z performance, as and if required. Conversely, WLM provides the opportunity to simplify System z performance management, with intelligent workload differentiation (I.E. Subsystem Enclave, Batch, JES, USS, et al).

Quite simply, IBM are providing the instrumentation and tools for the 21st Century System z Performance and Software Cost Subject Matter Expert (SME) to deliver optimal performance for minimal cost. However, it is incumbent upon each and every System z user to optimize software TCO, proactively implementing new processes and leveraging System z functions accordingly.

Returning to that earlier rhetorical question about the importance of WLM; seemingly its importance is without doubt, primarily because of its instrumentation and management abilities of increasingly cache rich System z CPU chips and its fundamental role in controlling CPU MSU resource, vis-à-vis the R4HA.

Although IBM will provide the System z user with function to optimize system performance and cost, for obvious commercial reasons IBM will not reduce the base cost of System z MLC software. However, recent MLC pricing announcements, namely Country Multiplex Pricing (CMP), Mobile Workload Pricing (MWP) and Collocated Application Pricing (zCAP), provide tangible options to reduce System z MLC TCO. Therefore the System z user might need to consider how they can access real-time WLM performance metrics, combining this instrumentation data with function to intelligently optimize CPU MSU resource, managing the R4HA accordingly.

Workload X-Ray (WLXR) from zIT Consulting simplifies WLM performance reporting, enabling users to drill down into the root cause of performance variances in a very fast and easy way. WLXR assists in root cause problem determination by zooming in, starting from a high level overview, going right down to detailed Service Class performance information, such as the Performance Index (PI), showing potential bottleneck situations during peak time. Any system overhead considerations are limited, as WLXR delivers meaningful real time information on a “need to know” basis.

A fundamental design objective for WLXR is data reduction, only delivering the important information required for timely and professional workload management; straight-to-the-point information instead of data overload, sometimes from a plethora of data sources (E.g. SMF, System Monitors, et al). WLXR incorporates the following easy-to-use functions:

  • Simplified Data Collection & Storage: Minimal system overhead TCP/IP based agents periodically (E.g. 5, 15, 60 Minutes) collect CPU (Type 70) and WLM (Type 72) data. Performance data is stored centrally in near real-time, building a historical repository with intelligent analytics for meaningful information presentation.
  • Intelligent GUI Based Information Presentation: Meaningful decision based reports and graphs detailing CPU (E.g. MSU, R4HA, Weight) and WLM (E.g. Service Class, Performance Index, Response Time, Transaction Workflow) resource usage. A drill-down design provides a granularity of data presentation, from Management Summary through to 3rd Level Technical Diagnostics use.
  • Corporate Identity Branding: A modular template design, allowing for easy corporate identity branding, with flexibility to easily add additional reports, as and if required.

Without doubt, WLM is a significant z/OS System Resources Management function, simplifying the translation of business workload requirements (I.E. Service Level Agreement) into timely and proactive allocation of major System z hardware resources (I.E. CPU, Memory, I/O). This management of System z resources has been forever thus for 20+ years, while WLM has always offered “software cost control” functionality, working with the various and evolving CPU capping techniques. What might not be so obvious, is that there is a WLM orientated price versus performance correlation, which has become more evident in the last 5 years or so. Whether Absolute Capping, HiperDispatch, Mobile Workload Pricing, Country Multiplex Pricing or evolving Soft Capping techniques, the need for System z users to integrate z/OS MLC pricing considerations alongside WLM performance based management is evident.

Historically there was not a clear and identified need for a z/OS Performance/Capacity Manager to consider MLC costs in their System z server designs. However, there is a clear and present danger that this historic modus operandi continues and there will only be one financial winner, namely IBM, with unnecessarily high MLC charges. Each and every System z user, whether large or small, can safeguard the longevity of their IBM Mainframe platform by recognizing and deploying proactive and current System z MLC cost management processes.

All too often it seems that capping can be envisaged as punitive, degrading system performance to reduce System z MLC costs. Such a notion needs to be consigned to history, with a focussed perspective on MSU optimization, where the valuable and granular MSU resource is allocated to the workload that requires such CPU resource, with near real-time performance profiling. If we perceive MSU optimization to be R4HA based and that IBM are increasing WLM function to assist this objective, CPU capping can be a benefit that does not adversely impact performance. As previously stated, once a valid R4HA peak has occurred, that high MSU watermark is available for the remainder of the MLC billing period. Similarly at a more granular level, once a workload has peaked and its MSU usage declines, the available MSU can be redirected to other workloads. With the introduction of Country Multiplex Pricing, System z users no longer need to concern themselves about creating a higher R4HA peak, when moving workloads between System z servers.

Quite simply, from the two most important perspectives, performance and cost optimization, WLM provides the majority of functionality to help System z users achieve the best performance for the lowest cost. Analytics based products like Workload X-Ray (WLXR) assist this endeavour, analysing WLM data in near real-time from a performance and MLC cost perspective. It therefore follows that if this important information is also available to sophisticated MSU optimization solutions that consider WLM performance (E.g. zDynaCap, zPrice Manager), then proactive performance and cost management follows. It’s hard to envisage how a fully-rounded MSU optimization decision can be implemented in near real-time, from an MSU optimization solution that does not consider WLM performance metrics…

System z MLC Pricing Increases: Look After The Pennies…

Recently IBM announced ~4% price increases in z/OS Monthly License Charges (MLC) for selected Operating System and Middleware software programs and associated features. Specifically, price increases will apply to the VWLC, AWLC, EWLC, AEWLC, PSLC, FWLC and TWLC pricing metrics. Notably, SDSF price increases will be ~20%, with Advanced Function Printing (AFP) product price increases of ~13-24%. In a global economy where inflation rates for The USA and Western Europe are close to 0%, one must draw one’s own conclusions accordingly. Let’s not forget that product version changes typically have an associated price increase. From a contractual viewpoint, IBM only have to provide 90 days advance notice for such price changes; in this instance, IBM provided 150+ days advance notice.

Price increases are inevitable and as always, it’s better to be proactive as opposed to reactive to such changes. The old proverbs always make good sense and in this instance, “look after the Pennies and the Pounds will look after themselves”! This periodic IBM price increase is inevitable, but is not the underlying issue for controlling System z software costs. For many years, since 1994 to be precise, when IBM introduced Parallel Sysplex License Charges (PSLC), the need for IBM Mainframe users to minimize MSU usage has been of high if not critical importance. Nothing has changed in this 20+ year period and even though IBM might have introduced Sub-Capacity and specialty engines to minimize chargeable MSU usage, has each and every System z user optimized their MSU usage? Ideally this would not be a rhetorical question, but rather a “Golden Rule”, where despite organic CPU capacity increases of ~10% per annum, a System z environment could maintain near static IBM MLC software costs.

I have written several blog entries and presented on this subject matter over the years.

The simple bottom line is that System z MLC software accounts for ~20-35% of the overall System z TCO, typically being the #1 expenditure item. For that reason alone, it’s incumbent upon each and every System z user to safeguard they have the technical and commercial skills in place to manage this cost item, not as an afterthought, but inbuilt into each and every System z process, from application design, through to that often neglected afterthought, application tuning.

Many System z organizations might try to differentiate between the nuances of System and Application tuning, but such a “not my problem” type attitude is not acceptable, imposing a significant financial burden on each and every organization.

A dispassionate and pragmatic approach is required for optimizing System z CPU usage. In this timeframe, let’s examine the ~20% SDSF price increase. IBM will quite rightly state that in conjunction with their z/OS 2.2 release, there are significant SDSF product function advancements, including zIIP offload, REXX interoperability and increased information delivery. However, are such function improvements over and above the norm, or should they be expected as Business As Usual (BAU) product improvements, included in the Service & Support (S&S) or Monthly License Charges (MLC) already paid for the software?

In October 2013 I wrote a blog entry; Mainframe ISV Software: Is Continuous Product Improvement Always Evident? The underlying message was that an ISV should deliver the best product they can, for each and every release, without necessarily increasing software costs. In this particular instance, the product was an SDSF equivalent, namely (E)JES, which many years ago delivered all of the function incorporated in SDSF for z/OS 2.2, but for a fraction of the cost…

As of 1 November 2015, IBM will start billing cycles for Country Multiplex Pricing (CMP), which requires the October 2015 version of SCRT, namely V23R10. A Multiplex is defined as a collection of all System z servers in one country, measured as one System z server for software sub-capacity reporting. Sub-Capacity program utilization peaks across the Multiplex will be measured, as opposed to separate peaks by System z servers. CMP also provides the flexibility to move and run workloads anywhere with the elimination of Sysplex aggregation pricing rules.

Migrating to CMP is focussed on CPU capacity growth and flexibility going forward. Therefore System z users should not expect price reductions for their existing workloads upon CMP deployment. Indeed there are CMP deployment considerations. A CMP MSU baseline (base) needs to be established, comprising an MSU Base and associated MLC Base Factor for each sub-capacity MLC product and each applicable feature code. These MSU and MLC bases represent the previous 3 Month averages reported by SCRT before commencing CMP. Quite simply, to gain the most from CMP, the System z user must safeguard that their R4HA for each and every MLC product is optimized before setting the CMP baseline, otherwise CMP related cost savings going forward are likely to be negligible.
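
As a minimal sketch of that baseline arithmetic, assuming hypothetical product names and SCRT-reported monthly MSU peaks (the real values come from SCRT reports), the MSU Base per MLC product is simply the average of the previous three monthly figures:

  import java.util.LinkedHashMap;
  import java.util.Map;

  // Hypothetical sketch: CMP MSU Base per sub-capacity MLC product,
  // derived as the average of the previous three monthly SCRT-reported peaks.
  public class CmpBaselineSketch {

      static double msuBase(double[] lastThreeMonthlyPeaks) {
          double sum = 0.0;
          for (double peak : lastThreeMonthlyPeaks) {
              sum += peak;
          }
          return sum / lastThreeMonthlyPeaks.length;
      }

      public static void main(String[] args) {
          // Hypothetical SCRT monthly MSU peaks for each MLC product
          Map<String, double[]> scrtPeaks = new LinkedHashMap<>();
          scrtPeaks.put("z/OS", new double[] {850, 910, 880});
          scrtPeaks.put("CICS", new double[] {600, 640, 620});
          scrtPeaks.put("DB2", new double[] {700, 760, 730});

          scrtPeaks.forEach((product, peaks) ->
              System.out.printf("%-4s CMP MSU Base: %.0f MSU%n", product, msuBase(peaks)));
      }
  }

This is also why optimizing the R4HA before the baseline is set matters: the three monthly peaks feeding the average dictate all future CMP comparisons.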

From a very high-level management viewpoint, we must observe that IBM are a commercial organization, and although IBM provide mechanisms for controlling cost going forward, only the System z user can optimize System z MLC cost for their organization. Arguably with CMP, Soft-Capping isn’t a consideration, it’s mandatory.

Put very simply, each and every System z user can safeguard that they look after the Pennies (Cents) and the Pounds (Euros, Dollars) will look after themselves by paying careful attention to System z MLC software costs. Setting a baseline of System z MLC costs is mandatory, whether for the first time, or to set a new baseline for CMP deployment. Maintaining or lowering this System z MLC cost baseline should or arguably must be the objective going forward, even when considering 10% organic CPU growth, each and every year. System z decision-makers and managers must commit to such an objective and safeguard the provision of adequately skilled personnel to optimize such a considerable TCO cost line item (I.E. MLC @ ~20-35% of System z TCO). In an ecosystem with technical resources including DBA, Systems Programmer, Capacity Planner, Application Personnel, Performance Tuning, et al, why wouldn’t there be a specialist Software Cost Manager?

Let’s consider how even an inexperienced System z user can maintain a baseline of System z MLC costs, even with organic CPU capacity growth of 10% per annum (a simple arithmetic sketch follows this list):

  • System z Server Upgrade: Higher specification CPU chips or Technology Transition Offering (TTO) pricing metrics deliver 10%+ cost per MSU benefits.
  • System z Specialty Engines: Over time, more and more application workload can be offloaded to zIIP processors, with no sub-capacity MLC software charges.
  • System z Software Version Upgrades: Major subsystems such as CICS, DB2, IMS, MQSeries and WebSphere deliver opportunity to lower cost per MSU; safeguard such function exploitation.
  • Application Tuning: Whether SQL, COBOL, Java, et al, or the overall I/O subsystem, safeguard that latest programming techniques and I/O subsystem functions are exploited.
  • New Application Deployment: As and when possible, deploy new or convert existing workloads to benefit from the optimal MLC pricing metric; previously zNALC, nowadays zCAP.
  • Technical & Commercial Skills Currency: Safeguard personnel have the latest System z software pricing knowledge, ideally from an independent 3rd party such as Watson & Walker.
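
To illustrate the “Golden Rule” arithmetic behind this list, here is a minimal sketch with purely illustrative figures (1,000 chargeable MSU at $750 per MSU); ~10% organic MSU growth multiplied by ~10% cost per MSU optimization (1.10 × 0.90 ≈ 0.99) holds annual MLC spend near static:

  // Hypothetical sketch: ~10% annual MSU growth offset by ~10% cost per MSU
  // optimization holds System z MLC spend near static. Figures are illustrative.
  public class MlcBaselineSketch {
      public static void main(String[] args) {
          double msu = 1000.0;       // hypothetical chargeable MSU capacity
          double costPerMsu = 750.0; // hypothetical $/MSU (industry range ~$500-$1000)

          for (int year = 0; year <= 5; year++) {
              System.out.printf("Year %d: %,.0f MSU @ $%.0f/MSU = $%,.0f per annum%n",
                  year, msu, costPerMsu, msu * costPerMsu);
              msu *= 1.10;        // ~10% organic CPU capacity growth
              costPerMsu *= 0.90; // ~10% cost per MSU optimization (TTO, zIIP, tuning, et al)
          }
      }
  }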

In conclusion, as householders we have the opportunity to optimize our cost expenditure, choosing and switching between various major cost items such as financial, utility and vehicle products. As System z users, we don’t have that option; only IBM provide System z servers and the associated base architecture, namely the most expensive MLC software products, z/OS, CICS, DB2, IMS and WebSphere/MQ. However, just as we manage our domestic budgets, reducing power usage, optimizing vehicle TCO and getting more bang for our buck from various financial products, we can and must deliver this same due diligence for our System z MLC TCO. With industry averages of ~$500-$1000 per MSU for z/OS MLC software and associated annual expenditure measured in many millions, why wouldn’t any System z user look to deliver 10%+ cost per MSU optimization, year-on-year for their organization?

Clearly the cost of doing nothing in this instance, is significant, measured in magnitudes of millions, each and every year. Hence for System z MLC TCO optimization, looking after the Pennies is more than worthwhile, while the associated benefit of the Pounds, Euros or Dollars looking after themselves is arguably priceless.

Java: Is System z A Viable Server Platform?

As long ago as 1997, IBM integrated Java into their IBM Mainframe platform, in those days via the then flagship OS/390 Operating System. As with any new technology, perhaps the initial OS/390 Java integration offerings were not perfect, but some ~20 years later, a lot has changed…

In 2000, IBM Java SDK 1.3.1 delivered z/OS and Linux on z support, quickly followed by 64-bit z/OS support in 2003 via SDK 1.4. In 2004 Java Virtual Machine (JVM) and JIT (Just-In-Time) compiler technology support was provided, while Java code has always exploited IBM specialty engines, initially via zAAP and now via zIIP and the zAAP on zIIP capability. Put simply, IBM continues to invest aggressively in Java for System z, demonstrating a history of innovation and performance improvements, up to and including the latest z13 server.

So why should a 21st century business consider the System z platform for Java workloads?

Arguably the primary reason is a rapidly emerging requirement for the true 24*7*365 workload, which cannot accommodate a batch window, where Java is ideally placed to serve both batch and OLTP workloads. Put another way, the need to process batch work has not gone away, whereas a requirement to process batch work concurrently with OLTP services has emerged. Of course, traditionally the typical System z enterprise might have two sets of IT staff for OLTP and batch workloads, namely in the IT Support and Application Management teams, whereas via Java and a workload centric approach, separate batch and OLTP support personnel are not necessarily required.

For the System z platform, Java support has always been incorporated into the core architectural building blocks, namely z/OS, CICS, DB2, IMS, WebSphere, Batch Runtime, et al. Therefore there are no functional reasons why new applications or indeed existing applications cannot be engineered using the pervasive Java programming language and deployed on the System z platform.

Quite simply, Java is a critically important language for IBM System z. Java has become foundational for data serving and transaction serving, the traditional strengths of IBM System z. WebSphere applications written in Java and processing via System z, benefit from a key advantage through co-location. This delivers better response times, greater throughput and reduced system complexity when driving CICS, DB2 and IMS transactions.

Java is also critical for enabling next generation workloads in the IBM defined Cloud, Analytics, Mobile & Security (CAMS) framework. Cloud and mobile applications can access z/OS data and transactions via z/OS Connect and other WebSphere solutions, all inherently Java based. Java on System z also provides a full set of cryptographic functions to implement secure solutions. A key strength of Java applications is the ability to immediately benefit from the latest hardware performance improvements using the Just In-Time (JIT) compiler incorporated in the latest IBM Java SDK releases.

Let’s not forget, there are many other good reasons why Java might be considered as a viable application programming language:

  • Personnel Skills Availability: Java is typically ranked in the top 3 of most widely used programming languages; therefore personnel availability is abundant and cost efficient.
  • Application Code Portability: Recognizing Java bytecode and associated JVM functionality, no matter what the platform (E.g. Wintel, X86 Linux, UNIX, z/OS, Linux on System z, et al), the Java application code should execute without porting considerations (see the trivial sketch after this list).
  • Application Tooling Support: Application Development tools have evolved to the point of true platform independence, where Application Programmers just create their code; they don’t necessarily know, or sometimes care, where that code will execute. Let’s not forget the simplification of Java code for OLTP and batch workloads, reducing associated IT lifecycle support costs.
  • TCO Efficiencies: Simplified Application Development and deployment reduces associated cost, while reducing implementation time for mission-critical workloads. Java exploitation of the zAAP (zAAP on zIIP) safeguards low software costs and optimized processing times (I.E. Sub-Capacity specialty engines run at full speed).
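
As a trivial illustration of the portability point above, the same compiled bytecode for this hypothetical class runs unchanged on any JVM platform, whether Wintel, Linux on System z or z/OS; only the reported runtime properties differ:

  // Trivial sketch: identical bytecode runs on any JVM platform;
  // only the runtime properties reported below will differ.
  public class PortabilitySketch {
      public static void main(String[] args) {
          System.out.printf("Running on %s (%s), Java %s%n",
              System.getProperty("os.name"),
              System.getProperty("os.arch"),
              System.getProperty("java.version"));
      }
  }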

With the announcement of the zEC12 server, notable Java enhancements included:

  • Hardware Transaction Memory (HTM) – Better concurrency for multi-threaded applications
  • Run-Time Instrumentation (RI) – A new hardware facility designed for managed runtimes
  • 2 GB Page Frames – Improved performance targeting 64-bit heaps
  • Pageable 1 MB Large Pages (Flash Express) – Better versatility of managing memory
  • New Software Hints/Directives – Data usage intent improves cache management; Branch pre-load improves branch prediction
  • New Trap Instructions – Reduce implicit bounds/null checks overhead

In summary, System z users can expect up to 60% throughput performance improvement amongst Java workloads measured with zEC12 and the IBM Java 7 SR3 SDK.

IBM z13 and the IBM Java 8 SDK deliver improved Java performance, including the Single Instruction Multiple Data (SIMD) vector engine, Simultaneous Multi-Threading (SMT) and improved CP Assist for Cryptographic Function (CPACF), delivering up to 2X improvement in throughput-per-core for security-enabled applications and up to 50% improvement for other generic applications.

Other z13 Java functional and performance improvements include:

  • Secure Application Serving – Application serving with Secure Socket Layers (SSL) will exploit the new Java 8 Clear Key CPACF and SIMD vector instructions for string manipulation. An additional 75% performance improvement for Java 8 on z13 with SMT versus Java 8 on zEC12.
  • Business Rules Processing – Business rules processing with Java 8 takes advantage of the SIMD vector instructions and SMT for zIIP specialty engines on z13 to achieve significant improvements in throughput-per-core. An additional 37% performance improvement from z13 SMT zIIPs with Java 8 versus Java 8 on zEC12.
  • Specific z/OS Java 8 Exploitation of z13 SIMD – Java 8 exploits the new z13 SIMD vector hardware instructions for Java libraries and functions, delivering improved performance, where specific idioms/operations were improved by between 2X and 60X. Performance benefits for real life Java applications will be dependent on how frequently these idioms/operations are used.

In conclusion, the IBM commitment to Java on System z is clearly evident and the cost, performance and security proposition becomes compelling on the latest zEC12 and z13 Mainframe servers. The pervasive deployment of Java as a universal IT programming language dictates that programmer availability will never be an issue, and platform independence dictates that Java applications can be created and processed on any platform. Let’s not forget the strong single thread performance and I/O scalability of System z, a significant differentiator when comparing Java performance on any IT platform.

Moreover, as always, perhaps the business dictates what platform is the most suitable for business applications. The evolution to a combined OLTP and batch workload for the 21st Century 24*7*365 mission critical business application, ideally places Java as an eminently viable programming language. Therefore there is no requirement to reengineer any existing System z application, or to find an alternative platform for new business functions. As always, the System z Mainframe platform should never be overlooked…

z13 WLC Software Pricing Updates: Are You Ready?

Along with the z13 hardware announcement were several very obvious WLC pricing announcements, but more importantly, two hidden Statements Of Direction (SOD) or pre-announcements.

I guess we can all remember the “zSeries Technology Dividend” where, put simply, when upgrading zSeries servers, users would receive a ~10%+ software price/performance benefit.  Does anybody still remember the IBM Mainframe Charter from 2003?  That was the document that first referenced this price/performance benefit, which became known as the “technology dividend”.  Specifically, this document stated:

IBM lowered MSU values incorporated in the z990 microcode by approximately 10 percent, resulting in IBM software savings for IBM zSeries software products with MSU-based pricing.  These reduced MSUs do not indicate a change in machine performance. Superior performance and technology within the z990 has allowed IBM to provide improved software prices for key IBM zSeries operating system and middleware software products.

Put really simply, for z990, z9 and z10 server upgrades, IBM delivered this ~10% benefit with faster CPU chips.  Therefore, no noticeable impact on Software Pricing, Capacity Planning or Performance Measurement processes.  However, with the z196/z114, this ~10% benefit could no longer be delivered by CPU chip hardware speed enhancements.  To compensate, IBM introduced the Advanced Workload License Charges (AWLC) pricing regime.  AWLC is an evolution of the Variable (VWLC) pricing regime, lowering per MSU costs for WLC eligible products (E.g. z/OS, CICS, DB2, IMS, WebSphere/MQ, et al).  Hence delivering the ~10% price/performance benefit when upgrading from a z10 to a z196 or z114 (AEWLC) server.

Of course, when upgrading to the zEC12 or zBC12, further refinement of AWLC pricing was required to deliver this ~10% price/performance benefit.  Hence, IBM introduced the AWLC Technology Transition Offerings (TTO), lowering AWLC prices for zXC12 and now z13 servers.

For z13, IBM announced the following z13 AWLC Technology Transition Offerings (a simple discount lookup sketch follows this list):

  • Technology Update Pricing for the IBM z13 (TU3): When stand-alone z13 servers are priced with AWLC, or when all the servers in an aggregated Sysplex or Complex are z13 servers priced with AWLC, these servers receive a reduction to AWLC pricing, which is called TU3.  The reduction is based on the quantity of z13 Full Capacity MSUs for a stand-alone server, or the sum of Full Capacity MSUs in an actively coupled Parallel Sysplex or Loosely Coupled Complex made up entirely of z13 servers.  AWLC discounts range from 4% (4-45 MSU) to 14% (5477+ MSU).
  • AWLC Sysplex Transition Charges (TC2): When two or more machines exist in an aggregated Sysplex or Complex & at least one is a z13, zEC12 or zBC12 server & at least one is a z196 or z114 server, with no older technology machines included, they will receive a reduction to AWLC pricing across the aggregated Sysplex or Complex.  This reduction provides a portion of the benefit related to the Technology Update Pricing for AWLC (TU1), based upon the proportion of z13/zXC12 server capacity in the Sysplex or Complex.  AWLC discounts range from 0.5% (0-20% z13/zXC12 MSU) to 4.5% (81%-<100% z13/zXC12 MSU).
  • AWLC Sysplex Transition Charges (TC3): When two or more machines exist in an aggregated Sysplex or Complex & at least one is a z13 server & at least one is a zEC12 or zBC12 server, with no older technology machines included, they will receive a reduction to AWLC pricing across the aggregated Sysplex or Complex. This reduction provides a portion of the benefit related to the IBM z13 TU3 offering, based on the total Full Capacity MSU of all z13, zEC12, & zBC12 Machines in the Sysplex or Complex.  AWLC discounts range from 2.8% (4-45 MSU) to 9.8% (5477+ MSU).
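
To illustrate how such banded discounts might be applied programmatically, here is a minimal lookup sketch.  Only the two published boundary bands for TU3 (4% at 4-45 MSU and 14% at 5477+ MSU) are taken from the announcement; the intermediate bands, class and method names are hypothetical placeholders:

  import java.util.Map;
  import java.util.NavigableMap;
  import java.util.TreeMap;

  // Hypothetical sketch: TU3-style AWLC discount lookup by Full Capacity MSU band.
  public class Tu3DiscountSketch {

      private static final NavigableMap<Integer, Double> DISCOUNT_BANDS = new TreeMap<>();
      static {
          DISCOUNT_BANDS.put(4, 4.0);     // 4-45 MSU: 4% (published)
          DISCOUNT_BANDS.put(46, 6.0);    // hypothetical intermediate band
          DISCOUNT_BANDS.put(1000, 10.0); // hypothetical intermediate band
          DISCOUNT_BANDS.put(5477, 14.0); // 5477+ MSU: 14% (published)
      }

      static double discountFor(int fullCapacityMsu) {
          Map.Entry<Integer, Double> band = DISCOUNT_BANDS.floorEntry(fullCapacityMsu);
          return (band == null) ? 0.0 : band.getValue();
      }

      public static void main(String[] args) {
          int[] serverMsu = {30, 700, 6000}; // hypothetical Full Capacity MSU ratings
          for (int msu : serverMsu) {
              System.out.printf("%d MSU -> %.1f%% AWLC discount%n", msu, discountFor(msu));
          }
      }
  }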

These AWLC software pricing announcements are Business As Usual (BAU) and to be expected, but if we dig slightly deeper into the z13 announcements, we will find two other pre-announcements of interest!

Since introducing sub-capacity and WLC pricing regimes, IBM have continually evolved zSeries software sub-capacity pricing mechanisms, with zNALC, AWLC, IWP and more recently MWP offerings.  From a generic viewpoint, with the exception of zNALC, a niche new workload price offering, these pricing announcements did not challenge the “status quo”, where aggregated MSU and large LPAR structures were the ideal.  So why might the upcoming z13 (E.g. Q2 2015) pricing announcements be of note?  Primarily because they challenge the notion of having separate structural entities (I.E. Sysplex Coupled zSeries Servers & LPARs) for existing and new workloads.

Country Multiplex Pricing (CMP): A major evolution, essentially eliminating prior Sysplex pricing rules, which required that systems be interconnected and/or sharing the same data in order to be eligible for aggregation of MLC software pricing charges.  A Multiplex is defined as the collection of all z Systems within a country.  Therefore, sub-capacity usage will be measured & reported as a single machine, regardless of the connectivity or data sharing configurations.  A new sub-capacity reporting tool is being implemented & clients should expect a transition period as the new pricing model is implemented.  This should allow flexibility to move & run work anywhere, eradicating multiple workload peaks when workloads move between machines.  Ultimately the cost of growth is reduced, with one price per product based on MLC capacity growth anywhere in the country.  CMP should facilitate flexible deployment and movement of business workloads between all zSeries Servers located within a country, without impacting MLC billing.  For the avoidance of doubt, this will assist the customer in safeguarding they don’t encounter duplicate MLC peaks as a result of moving an LPAR workload from one zSeries Server to another.  It also removes all Sysplex aggregation considerations, Single Version Charging (SVC) time limits and Cross Systems Waivers (CSW).  Most notably, the cost per MSU for additional capacity will be optimized, being based upon total Multiplex MSU capacity.

IBM Collocated Application Pricing (ICAP): Previously, new applications (zNALC) required a separate LPAR to avoid increases in other MLC software charges.  ICAP facilitates new eligible applications being charged as if they are running in a dedicated environment, although technically they are integrated with other (non-eligible) workloads.  Software supporting the new application will not impact the charges for other MLC software collocated in the same LPAR.  ICAP appears as an evolution of the Mobile Workload Pricing (MWP) for z/OS pricing mechanism.  ICAP will use an enhanced MWRT, implemented as a z/OS application.  ICAP applies to z13, zXC12 and z196/z114 servers.  IBM anticipates that ICAP will deliver zNALC type price benefit, discounting ~50% of ICAP eligible software MSU.

Seemingly IBM have learned from the lessons of IWP, where at first glance, software discounts were attractive, but not at the cost of a separate LPAR.  From a reporting viewpoint, there are similarities to Mobile Workload Pricing for z/OS (MWP), but most notably, pricing is largely zNALC based.  Therefore ICAP collocates new workloads in the same LPAR as existing workloads, delivering the best price performance of any pricing regime, except zNALC, which is a niche and special edition software pricing metric.

In conclusion, CMP and ICAP are notable WLC pricing regime updates, because they do challenge the status quo of MSU aggregation via Sysplex coupled servers and the ability to collocate new and existing workloads in the same LPAR.  On the one hand, simplified pricing considerations from a granular per MSU cost viewpoint.  However, to optimize price versus performance, arguably the savvy Data Centre will now require a higher level of workload management, safeguarding optimum MSU capacity usage and associated performance.

zPrice Manager is an evolution of the typical soft-capping approach, which can be IBM function based, namely Defined Capacity (DC) or Group Capacity Limit (GCL), or ISV product based.  ISV products typically allow MSU management with dynamic MSU capacity resource management between LPAR, LPAR Group & CPC structures, ideally with Workload Manager (WLM) interaction.  If plug & play simple MSU management is required, these traditional IBM or 3rd party ISV approaches will still work with CMP and ICAP, but will they maximize WLC TCO?

The simple answer is no, because CMP allows the movement of workloads between zSeries Servers.  Therefore if WLC product (I.E. z/OS, CICS, DB2, IMS, WebSphere/MQ) pricing is to be country wide, and optimum WLM performance is to be maintained, a low level granularity of MSU management is required.

zPrice Manager from zIT Consulting allows this level of WLC software product management, with a High Level REXX programmatic interface, and the ability to store real life MSU profile data as callable REXX variables.  Similar benefits apply to ICAP workloads, where different WLM policies might be required for the same WLC product, deployed on the same collocated workload LPAR.  Therefore the savvy data centre will safeguard they optimize MSU TCO via MWP and/or ICAP pricing regimes, without impacting business application performance.

In conclusion, the typical z13 AWLC software pricing updates are Business As Usual (BAU) and can be implemented as and when required, without significant consideration.  Conversely, CMP and ICAP can deliver significant future benefit and should be considered in zSeries Server capacity planning forecasts.

Bottom Line Recommendation: Each and every zSeries Server user, whether large or small, should initiate contact with their IBM account teams, for CMP and ICAP briefings, allowing them to consider how they might benefit from these new WLC software pricing regimes.

Are You Ready For z/OS Mobile Workload Pricing (MWP)?

Recently IBM announced Mobile Workload Pricing (MWP) for z/OS, which can minimize the impact of mobile workloads on Sub-Capacity license charges, delivering optimized pricing for System z environments extending their workloads to incorporate mobile devices.

MWP only applies to Mainframe customers deploying a zEC12 or zBC12 in their enterprise, as per the AWLC or AEWLC (AKA Advanced/Entry Workload License Charges) metric; MWP is also extended if a zEC12 or zBC12 enterprise is deploying a z196 or z114 via the AWLC or AEWLC metric.

The primary consideration for MWP is determining how a Mainframe customer can comply with the tracking requirements for mobile workloads.  On the plus side, MWP does not require an isolation of mobile workload transactions in separate LPARs, using enhanced reporting for software pricing.  This is a major step forward when compared with Integrated Workload Pricing (IWP), which ideally requires large LPAR container structures, minimizing costs for WebSphere workloads, applying to the CICS, IMS and WebSphere MLC software products.  Conversely, MWP includes DB2 in the list of eligible software products for cost reduction.

If a Mainframe customer is eligible for MWP pricing they will then need to utilize the Mobile Workload Reporting Tool (MWRT), which is analogous to the original Sub-Capacity Reporting Tool (SCRT).  This is an either/or situation; the Mainframe customer only submits MWRT reports to IBM if they’re MWP eligible, otherwise the status quo remains, where non-MWP Mainframe customers continue to submit SCRT reports.

The Mainframe customer must track and report General Purpose (GP) CPU time for mobile transactions, reporting those values in a pre-defined format to IBM each month to benefit from MWP.  MWRT utilizes reported mobile transaction data to adjust the Rolling 4 Hour Average (R4HA) Sub-Capacity software eligible MSUs, with LPAR granularity.  Optimizing mobile transactions impact for peak LPAR MSU values delivers benefit when higher mobile transaction volumes generate MSU resource usage peaks (Workload Spikes).

MWRT calculates the R4HA for mobile transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The software program values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  Therefore MWRT determines the billable MSU peak for a given MLC software program on a CPC using the adjusted MSU values.
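
Here is a minimal sketch of that calculation, using hypothetical hourly figures for one MLC product across two LPARs; note how the mobile-driven spike hour no longer sets the billable peak once 60% of its mobile GP MSU is subtracted:

  // Hypothetical sketch of the MWRT adjustment: per reporting hour and LPAR,
  // subtract 60% of the mobile transaction GP MSU from the Sub-Capacity
  // eligible MSU, aggregate across LPARs, then take the peak hour.
  public class MwrtSketch {
      public static void main(String[] args) {
          // [hour][lpar] -> {subCapacityEligibleMsu, mobileGpMsu} (hypothetical)
          double[][][] hours = {
              {{300, 50}, {200, 10}},  // hour 1
              {{420, 180}, {210, 20}}, // hour 2: mobile-driven workload spike
              {{350, 40}, {220, 15}},  // hour 3
          };

          double rawPeak = 0.0;
          double billablePeak = 0.0;
          for (int h = 0; h < hours.length; h++) {
              double rawTotal = 0.0;
              double adjustedTotal = 0.0;
              for (double[] lpar : hours[h]) {
                  rawTotal += lpar[0];
                  adjustedTotal += lpar[0] - (0.6 * lpar[1]); // subtract 60% of mobile MSU
              }
              System.out.printf("Hour %d: raw %.0f MSU, adjusted %.0f MSU%n",
                  h + 1, rawTotal, adjustedTotal);
              rawPeak = Math.max(rawPeak, rawTotal);
              billablePeak = Math.max(billablePeak, adjustedTotal);
          }
          System.out.printf("Peak: raw %.0f MSU versus billable %.0f MSU%n", rawPeak, billablePeak);
      }
  }

In this illustration the billable peak falls from 630 MSU (raw) to 537 MSU (adjusted), which is precisely the MWP benefit for workloads with mobile-driven spikes.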

Most committed zSeries Mainframe customers will be deploying CICS, DB2 and WebSphere software, while IT trends dictate that mobile device usage (I.E. Smartphone, Tablet, et al) is increasing.  Therefore most z/OS applications that require such mobile access have evolved accordingly over time.  Hence it seems to be one of those “No Brainer” type scenarios, where the Mainframe user should plan to benefit from MWP, either as they upgrade to the latest zSeries technology, namely zEC12 or zBC12, or immediately if already deploying a zEC12 or zBC12 server.

The only minor consideration is a requirement for the zEC12 or zBC12 customer to engage their local IBM account team, to determine what data they need to report on mobile transactions for MWP consideration.  This one-off task will deliver optimized WLC pricing forever more.

Of course IBM are encouraging customers to consider the Mainframe for new applications, driven by mobile transaction requirements.  Equally, there is no reason why longer term Mainframe customers can’t benefit from MWP, benefitting from reduced MLC costs, a major consideration of Mainframe TCO.

zIIP Into The Future: Mainframe Specialty Engines Evolution

Sometimes we might lose sight that change can be evolutionary as opposed to revolutionary and this certainly applies to IBM Mainframe specialty engines, for example:

  • 1997: Internal Coupling Facility (ICF)
  • 2000: Integrated Facility for Linux (IFL)
  • 2004: System z Application Assist Processor (zAAP)
  • 2006: System z Integrated Information Processor (zIIP)

To assist with lower IBM software pricing, arguably the ICF offering became the de facto standard for a Mainframe user to be considered “actively coupled”; namely, deploying two or more eligible IBM Mainframes, physically attached via coupling links to a common Coupling Facility (I.E. ICF).

The Integrated Facility for Linux (IFL) is a processor dedicated to Linux workloads on IBM System z servers.  The IFL is supported by the z/VM virtualization software and the Linux operating system.  Most customers have at least dabbled into this technology, while some are using this technology extensively, primarily for distributed server consolidation.

Somehow the zAAP specialty engine has become the “black sheep” of the family, where the current zEC12 and zBC12 are planned to be the last System z servers to offer support for zAAP specialty engine processors.

As of z/OS V1.11, functionality was delivered enabling zAAP eligible workloads to run on zIIP engines.  This function allowed both zIIP & zAAP-eligible workloads to process on zIIP.  This capability was ideal for customers with insufficient zAAP or zIIP eligible workload to justify a specialty engine, whereas the combined eligible workloads increase the ROI metrics for zIIP deployment.  The zAAP specialty engine was primarily targeted at web-based applications and SOA-based technologies, namely Java and XML.

So for z/OS type workloads, we must “zIIP Into The Future”…

Sometimes we need to look at the big picture, where the IBM organization is comprised of many business units, including the Mainframe business unit.  The Mainframe business unit itself contains many groups, including, but not limited to, the Hardware and Software groups.

As we all know, z/OS software TCO is significant and so this translates into higher revenues for the IBM Mainframe software group; but what about the IBM Mainframe hardware group?  Perhaps the specialty engines, primarily in the form of zIIP, will generate a revenue stream for this business unit.  Along with the introduction of zBC12 & zEC12 servers, IBM increased the zIIP to General Purpose (CP) engine ratio to 2:1, meaning you can have two zIIP specialty engines for each associated CP engine.  Previously the maximum ratio allowed was 1:1 (Specialty:CP).

What workloads are zIIP eligible?  Over time and since 2006 the amount of workload that is zIIP eligible has increased, primarily due to software development and upgrade efforts of IBM and the 3rd party ISV community:

  • DB2 for z/OS exploits the zIIP capability for portions of eligible data serving, pureXML and utility workloads
  • Other 3rd party DBMS solutions, including ADABAS & IDMS offload workload to zIIP
  • Most Systems Management tools (E.g. OMEGAMON, MAINVIEW, RMF, SYSVIEW, et al)
  • z/OS XML System Services for eligible XML validating and non-validating workloads
  • Other z/OS functions including z/OS Communications Server, Global Mirror, CIM Server, et al

What are the benefits of deploying a zIIP specialty engine?

  • Lower acquisition and maintenance costs, when compared with general CP
  • zIIP engines run at full rated CP speed
  • Offload work (CPU) from General Purpose (CP) engines
  • No cost for Sub-Capacity eligible IBM software (I.E. WLC)

So, one must draw one’s own conclusions, but seemingly the deployment of zIIP engines is a “no brainer”!

Hmmm, once again, evolution is a good thing and the zIIP engine has an 8 year history and its predecessor zAAP, a 10 year history.  This ~10 year period has allowed for user experiences and IBM function developments to evolve a more stable and rounded offering and as previously stated, a product for the IBM Mainframe Hardware group to focus upon.

From a customer viewpoint, zIIP deployment requires a Capacity Planning evolution, which should be reasonably straightforward.  The big difference is the CP to zIIP offload consideration and some of the lessons learned include (a simple offload sketch follows this list):

  • Software costs – Multiple-Processors; CP to zIIP Offload Rate; zIIP utilization
  • Hardware costs – Installed Books (total MSU/MIPS capacity); Additional LPAR(s)
  • Peak CPU utilization – Safeguard that zIIP exploitation reduces peak CPU usage
  • CPU per Transaction – Slight increase in CPU (not necessarily elapsed time) as workload switches from CP to zIIP
  • zIIP utilization – Early experiences indicate ~50% zIIP engine busy is a good number
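
To make the offload consideration above concrete, here is a minimal sketch with entirely hypothetical workload figures; the slight CPU per transaction increase and the ~50% zIIP busy guideline from this list appear as illustrative constants:

  // Hypothetical sketch: estimating General Purpose (CP) MSU relief from zIIP
  // offload, with a slight CPU increase as work switches from CP to zIIP.
  public class ZiipOffloadSketch {
      public static void main(String[] args) {
          double peakCpMsu = 500.0;       // hypothetical peak CP usage (MSU)
          double eligibleFraction = 0.30; // hypothetical zIIP-eligible workload share
          double ziipCapacityMsu = 300.0; // hypothetical installed zIIP capacity

          double offloadedMsu = peakCpMsu * eligibleFraction;
          double ziipMsu = offloadedMsu * 1.05; // ~5% CPU per transaction increase

          double newPeakCpMsu = peakCpMsu - offloadedMsu;
          double ziipBusy = (ziipMsu / ziipCapacityMsu) * 100.0;

          System.out.printf("Peak CP MSU: %.0f -> %.0f after zIIP offload%n",
              peakCpMsu, newPeakCpMsu);
          System.out.printf("zIIP busy: %.1f%% (~50%% is a reasonable early-experience target)%n",
              ziipBusy);
      }
  }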

In conclusion, zIIP deployment has been gradual and evolutionary, but many factors indicate that zIIP is here to stay and it is the future.  Seemingly from an IBM viewpoint, with benefit for the Mainframe Hardware Group in terms of the eradication of the zAAP engine, the increase in CP:zIIP ratio to 2:1 and the associated customer benefits of Sub-Capacity software pricing.  From a customer viewpoint, ignoring these pointers might not be wise, as z/OS software costs are significant and CPU resource requirements keep increasing.  Adding extra zIIP CPU capacity reduces hardware and associated software costs and so this is the “no brainer” observation that can’t be ignored for much longer…

The IBM Mainframe – 50 Years & Counting

On 7 April 1964 IBM announced the System/360, which is now recognized as the first IBM Mainframe computer system.  IBM Board Chairman Thomas J. Watson Jr. called the event the most important product announcement in the company’s history.  At a press conference at the IBM Poughkeepsie facilities, Mr. Watson said:

“System/360 represents a sharp departure from concepts of the past in designing and building computers. It is the product of an international effort in IBM’s laboratories and plants and is the first time IBM has redesigned the basic internal architecture of its computers in a decade. The result will be more computer productivity at lower cost than ever before. This is the beginning of a new generation, not only of computers, but of their application in business, science and government.”

More than 100,000 businessmen in 165 American cities attended meetings that day at which System/360 was announced.  50 years later, I wonder whether there are 100,000 people that work with the IBM Mainframe in The USA and maybe globally…

During this 50 year evolution, the IBM Mainframe has seen opinion polarize, sometimes from the same person:

  • In March 1991, Stewart Alsop stated “I predict that the last mainframe will be unplugged on March 15, 1996.”
  • In February 2002, Stewart Alsop stated “It’s clear that corporate customers still like to have centrally controlled, very predictable, reliable computing systems, exactly the kind of systems that IBM specializes in.”

Obviously the IBM Mainframe server is still here and, just as in 1964, it continued to evolve; in the early 1990’s it became just another server on the distributed network, supporting routers, incorporating POSIX compliance and so on…

As we all know, the IBM Mainframe has always evolved, continues to evolve and in theory, and often in real-life, can run any workload.

Let’s reprise some of the notable IBM Mainframe models and associated functions since April 1964:

  • S/360 (April 1964): 24-bit addressing (32-bit architecture)
  • S/360 (August 1965): Virtual storage
  • S/360 (January 1968): High speed cache
  • S/370 (June 1970): Disk & printer support
  • S/370 (August 1972): Virtual storage & multi-processor support
  • S/370 XA (June 1983): Extended storage, 24-bit/31-bit addressing
  • S/390 ESA (September 1990): ESA & OS/390 operating systems
  • zSeries, z/Architecture (October 2000): z operating systems, 24/31/64-bit addressing supported concurrently
  • zSeries z9 EC (July 2005): zIIP specialty engine
  • zSeries z10 EC (February 2008): High capacity/performance (quad core CPU chip)
  • z196, zEnterprise (July 2010): 96-way core design & distributed systems integration (zBX)
  • zEC12 (August 2012): Integrated platform for cloud computing, integrated OLTP & data warehousing

It’s interesting to note that the purchase price of an IBM mainframe is about the same, comparing 1964 to 2014, let’s say ~$100,000.  Of course, you can’t compare the feeds and speeds of these machines, they’re exponentially different.  However, just as the S/360 in 1964 played a pivotal part in shaping data processing for that decade, subsequent evolutions of the IBM Mainframe follow in that tradition, lowering the cost of IT and simplifying business management.

I’m sure a lot of us have enjoyed our time working with the IBM Mainframe server and long may that be the case, for future generations of IT professionals.

Cloudy With A Chance Of Mainframe?

With the advent of Computer Generated Imagery (CGI) there is seemingly no end to the number of books, especially “children’s” books that can be encapsulated and delivered in animated movie format.  I’m always surprised and arguably never surprised by the messaging in these stories; supposedly written for the younger person, but invariably delivering a message of good morals, ethics and human qualities, typically finding creative solutions to a myriad of problems.  Of course, we’re all human, and typically as human beings, we’re responsible for the majority of our problems, either knowingly, or not.

Cloudy with a Chance of Meatballs is a book based on a town named Chewandswallow characterized by its strange daily meteorological pattern, providing townsfolk with all of their required daily meals by raining food.  Although the residents of the town enjoy a lifestyle devoid of any grocery shopping or cookery, the weather unexpectedly and inexplicably takes a turn for the worse, devastating the local community with destructive and uncontrollable storms of either unpleasant or dangerously oversized foods, resulting in unstoppable catastrophes for the townspeople.  Their lives endangered by the threats of the storms, they relocate to a different community of average meteorological patterns, safe from the hazards that once were presented by raining meals.  However, they are forced to learn how to obtain food the normal way.

So what?  Continuing with the creativity thought, the ethos of this story might be somewhat analogous to the sometimes polarized opinion between Distributed Systems and Mainframe computing.  So depending on your philosophical bent, or which side of the fence you sit on, there is only one choice, even if this seemingly perfect and de facto world generates significant challenges…

Recently, z/OS 2.1 became Generally Available (GA) and most notable from my viewpoint was its continued and demonstrable ability to participate in cloud computing environments.  So is the IBM Mainframe ready for the cloud?  Wasn’t it always!

The fundamental ethos of the Mainframe environment is virtualization, and it was forever thus.  The Mainframe has always shared the basic IT architecture components, including CPU, Memory, Storage, Networking and other peripherals, originally in a physical single-image structure, but since the late 1990’s in a shared (SYSPLEX) complex of interconnected physical servers (CPCs).  So the Mainframe is, and always has been, ready for “Prime Time Cloud”!

z/OS V2.1 is a platform designed to dynamically respond and scale to workload change with enhancements to scalability and performance that cover operations, I/O, virtual storage constraint relief, memory management, and more.  These enhancements are suitable for organizations that would like to catalyse a journey to highly scalable virtualized solutions like cloud.

IBM delivers improved scalability and performance for outstanding throughput and service within existing Mainframe environments.  Smarter scalability can better prepare the user for growth and spikes in workloads while maintaining the qualities of service and balanced design that customers have come to expect of the IBM mainframe.

As customers consider all the components of downtime, the true costs can be surprising, which is why superior availability continues to remain a key factor in platform selection. With z/OS V2.1, IBM introduces new capabilities designed to improve upon the already legendary z/OS system availability.  The industry-leading resiliency and high availability of System z remain key reasons why organizations keep their most critical processing on System z.  With its attention to outage reduction, the availability of System z and z/OS is well recognized in the industry.  In z/OS V2.1, IBM continues enhancements that improve critical IT systems availability, helping achieve an even higher level of service for customers.

Some of the “cloud friendly” z/OS 2.1 benefits include:

  • Support for Shared Memory Communications-RDMA (SMC-R), for low latency, application transparent communications to help you move data quickly between z/OS images on the same CPC or between CPCs (a configuration sketch follows this list).
  • Flash Express support for certain coupling facility list structures, such as IBM WebSphere MQ for z/OS, V7 (5655-R36), in order to strengthen resiliency for enterprise messaging workload spikes.
  • For zEC12 or zBC12 systems, shared engine coupling facilities can be used in many production environments, for improved economics by offering a high level of performance without requiring the use of dedicated CF engines.
  • EXCP support for System z High-Performance FICON (zHPF) is designed to help improve I/O start rates and improve bandwidth for more workloads on existing hardware and fabric.
  • Usability and performance improvements for z/OS FICON Discovery and Auto Configuration (zDAC), including discovery of directly attached devices.
  • Serial Coupling Facility structure rebuild processing, designed to help improve performance and availability by rebuilding coupling facility structures more quickly and in priority order.
  • 100-way symmetric multiprocessing (SMP) support in a single LPAR on IBM zEC12 or zBC12 systems.  Support for an architectural limit of 4 TB of real memory per LPAR.
  • Support for 2 GB pages is provided on zEC12 and zBC12 systems.  This feature is designed to reduce memory management overhead and improve overall system performance by enabling middleware to use 2 GB pages.  These improvements are expected due to improved effective translation lookaside buffer (TLB) coverage and a reduction in the number of steps the system must perform to translate a 2 GB page virtual address.
  • Capacity Provisioning is designed to provide support for manual and policy-based management of Defined Capacity and Group Capacity.  This function broadens the range of automatic, policy-based responses available to help manage capacity shortage conditions when WLM cannot meet your workload policy goals.
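
As a point of reference, SMC-R is enabled via the z/OS Communications Server TCP/IP profile.  Below is a minimal sketch, assuming a RoCE Express feature; the PCIe Function ID (PFID) value of 0018 is an invented, installation-specific placeholder:

    ; PROFILE.TCPIP statement enabling SMC-R for this TCP/IP stack
    ; (the PFID value below is a hypothetical placeholder)
    GLOBALCONFIG SMCR PFID 0018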

There are numerous new and enhanced functions delivered with z/OS 2.1, too numerous to mention, but categorised as Quality Of Service, Availability, Networking, Security, Data Usability, Integrity, Systems Management, Application Development, Simplification & Usability, International Standards Compliance, et al.

So let’s not forget, this foundation and support for an IT infrastructure and its supporting (software) ecosystem is in one scalable, secure and “zero” downtime environment!

So maybe we, the open-minded and enlightened generation of parents (oops, I forgot, Grandparents for us Dinosaur Mainframe folk!) who can now “access” children’s stories, even in the form of a CGI animated movie, can be dispassionate enough to consider all platforms, Distributed and Mainframe, for our evolving business and associated IT requirements.

So you decide, can it be Cloudy With A Chance Of Mainframe?  To overlook such an option, might be an oversight, just as overlooking the abundance of human stories, classified as children’s books or not…

Mainframe ISV Software: Is Continuous Product Improvement Always Evident?

Ken Venturi once said “I don’t believe you have to be better than everybody else.  I believe you have to be better than you ever thought you could be”.

Wouldn’t it be great if every CTO and/or Product Manager had this same philosophy for their Mainframe software solution?  One such example I have experienced over the years is (E)JES from Phoenix Software International (PSI).  Of course it’s really important to have Day 1 support for the latest release of the Operating System, z/OS 2.1 being the latest example, but what about actually exploiting the latest functionality available with the latest zSeries Mainframe Enterprise Servers and z/OS Operating Systems?

To drive maximum bang from your buck, optimal performance and robust cost optimization can only be achieved by recognizing and exploiting the latest Mainframe function ASAP, as and when appropriate.  Furthermore, listening to your customers, analysing their feedback, actively participating in User Organizations such as SHARE, and so on, will all help in continuous product development and innovation.

Here are some of the reasons why (E)JES has succeeded over a 30+ year period, recognizing and exploiting new z/OS function, as and when the updated z/OS is released for General Availability (GA).  Even today, with Version 5.3 supporting z/OS 2.1 as of Day 1, (E)JES continues to offer value-added function for the seasoned, the inexperienced and, in fact, all IBM Mainframe technicians:

  • 64-bit performance optimizations (I.E. MEMLIMIT: above-the-bar) for both (E)JES client and server components, safeguarding minimal z/OS resource usage.
  • Nearly all (E)JES JES subsystem processing routines are eligible for zIIP redirection, delivering software cost savings for all (E)JES users.  Sub-Capacity System z processor users experience improved (E)JES performance because zIIP engines always run at full speed.  This behaviour differs from that of General Purpose CPs, “throttled” with Sub-Capacity deployments.
  • (E)JES code executes faster via its inbuilt High Performance Routine (HPR) facility, specifically developed to accelerate access to data in JES control blocks.  HPRs have a shorter instruction path length than previous coding techniques, avoiding delays in modern System z CPU instruction pipelines.
  • If High Performance FICON (zHPF) is available, (E)JES uses Transport Mode channel programs for JES Spool I/O.  When zHPF is not available, or when a CAS server performs I/O against the global data set, (E)JES uses the highest-performing Command Mode channel programs currently available.  These channel programs perform I/O significantly faster than “ordinary” channel programs.
  • The use of 24-bit (captured) UCBs puts a strain on the 24-bit virtual storage resource, while the use of ordinary (non-extended) TIOT entries limits the total number of allocations that can exist simultaneously in an address space.  (E)JES supports and uses 31-bit (uncaptured) UCBs and the extended TIOT (XTIOT) function (I.E. NON_VSAM_XTIOT=YES in DEVSUPxx PARMLIB; see the configuration sketch after this list).
  • (E)JES supports placement of JES spool data sets in the cylinder-managed area of an Extended Address Volume (EAV).  Of course, as of z/OS 1.12, EAV increases 3390 DASD capacity to ~1 TB.
  • (E)JES Pattern Utility Matching uses the SRST hardware instruction.  Empirical measurements show this technique is far faster on modern System z processors than alternatives such as the TRT instruction or “brute force” matching techniques using CLI/CLC.
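
For clarity, the XTIOT support referenced above is enabled system-wide via the DEVSUPxx PARMLIB member, using the statement quoted in the list.  A minimal sketch follows, where the member suffix (01) is an arbitrary assumption for illustration and your installation naming will differ:

    SYS1.PARMLIB(DEVSUP01):
    NON_VSAM_XTIOT=YES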

One of the primary benefits of upgrading IBM z/OS software is the overall system performance benefit and associated cost reduction, but of course, IBM can only deliver the function and ability, while it’s incumbent upon the ISV community to upgrade their software products accordingly.  A key goal for any good ISV software product is to try to provide a value-add in the area of performance.  This has been one of the primary areas of focus for (E)JES since its introduction in 1978. 

Most spool display and management products tend to rely on the most resource-intensive interface available, namely the JES subsystem provided SSI 80.  (E)JES benchmarking tests against the most readily-available JES SSI 80 exploiters demonstrate significant CPU savings when deploying (E)JES.

Software products also need to deliver continuous improvements with regard to usability, presentation and in-built function, increasing user and system administrator productivity.  Without doubt, optimization encompasses not just hardware, but software, services, systems management disciplines and “best practices” that tie it all together.  Here are some of the usability enhancements that (E)JES has incorporated:

  • ISPF users running a 3270 emulator on a programmable workstation can now search IBM Eclipse-based InfoCenters via (E)JES.  Although (E)JES fully supports BookManager format documentation, BookManager READ/MVS is now obsolete; beginning with z/OS 2.1, BookManager softcopy books are no longer delivered by IBM.  IBM has stated that InfoCenters, and eventually KnowledgeCenters, are their strategic direction for online documentation.
  • (E)JES Web is a new, browser-based interface to (E)JES.  The associated RESTful API delivering this web-enabled technology provides a framework for the creation of Eclipse plug-ins, mobile applications, and other web services clients (a client sketch follows this list).  This facility will provide a “rapid learning” capability for (E)JES users, both new and old, who might be uncomfortable navigating traditional 3270 interfaces.
  • (E)JES provides a Java Application Programming Interface (API), complementing other in-built APIs for REXX and procedural languages.  By using an (E)JES API, a user can harness the versatility of their preferred programming language to interface and interact with (E)JES.  This support provides an interface to deliver nearly all of the capabilities available to an interactive (E)JES user.
  • (E)JES incorporates context sensitive help function, with point-and-shoot/pop-up dialogs, helping educate users on (E)JES, JES and z/OS while they work.  Users can get pop-up explanations of columns, input choices for unprotected fields, and a list of line commands.  Smart pop-ups explain the contents of certain columns, such as system abend codes.
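
To illustrate the kind of client such a RESTful API enables, below is a minimal Java sketch using the standard java.net.http client.  The host name, path and query parameter are invented for illustration only and are not the documented (E)JES Web routes; consult the (E)JES documentation for actual endpoints, payloads and authentication requirements:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class EjesWebClientSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint and query; not the documented (E)JES Web API.
            URI uri = URI.create("https://mainframe.example.com/ejes/api/jobs?owner=MYUSER");

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(uri)
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            // Issue the request and display the HTTP status code and JSON body.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }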

The latest (E)JES Release Information Manual eloquently details the product enhancements over the last 5 releases or so, providing a good Product Roadmap reference point.

So, whether the ISV software product you deploy has been available for several years or several decades, do you safeguard maximum business benefit for optimal cost by considering:

  • Does the ISV deploy the latest zSeries server (I.E. zBC12, zEC12) for software interoperability and full hardware function exploitation; or an emulation (I.E. zPDT) technique?
  • Does the ISV deliver value-added z/OS related function on Day 1 or even within a year of the latest z/OS release?
  • Does the ISV deliver meaningful function to assist your users deploy said function, while simplifying environment management for system administrators?
  • Does your ISV product optimize cost, with Sub-Capacity pricing in MSU increments, aggregated MSU costs for your entire zSeries Mainframe environment, as opposed to specific workloads (E.g. CPCs, LPARs, et al)?
  • Does your ISV product optimize cost by offloading the majority of its CPU function to zIIP specialty engines, which run at maximum speed, and where software “runs for free”?

Of course, only you can ask and potentially answer these questions during your day-to-day activities of maintaining currency and optimal performance for your Mainframe software portfolio.

Sometimes the hardest questions anybody can ask are the questions they ask themselves, which are never rhetorical questions!  Extracted verbatim from the latest (E)JES Release Information Manual:

Team (E)JES took advantage of the Phoenix Software International zHISR performance analysis product to discover performance “hot spots” in the (E)JES product.  Sometimes the simplest, least conspicuous piece of code turns out to be a major CPU contributor.  Below are some of the most embarrassing “surprise” hot spots discovered using zHISR in a z/OS 2.1 LPAR:

  • Over 30% of the CPU used during a Spool Data Browse FIND operation, against a multi-million-line SYSOUT in JES2, turned out to be code that was clearing a record buffer to blanks using MVCL.  This clearing code was eliminated and some minor adjustments were made in other code to compensate for this change (an illustrative sketch follows this list).
  • 27% of the CPU used to produce the Activity display in JES2 turned out to be in a routine that manages an internal resource called the “Job Positions Table.”  The algorithm was improved (to work more like its JES3 counterpart) and that routine is no longer a significant CPU contributor.
  • 9% of (E)JES session start-up CPU was consumed by a 26-year-old “brute force” prime number generator used to compute the size of a hash table.  That code was totally reworked and now accounts for approximately 0.02% of session start-up CPU.
  • A 6% performance penalty was observed when sorting a tabular display with a moderate number of rows.  The hot spot turned out to be the code that cleared the work area for the sort service to zeros (another MVCL).  This overhead was reduced to 0.04%.
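
For illustration only, here is a minimal Java analogue of the first and last hot spots above; the actual (E)JES code is assembler and its fix differed in detail.  The point is that an unconditional full-buffer clear inside a hot loop (the MVCL equivalent) is pure overhead when subsequent processing only ever reads the bytes just copied in:

    import java.util.Arrays;

    public class BufferClearHotSpot {
        private static final int BUFFER_LEN = 32760; // maximum record buffer size

        // Anti-pattern: clear the whole buffer to blanks for every record,
        // even though the scan below never reads past record.length bytes.
        static int findNaive(byte[][] records, byte target) {
            byte[] buffer = new byte[BUFFER_LEN];
            int hits = 0;
            for (byte[] record : records) {
                Arrays.fill(buffer, (byte) 0x40); // EBCDIC blank; O(BUFFER_LEN) per record
                System.arraycopy(record, 0, buffer, 0, record.length);
                for (int i = 0; i < record.length; i++) {
                    if (buffer[i] == target) hits++;
                }
            }
            return hits;
        }

        // Improved: the clearing code is eliminated because the scan is
        // bounded by the record length, so stale bytes are never read.
        static int findImproved(byte[][] records, byte target) {
            byte[] buffer = new byte[BUFFER_LEN];
            int hits = 0;
            for (byte[] record : records) {
                System.arraycopy(record, 0, buffer, 0, record.length);
                for (int i = 0; i < record.length; i++) {
                    if (buffer[i] == target) hits++;
                }
            }
            return hits;
        }
    }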

Mea culpa and humility, never a bad thing, but you have to be honest with yourself and ask yourself the right questions!  So going back full circle and quoting Ken Venturi once again, “I don’t believe you have to be better than everybody else.  I believe you have to be better than you ever thought you could be”.  You must draw your own conclusions as to whether such an observation applies to the (E)JES team at Phoenix Software International (PSI)…

Why not ask them yourself?  Ed Jaffe, the (E)JES CTO, will be available at the forthcoming UK GSE Annual Conference, 5-6 November 2013, speaking on “(E)JES System Management Software: More With Less For Less, For The z/OS Mainframe” and “z/OS 2.1 User Experiences”.