The Open Systems Adapter (OSA): Delivering ~25 Years of IBM Mainframe IP Connectivity

Recently in my day-to-day activities I encountered a 3172 controller and was reminded of my first such encounter, back in 1992.  This got me thinking; 25 years of IBM Mainframe IP connectivity!  The IBM 3172 Interconnect Controller allowed LAN-to-Mainframe interconnection and was the pioneering technology enabling IP data off-load activities.  Historically, Mainframe data transfer operations, namely CCW I/O, were dependent on a physical channel, where the 3172 was a stepping stone to the Open Systems Adapter (OSA) card in 1994, quickly superseded by the OSA-2 card in 1995.  From a performance viewpoint, the OSA/OSA-2 cards matched maximum ESCON speeds of 17 MB/s.

However, the introduction of the OSA-Express technology in 1999 dramatically increased throughput to ~333 MB/s.  The OSA-Express technology bypasses CCW channel-based I/O processing, connecting directly to the Self-Timed Inter-connect (STI) bus of Generation 6 (retrofit to Generation 5) S/390 Mainframes.  Data is transferred directly between high-speed Mainframe memory and the OSA-Express adapter I/O port, across the STI bus, with no intervening components or processing to slow down the data interchange.  This bus-based I/O, a first for IBM Mainframe computing, significantly increases data transfer speeds, eliminating inefficiencies associated with intermediary components.

Additionally, IBM developed a totally new I/O scheme for the OSA-Express adapter.  Queued Direct I/O (QDIO) is a highly optimized queuing-based data interchange mechanism, leveraging from the message queuing expertise IBM acquired with their multi-platform MQSeries middleware solution.  The QDIO-specific S/390 hardware instruction for G5/G6 machines delivered an application-to-OSA signalling scheme capable of handling the high-volume, multimedia data transfer requirements of 21st Century web applications.  Where might we be without the 3172 Interconnect Controller and the MQSeries messaging solution?
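
As a purely conceptual illustration, and certainly not actual QDIO internals, the sketch below contrasts the queue-based idea with per-operation channel programs: the TCP/IP stack simply places data on a shared queue that the adapter drains asynchronously, with no request/response exchange per packet.

```python
import queue
import threading

# Shared outbound queue, loosely analogous to a QDIO queue pair in shared memory
outbound = queue.Queue()

def osa_adapter():
    """Consumer loop: drain the outbound queue as data arrives (no per-packet channel program)."""
    while True:
        packet = outbound.get()
        if packet is None:          # shutdown signal
            break
        print(f"adapter transmitting: {packet}")
        outbound.task_done()

adapter_thread = threading.Thread(target=osa_adapter, daemon=True)
adapter_thread.start()

# The TCP/IP stack simply enqueues data; the adapter is signalled once, not per operation
for packet in ("SYN", "ACK", "HTTP GET /"):
    outbound.put(packet)

outbound.join()       # wait until the adapter has drained the queue
outbound.put(None)    # tell the consumer loop to stop
adapter_thread.join()
```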

Since OSA-Express2 the supported channel types have largely remained unchanged:

  • OSD: Queued Direct I/O (QDIO), a highly efficient data transfer architecture, dramatically improving IP data transfer speed and efficiency.
  • OSE: Non-QDIO, sets the OSA-Express card to function in non-QDIO mode, bypassing all of the advanced QDIO functions.
  • OSC: OSA-ICC, available with IBM Mainframes supporting GbE, eliminating the requirement for an external console controller, simplifying HMC and z/OS system console access, while introducing TN3270E connectivity.
  • OSN: OSA for NCP, supporting the Network Control Program (NCP) running under IBM Communication Controller for Linux (CCL), eradicating the requirement for 3745/3746 Front End Processor hardware.  Superseded by:
  • OSM: (OSA-Express for zManager), provides Intranode Management Network (INMN) connectivity from System z to zManager functions.
  • OSX: (OSA-Express for zBX), provides connectivity and access control to the IntraEnsemble Data Network (IEDN), connecting System z to the zBX under control of the Unified Resource Manager (URM) function.

Returning to my original observation, it's sometimes hard to reconcile finding a ~25 year old 3172 Controller in a Data Centre environment, preparing for a z14 upgrade!  In conjunction with the z14 announcement, OSA-Express6S promised an Ethernet technology refresh for use in the PCIe I/O drawer and continues to be supported by the 16 GBps PCIe Gen3 host bus.  The 1000BASE-T Ethernet feature supports copper connectivity, while 10 Gigabit Ethernet (10 GbE) and Gigabit Ethernet (GbE) features support single-mode and multi-mode fibre optic environments.  The OSA-Express6S 1000BASE-T feature will be the last generation to support 100 Mbps link speed connections.  Future OSA-Express 1000BASE-T features will only support 1 Gbps link speed operation.

Of course, OSA-Express technology exposes the IBM Z Mainframe to the same security challenges as any other server node on the IP network; as well as talking about Pervasive Encryption with this customer, we also talked about the increased security features of the OSA-Express6S adapter:

  • OSA-ICC Support for Secure Sockets Layer: when configured as an integrated console controller CHPID type (OSC) on the z14, supports the configuration and enablement of secure connections using the Transport Layer Security (TLS) protocol versions 1.0, 1.1 and 1.2.  Server-side authentication is supported using either a self-signed certificate or a customer supplied certificate, which can be signed by a customer-specified certificate authority.  The certificates used must have an RSA key length of 2048 bits and must be signed using SHA-256.  This function negotiates an AES-128 cipher suite for the session key (see the sketch after this list).
  • Virtual Local Area Network (VLAN): takes advantage of the Institute of Electrical and Electronics Engineers (IEEE) 802.1Q standard for virtual bridged LANs.  VLANs allow easier administration of logical groups of stations that communicate as though they were on the same LAN.  In the virtualized environment of the IBM Z server, many TCP/IP stacks can exist, potentially sharing OSA-Express features.  VLAN provides a greater degree of isolation by allowing contact with a server from only the set of stations that comprise the VLAN.
  • QDIO Data Connection Isolation: provides a mechanism for security regulatory compliance (E.g. HIPAA), delivering network isolation between the instances that share physical network connectivity, as per installation defined security zone boundaries.  It isolates a QDIO data connection on an OSA port by forcing traffic to flow to the external network, safeguarding that all communication flows only between an operating system and the external network.  This feature is provided with implementation flexibility for both the z/VM and z/OS operating systems.
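
As a simple illustration of the OSA-ICC TLS capability described above, the sketch below uses Python's standard ssl module to confirm that a TN3270E endpoint negotiates TLS 1.2 and to report the agreed cipher; the host name, port and certificate file are hypothetical placeholders.

```python
import socket
import ssl

# Hypothetical OSA-ICC TN3270E endpoint; substitute your own host and secure port
OSC_HOST = "osaicc.example.com"
OSC_PORT = 2023

# Build a client context that insists on TLS 1.2 as a minimum, mirroring the OSA-ICC configuration
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2
# Trust the customer-supplied or self-signed certificate exported from the HMC (hypothetical file name)
context.load_verify_locations(cafile="osa_icc_ca.pem")

with socket.create_connection((OSC_HOST, OSC_PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=OSC_HOST) as tls:
        # Expect an AES-128 suite, e.g. ('TLS_RSA_WITH_AES_128_CBC_SHA256', 'TLSv1.2', 128)
        print("Negotiated cipher:", tls.cipher())
        print("Server certificate subject:", tls.getpeercert().get("subject"))
```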

As always, the single-footprint capability of an IBM Z server must be considered.  From a base architectural OSA design viewpoint, OSA supports 640 TCP/IP stacks or connections per dedicated CHPID, or 640 total stacks across multiple LPARs using a shared or spanned CHPID.  Obviously this allows the IBM Mainframe user to support more Linux images.  Of course, this is a very important consideration when evaluating the latest z13 and z14 servers for Distributed Systems workload consolidation.

In conclusion, never underestimate the value of the OSA-Express adapter in your organization and its role in transitioning the IBM Mainframe from a closed proprietary environment in the early 1990's, to just another node on the IP network, from the mid-1990's to the present day.  As per any other major technology for the IBM Z server, the OSA-Express adapter has evolved to provide the requisite capacity, performance, resilience and security attributes expected for an Enterprise Class workload.  Finally, let's not lose sight of the technology commonality associated with OSA-Express and Crypto Express adapters; clearly, fundamental building blocks of Pervasive Encryption…

IBM z14: Pervasive Encryption & Container Pricing

On 17 July 2017 IBM announced the z14 server as “the next generation of the world’s most powerful transaction system, capable of running more than 12 billion encrypted transactions per day.  The new system also introduces a breakthrough encryption engine that, for the first time, makes it possible to pervasively encrypt data associated with any application, cloud service or database all the time”.

At first glance, a cursory review of the z14 announcement might just appear as another server upgrade release, but that could be a costly mistake by the reader.  There are always subtle nuances in any technology announcement, while finding them and applying them to your own business can sometimes be a challenge.  In this particular instance, perhaps one might consider "Persuasive Encryption & Contained Pricing"…

When IBM releases a new generation of z Systems server, many of us look to the "feeds and speeds" data and ponder how that might influence our performance and capacity profiles.  IBM state that average z14 performance, when compared with the z13, increases by ~10% for 6-way servers and larger.  As per usual, there are software Technology Transition Offering (TTO) discounts ranging from 6% to 21% for z14 only sites.  However, in these times where workload profiles are rapidly changing and evolving, it's sometimes easy to overlook that IBM have to consider the holistic position of the IBM Z world.  Quite simply, IBM has many divisions, Hardware, Software, Services, et al.  Therefore there has to be interaction between the hardware and software divisions and in this instance, IBM have delivered a z14 server that is security focussed, with their Pervasive Encryption functionality.

Pervasive Encryption provides a simple and transparent approach for z Systems security, enabling the highest levels of data encryption for all data usage scenarios, for example:

  • Processing: When retrieved from files and processed by applications
  • In Flight: When being transmitted over internal and external networks
  • At Rest: When stored in database structures or files
  • In Store: When stored in magnetic storage media

Pervasive Encryption simplifies and reduces the costs associated with protecting data by policy (I.E. Subset) or En Masse (I.E. All Of The Data, All Of the Time), achieving compliance mandates.  When considering the EU GDPR (European Union General Data Protection Regulation) compliance mandate, companies must notify relevant parties within 72 hours of first having become aware of a personal data breach.  Additionally organizations can be fined up to 4% of annual global turnover or €20 Million (whichever is greater) for any GDPR breach, unless they can demonstrate that data was encrypted and keys were protected.
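
As a simple worked example of the GDPR exposure described above, the short Python sketch below computes the maximum potential fine for a hypothetical annual global turnover; the turnover figure is illustrative only.

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Return the greater of 4% of annual global turnover or a flat EUR 20 Million."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000.0)

# Hypothetical organization with EUR 2 Billion annual global turnover
turnover = 2_000_000_000.0
print(f"Maximum potential GDPR fine: EUR {max_gdpr_fine(turnover):,.0f}")  # EUR 80,000,000
```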

To facilitate this new approach for encryption, the IBM z14 infrastructure incorporates several new capabilities integrated throughout the technology stack, including Hardware, Operating System and Middleware.  Integrated CPU chip cryptographic acceleration is enhanced, delivering ~600% increased performance when compared with its z13 predecessor and ~20 times faster than competitive server platforms.  File and data set encryption is optimized within the Operating Systems (I.E. z/OS), safeguarding transparent and optimized encryption, not impacting application functionality or performance.  Middleware software subsystems including DB2 and IMS leverage from these Pervasive Encryption techniques, safeguarding that High Availability databases can be transitioned to full encryption without stopping the database, application or subsystem.

Arguably IBM had to deliver this type of security functionality for its top tier z Systems customers, as inevitably they would be impacted by compliance mandates such as GDPR.  Conversely, the opportunity to address the majority of external hacking scenarios with one common approach is an attractive proposition.  However, as always, the devil is in the detail, and given an impending deadline date of May 2018 for GDPR compliance, I wonder how many z Systems customers could implement the requisite z14 hardware and related Operating System (I.E. z/OS) and Subsystem (I.E. CICS, DB2, IMS, MQ, et al) upgrades before this date?  From a bigger picture viewpoint, Pervasive Encryption does offer the requisite functionality to apply a generic end-to-end process for securing all data, especially Mission Critical data…

Previously we have considered the complexity of IBM z Systems pricing mechanisms and in theory, the z14 announcement tried to simplify some of these challenges by building upon and formalizing Container Pricing.  Container Pricing is intended to greatly simplify software pricing for qualified collocated workloads, whether collocated with other existing workloads on the same LPAR, deployed in a separate LPAR or across multiple LPARs.  Container pricing allows the specified workload to be separately priced based on a variety of metrics.  New approved z/OS workloads can be deployed collocated with other sub-capacity products (I.E. CICS, DB2, IMS, MQ, z/OS) without impacting cost profiles of existing workloads.

As per most new IBM z Systems pricing mechanisms of late, there is a commercial collaboration and exchange required between IBM and their customer.  Once a Container Pricing solution is agreed between IBM and their customer, for an agreed price, an IBM Sales order is initiated, triggering the creation of an Approved Solution ID.  The IBM provided solution ID is a 64-character string representing an approved workload with an entitled MSU capacity, representing a Full Capacity Pricing Container used for billing purposes.

Previously we considered the importance of WLM for managing z/OS workloads and its interaction with soft-capping, and this is reinforced with this latest IBM Container Pricing mechanism.  The z/OS Workload Manager (WLM) enables Container Pricing using a resource classified as the Tenant Resource Group (TRG), defining the workload in terms of address spaces and independent enclaves.  The TRG, combined with a unique Approved Solution ID, represents the IBM approved solution.  As per standard SCRT processing, workload instrumentation data is collected, safeguarding that this workload profile does not directly impact the traditional peak LPAR Rolling Four-Hour Average (R4HA).  The TRG also allows the workload to be metered and optionally capped, independent of other workloads that are running collocated in the LPAR.

MSU utilization of the defined workload is recorded by WLM and RMF, subsequently processed by SCRT to subtract the solution MSU capacity from the LPAR R4HA.  The solution can then be priced independently, based on the MSU resource consumed by the workload, or based upon other non-MSU values, specifically a Business Value Metric (E.g. Number of Payments).  Therefore Container Pricing is much simpler and much more flexible than previous IBM collocated workload mechanisms, namely IWP and zCAP.
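
A minimal sketch of the billing arithmetic described above, assuming illustrative hourly R4HA figures for an LPAR and its Tenant Resource Group (TRG) workload; SCRT works from far more granular instrumentation data, so the numbers below are purely for illustration.

```python
# Illustrative hourly samples: (LPAR R4HA MSU, Tenant Resource Group R4HA MSU)
samples = [
    (410, 60),   # batch window
    (530, 85),   # online peak
    (480, 90),
    (390, 40),
]

# Chargeable peak if the container workload were billed within the traditional LPAR R4HA
peak_without_container_pricing = max(lpar for lpar, _ in samples)

# Chargeable peak once SCRT subtracts the solution (TRG) MSU from the LPAR R4HA
peak_with_container_pricing = max(lpar - trg for lpar, trg in samples)

print(f"Traditional R4HA peak: {peak_without_container_pricing} MSU")
print(f"R4HA peak net of container workload: {peak_with_container_pricing} MSU")
```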

Container Pricing eliminates the requirement to commission specific new environments to optimize MLC pricing.  By deploying a standard IBM process framework, new workloads can be commissioned without impacting the R4HA of collocated workloads, being deployed as per business requirements, whether on the same LPAR, a separate LPAR, or dispersed across multiple LPAR structures.  Quite simply, the standard IBM process framework is the Approved Solution ID, associating the client based z/OS system environment to the associated IBM sales contract.

In this first iteration release associated with the z14 announcement, Container Pricing can be deployed in the following three solution based scenarios:

  • Application Development and Test Solution: Add up to 3 times more capacity to existing Development and Test environments without any additional monthly licensing costs; or create new LPAR environments with competitive pricing.
  • New Application Solution: Add new z/OS microservices or applications, priced individually without impacting the cost of other workloads on the same system.
  • Payments Pricing Solution: A single agreed value based price for software plus hardware or just software, via a number of payments processed metric, based on IBM Financial Transaction Manager (FTM) software.

IBM state z14 support for a maximum 2 million Docker containers in an associated maximum 32 TB memory configuration.  In conjunction with other I/O enhancements, IBM state a z14 performance increase of ~300%, when compared with its z13 predecessor.  Historically the IBM Z platform was never envisaged as being the ideal container platform.  However, its ability to seamlessly support z/OS and Linux, while the majority of mission critical Systems Of Record (SOR) data resides on IBM Z platforms, might just be a compelling case for microservices to be processed on the IBM Z platform, minimizing any data latency transfer.

Container Pricing for z/OS is somewhat analogous to the IBM Cloud Managed Services on z Systems pricing model (I.E. CPU consumption based).  Therefore, if monthly R4HA peak processing is driven by an OLTP application, or any other workload for that matter, any additional unused capacity in that specific SCRT reporting month can be allocated for no cost to other workloads.  Therefore z/OS customers will be able to take advantage of this approach, processing collocated microservices or applications for a zero or nominal cost.

Country Multiplex Pricing (CMP) Observation: The z14 is the first new generation of IBM Z hardware since the introduction of the CMP pricing mechanism.  When a client first implements a Multiplex, IBM Z server eligibility cannot be older than two generations (I.E. N-2) prior to the most recently available server (I.E. N).  Therefore the General Availability (GA) of z14 classifies the z114 and z196 servers as previously eligible CMP machines.  IBM will provide a 3 Month grace period for CMP transition activities for these N-3 servers, namely z114 and z196.  Quite simply, the first client CMP invoice must be submitted within 90 days of the z14 GA date, namely 13 September 2017, no later than 1 January 2018.

In conclusion, Pervasive Encryption is an omnipresent z14 function integrated into every data lifecycle stage, which could easily be classified as Persuasive Encryption, simplifying the sometimes arduous process of classifying and managing mission-critical data.  As cybersecurity threats become a clear and present danger, associated with impending and increasingly punitive compliance mandates such as GDPR, the realm of possibility exists to resolve this high profile corporate challenge once and for all.

Likewise, Container Pricing provides a much needed simple-to-use framework to drive MSU cost optimization for new workloads and could easily be classified as Contained Pricing.  The committed IBM Mainframe customer will upgrade their z13 server environment to z14, as part of their periodic technology refresh approach.  Arguably, those Mainframe customers who have been somewhat hesitant in upgrading from older technology Mainframe servers might just have a compelling reason to upgrade their environments to z14, addressing cybersecurity challenges and evolving processes to contain z/OS MLC costs.

z Systems Software & Applications Currency: MQ Continuous & as a Service Delivery Models

In a rapidly-moving technology environment where DevOps is driving innovation for the rapid delivery of applications, are there any innovations for the related z Systems infrastructure (E.g. z/OS, CICS, DB2, IMS, MQ, WebSphere AS) that can deliver faster software and indeed firmware updates?

In April 2016, IBM announced MQ V9.0, delivering new and enhanced capabilities facilitating a Continuous Delivery and support model.  The traditional Long Term Support release offers the ubiquitous collection of aggregated fix-packs, applied to the delivered MQ V9.0 function.  The new Continuous Delivery release delivers both fixes and new functional enhancements as a set of modification-level updates, facilitating more rapid access to functional enhancements.

From a terminology viewpoint, the Continuous Delivery (CD) support model introduces new function and enhancements, made available by incremental updates within the same version and release.  Additionally, there is a Long Term Support (LTS) release available for deployments that require traditional medium-long term security and defect fixes only.  Some might classify such LTS fixes as Service Pack or Level Set patching.  The Continuous Delivery (CD) support approach delivers regular updates with a short-term periodic frequency for customers wanting to exploit the latest features and capabilities of MQ, without waiting for the next long term support release cycle.  In terms of timeframe, although there is no fixed time period associated with a CD or LTS release, typically CD is every few months, while LTS releases are every two years or so.  In actual IBM announcement terms, the latest MQ release was V9.0.3 in May 2017, meaning four MQ V9.0.n release activities in a ~13 Month period, approximately quarterly…

The benefits of this CD support model are obvious; those organizations who consider themselves to be leading-edge or "amongst the first" can leverage from new function ASAP, with a modicum of confidence that the code has a good level of stability.  Those customers with a more cautious approach can continue their ~2 year software upgrade cycle, applying the LTS release.  As always with software maintenance, there has never been a perfect approach.  Inevitably there will be High Impact or PERvasive (HIPER) and PTF-in-Error (PE) PTF requirements, as software function stability is forever evolving.  Therefore, arguably those sites leveraging from the latest function have always been running a Continuous Delivery software maintenance model; they just didn't know when and how often!

Of all the major IBM z Systems subsystems, it is perhaps no surprise that MQ was the first to introduce this new software support model; for many reasons, primarily middleware and interoperability based, MQ needs a Continuous Delivery (CD) model.

At this stage, let’s remind ourselves of the important role that MQ plays in our IT infrastructures.  IBM MQ is a robust messaging middleware solution, simplifying and accelerating the integration of diverse applications and business data, typically distributed over multiple platforms and geographies.  MQ facilitates the assured, secure and reliable exchange of information between applications, systems, services, and files.  This exchange of information is achieved through the sending and receiving of message data through queues and topics, simplifying the creation and maintenance of business applications.  MQ delivers a universal messaging solution incorporating a broad set of offerings, satisfying enterprise requirements, in addition to providing 21st century connectivity for Mobile and the Internet of Things (IoT) devices.

Because of the centralized role that MQ plays, its pivotal role of interconnectivity might be hampered by the DevOps requirement of rapid application delivery, for both planned and unplanned business requirements.  Therefore even before the concept of MQ Continuous Delivery (CD) was announced in April 2016, there was already talk of MQ as a Service (MQaaS).

As per any major z Systems subsystem, traditionally IBM MQ was managed by a centralized messaging middleware team, collaborating with their Application, Database and Systems Management colleagues.  As per the DevOps methodology, this predictable and centralized approach does not lend itself to rapid and agile Application Development.  Quite simply, an environment management decentralization process is required to satisfy the ever-increasing speed and diversity of application design and delivery requests.  By definition, MQ seamlessly interfaces with so many technologies, including but not limited to, Amazon Web Services, Docker, Google Cloud Platform, IBM Bluemix, JBoss, JRE, Microsoft Azure, Oracle Fusion Middleware, OpenStack, Salesforce, Spark, Ubuntu, et al.

The notional concept of MQ as a Service (MQaaS), delivers a capability to implement self-service portals, allowing Application Developers and their interconnected Line of Business (LOB) personnel to drive changes to the messaging ecosystem.  These changes might range from the creation or deletion of a messaging queue to the provision of a highly available and scalable topology for a new business application.  The DevOps and Application Lifecycle Management (ALM) philosophy dictates that the traditional centralized messaging middleware team must evolve, reducing human activity, by automating their best practices.  Therefore MQaaS can increase the speed at which the infrastructure team can deliver new MQ infrastructure to their Application Development community, while safeguarding the associated business requirements.

MQ provides a range of control commands and MQ Script Commands (MQSC) to support the creation and management of MQ resources using scripts.  Programmatic resource access is achievable via MQ Programmable Command Format (PCF) messages, once access to a queue manager has been established.  Therefore MQ administrators can create workflows that drive these processes, delivering a self-service interface.  Automation frameworks, such as UrbanCode Deploy (UCD), Chef and Puppet, can be used to orchestrate administrative operations for MQ, to create and manage entire application or server environments.  Virtual machines, Docker containers, PureApplication System and the MQ Appliance itself can be used alongside automation frameworks to create a flexible and scalable ecosystem for delivering the MQaaS infrastructure.
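
As a minimal sketch of the self-service automation described above, the Python fragment below generates an MQSC command and pipes it to the standard runmqsc control command; the queue manager and queue names are hypothetical, and a production MQaaS workflow would add validation, authorization and audit logging.

```python
import subprocess

def create_local_queue(queue_manager: str, queue_name: str, max_depth: int = 5000) -> str:
    """Define a local queue by piping an MQSC command into runmqsc."""
    mqsc = f"DEFINE QLOCAL({queue_name}) MAXDEPTH({max_depth}) REPLACE\n"
    result = subprocess.run(
        ["runmqsc", queue_manager],      # standard MQ control command, reads MQSC from stdin
        input=mqsc,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# Hypothetical self-service request from an application team
print(create_local_queue("QM1", "APP1.REQUEST.QUEUE"))
```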

Integrating the MQ as a Service concept within your DevOps and Application Lifecycle Management process delivers the following benefits:

  • Development Agility: Devolving traditional MQ administration activities to Application and Line of Business personnel allows them to directly provision or update the associated messaging resources.  This optimizes the overall process, while DevOps processes facilitate the requisite IT organization communication.
  • Process Standardization: Enabling a self-service interface to Application and Line of Business personnel delivers a single entry point for messaging configuration changes. This common interface will leverage from consistent routines and workflows to deploy the necessary changes, enforcing standards and consistency.
  • Personnel Optimization: Self-service interfaces used by Application and Line of Business personnel allow them to focus on core application requirements, primarily messaging and Quality of Service (QoS) related.  In such an environment, the background process of performing the change is immaterial; timely change implementation is the most important factor.
  • Environment Interoperability: An intelligent and automated self-service interface allows for dynamic provisioning of systems and messaging resources for development and testing purposes.  This automation can simplify the promotion of changes throughout the testing lifecycle (E.g. Development, Test, Quality Assurance, Production, et al).  As and when required, such automation can provide capacity-on-demand type changes, dynamically scaling an application to satisfy ever-changing and unpredictable business requirements.

In conclusion, DevOps is an all-encompassing framework and one must draw one’s own conclusions as to whether software update frequency timescales will reduce for major subsystems such as CICS, DB2, IMS and even the underlying Operating System itself, namely z/OS.  Conversely, the one major z Systems subsystem with so many interoperability touch points, namely MQ, is the obvious choice for applying DevOps techniques to underlying subsystem software.  For MQ, the use of a Continuous Delivery (CD) software support model safeguards that the latest new function and bug fix capability is delivered in a timely manner for those organizations striving for an agile environment.  Similarly, the consideration of devolving traditional MQ systems administration activities, via intelligent, automated and self-service processes to key Application and Line of Business personnel makes sense, evolving a pseudo MQaaS capability.

System z DevOps & Application Lifecycle Management (ALM) Integration: Evolution or Revolution?

From an IT viewpoint, seemingly the 2010's decade will be dominated by the digital data explosion, primarily fuelled by Cloud, Mobile and Social Media data sources, while intelligent and timely if not real-time Analytics are required to process this vast and ever-growing data source.  Who could have imagined just a decade ago that the Mobile Phone, specifically the Smartphone, would be the de facto computing device, although some might say, only for a certain age demographic?  I'm not so sure; I encounter real-life and day-to-day evidence that a Smartphone or tablet can also empower the older generation to simplify their computer usage and access.  From a business perspective, Smartphones have allowed geographically dispersed citizens to gain access to Banking facilities for the first time; Cloud allows countless opportunities for data sharing and number crunching for collaborative scientific, health, education and any other activities a human being might conceive.  The realm of opportunity exists…

When thinking of the bigger picture, we somehow have to find a workable and seamless balance that will integrate the dawn of business computing from the 1960's to these rapidly moving 21st Century requirements.  When considering which came first, the data or the application, I always think the answer is really simple; the data came first, but I have been wrong before!  What is without doubt is that the initial requirement for a business application was to automate data processing, and the associated medium-term waterfall (E.g. n-nn Months) application development process is now outdated.  As of 2017, today's application needs to leverage from this vast and rich digital data source, to identify and exploit new business opportunities; increasingly these are unplanned and therefore rapid application delivery is required.  For example, previously I wrote about this subject matter in the zAPI: System z Deployment Into The API Economy blog entry.

From an IT perspective, one of the greatest achievements in the 21st Century is collaboration, whether technology based, leveraging from a truly interconnected (E.g. Internet Protocol/IP) heterogeneous computing environment, or personnel based, with IT teams collaborating in a more open and timely manner, primarily via DevOps.  This might be a better chicken and egg analogy; which came first, the data explosion or an IT ecosystem that allowed such a digital data explosion?

There are a plethora of modern-day application development tools that separate the underlying target deployment server from the actual application developer.  Put another way, today’s application developer ideally works from a GUI display via an Eclipse-based Integrated Development Environment (IDE) interface, creating code using rapid and agile development techniques.  From an IBM System z perspective, these platforms include Compuware Topaz Workbench, IBM Developer for z Systems (IDz AKA RDz) and Micro Focus Enterprise Developer, naming but a few.  Therefore when considering the DevOps framework, these excellent Eclipse-based IDE products provide solutions for the Dev part of the equation; but what about the Ops part?

In a collaborative world, where we all work together, from an Application Lifecycle Management (ALM) perspective, IT Operations are a key part of application delivery and management.  Put simply, once code has been created, it needs to be packaged (E.g. Compile, Link-Edit, et al), tested (E.g. Unit, Integration, System, Acceptance, Regression, et al) and implemented in a Production environment.  We now must consider the very important discipline of Source Code Management (SCM), where from a System z Mainframe perspective, common solutions are CA Endevor SCM, Compuware ISPW, IBM SCLM, Micro Focus ChangeMan ZMF, et al.  Once again, from a DevOps perspective, we somehow have to find a workable and seamless balance that will integrate the dawn of business computing from the 1960’s to these rapidly moving 21st Century requirements.  As previously discussed the Dev part of the DevOps framework is well-covered and straightforward, but perhaps the Ops part requires some more considered thought…

Recently Compuware acquired ISPW (January 2016) to supplement their Topaz Workbench and Micro Focus acquired ChangeMan ZMF (May 2016) to complement their Micro Focus Enterprise Developer solution.  IBM IDz offers out-of-the-box integration for the IBM Rational Team Concert, CA Endevor SCM and IBM SCLM tools.  Clearly there is a significant difference between Source Code Management (SCM) for Distributed Systems when compared with the System z Mainframe, but today's 21st century business application will inevitably involve interconnected platforms and so a consistent and seamless SCM process is required for accurate and timely application delivery.  In all likelihood a System z Mainframe user has been using their SCM solution for several decades, evolving processes around this solution, which was never designed for Distributed Systems SCM.  Hence the major System z Application Development ISV's have acquired SCM products to supplement their core capability, but is it really that simple?  The simple answer is no!

Traditionally, for Application Development activities we deployed the Software Development Life Cycle (SDLC), limited to software development phases, including requirements, design, coding, testing, configuration, project management and change management.  Modern software development processes require real-time collaboration, access to centralized data repository, cross-tool and cross-project visibility, proactive project monitoring and reporting, to rapidly develop and deliver quality software.  This requirement is typically classified as Application Lifecycle Management (ALM).

The first iteration of ALM, namely ALM 1.0, was wholly unsuccessful.  Application Development teams were encouraged to consider the value of point solutions for task management, planning, testing, requirements, release management and other functions.  Therefore ALM 1.0 became just a set of tools, where the all too common question for the Application Development team was "what other tool can we use"!

ALM 2.0 or ALM 2.0+ can be considered as Integrated Application Lifecycle Management or Integrated ALM, where all the tools and their users are synchronized with each other throughout the application development stages.  This integration ensures that every team member knows the Who, What, When and Why of any changes made during the development process, eradicating arduous, repetitive, manual and error prone activities.  The most important lesson for the DevOps team in a customer environment is to concentrate on the human perspective.  They should ask "how do we want our teams to work together and collaborate" as opposed to asking an Application Development ISV team, "what ALM tools do you have".  Inevitably the focus will be ISV based, as opposed to customer based.  As the recent Compuware and Micro Focus SCM acquisition history demonstrates, these tools, by definition, were never fully integrated from their original inception…

If the customer DevOps teams collaborate and formulate how they want to work together, an ALM evolution can take place in a timely manner, maintaining investment in previous technologies, as and if required.  Conversely, a revolutionary approach is the most likely outcome for the System z Mainframe user, if looking to the ISV for a "turn-key" ALM solution.  By definition, an end-to-end and turn-key ALM solution from one ISV is not possible and in fact, not desirable!  Put another way, as a System z user, do you really want to write off several decades' investment in an SCM solution for another competitive solution, which will still require many other tools to provide the Integrated ALM capability you require?  As always, balance and compromise is the way forward…

If the ubiquitous System z Application Development ISV were to develop their first software product today, it would inevitably be a DevOps and ALM 2.0+ compatible product, allowing for full integration with all other Application Development tools, whether System z Mainframe or Distributed Systems orientated.  Of course that is not the reality.  It seems somewhat disingenuous that the System z Application Development ISV would ask a potential customer to write off their several decades' investment in an SCM technology, when said ISV has just acquired such a technology!  Once again, this is why the customer based Application Development teams must decide how they want to collaborate and what ALM and DevOps tools they want to use.

Seemingly a pragmatic solution is required, hence the ALM 2.0+ initiative.  If an ISV could develop an all-encompassing DevOps and ALM 2.0+ end-to-end Application Development solution for all IT platforms, they would probably become one of the most popular and biggest ISV’s in a short time period.  However, this still overlooks the existing tools that customer IT organizations have used for many years.  Hence, the pragmatic way forward is to build an open DevOps and ALM 2.0+ solution that will integrate with all other Application Development lifecycle tools, whether market place available, or not!  HPE Application Lifecycle Management (ALM) and Quality Centre (QC) is one such approach for Distributed Systems, but what about the System z Mainframe?

IKAN ALM is an ALM 2.0+ and DevOps architected solution that is vendor and platform agnostic.  Put another way, IKAN ALM can operate in single or multiple-vendor mode.  In all likelihood, single vendor mode is unlikely, as there are many efficient Application Development tools in the marketplace.  However, the single most compelling feature of IKAN ALM is its open framework and interoperability with other ALM technologies.  As previously stated, we might consider source code development as the Dev side of the DevOps framework.  IKAN ALM will interface with these technologies, while its core functionality concentrates on the Ops side of the DevOps framework.  Therefore from an Application Lifecycle Management (ALM) viewpoint, the IKAN ALM solution starts where versioning systems end, with an objective of optimizing the entire software engineering process.

IKAN ALM offers a uniquely integrated web-based Application Lifecycle Management platform for both Agile and traditional software development teams.  It combines Continuous Integration and Lifecycle Management, offering a single point of control, delivering support for build and deploy processes, approval processes, release management and software lifecycles.  From a pragmatic and common-sense viewpoint, typically organizations want to continue working with their preferred tools in their preferred environment.  Being ALM 2.0+ compliant, IKAN ALM fully integrates with any versioning tool and any issue tracking tool, providing ALM reports across repositories.  Therefore IKAN ALM offers an evolutionary approach, allowing an organization to leverage from timely ALM benefits, without risk and without the need for displacing any existing technologies.  Over time, should the organization wish to displace older legacy ALM software products, they could do so, leveraging from the stand-alone or multiple vendor flexibility of the IKAN ALM solution.

IKAN ALM incorporates ready to use solutions and processes for multiple environments.  These solutions include ALM 2.0+ compliant processes and the necessary scripts to automate the integration with a specific environment, including but not limited to CA Endevor (SCM), CollabNet, HPE ALM/Quality Centre (QC), Oracle Warehouse Builder (OWB), SAP, et al.

The IKAN ALM central server is an open framework web application responsible for User Authentication and Authorization, User Interface Processing, Distributed Version Repository Management and Scheduling Code Builds.  The IKAN ALM agents perform the application build and install functions.

The data repository is an open central database where all administrative data and the audit trail history are stored.  IKAN ALM communicates with the repository using standard JDBC interfaces.  The required JDBC drivers are installed along with the product.  The repository can reside in any RDBMS system, including IBM DB2/UDB, Informix, Microsoft SQL Server, MySQL, Oracle, et al.

Source code is always stored in a Version Control Repository.  IKAN ALM integrates with all the typical versioning systems including Apache Subversion, CVS, Git, Microsoft Visual SourceSafe (VSS), IBM Rational ClearCase (UCM & LT), Serena PVCS Version Manager, et al.  The choice of IDE often drives the choice of the Version Control System (VCS), where organizations can have more than one operational VCS.  IKAN ALM is a solution that provides a unique process control over all versioning systems present in the organization.  IKAN ALM stores each build result within its central server filesystem, labelling the source accordingly in the associated versioning system, guaranteeing a correct source-build relationship.
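
As a simplified illustration of the source-build labelling concept described above, and not IKAN ALM's actual mechanism, the sketch below tags the versioning system (Git in this hypothetical case) with a build identifier, so any build result can later be traced back to the exact source level; the repository path and tag format are assumptions.

```python
import subprocess

def label_source_for_build(repo_path: str, build_number: int) -> str:
    """Tag the current source level so the build result can always be traced back to it."""
    tag = f"build-{build_number}"
    subprocess.run(["git", "-C", repo_path, "tag", tag], check=True)
    commit = subprocess.run(
        ["git", "-C", repo_path, "rev-parse", tag],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return commit  # stored alongside the build result to guarantee the source-build relationship

# Hypothetical usage for build 1042 of a local working copy
# print(label_source_for_build("/var/alm/workspace/app1", 1042))
```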

IKAN ALM safeguards Authentication & Authorization by interacting with the organization's security deployment (E.g. Active Directory, LDAP, Kerberos, et al) via the Java Authentication and Authorization Service (JAAS) interface.

IKAN ALM audits any changes (E.g. Who, What, Why, When, Approver, et al), orchestrating the various components and phases of Application Lifecycle Management, delivering an automated workflow to drive a continuous flow of activity throughout the development lifecycle, efficiently coordinating and optimizing application development changes.

In an environment with ever increasing mandatory regulatory compliance requirements, IKAN ALM simplifies the processes for delivering such compliance.  As per the IKAN ALM Build, Deploy, Lifecycle and Approval Management framework, compliance for industry standard regulations (E.g. CMM, ITIL, Sarbanes-Oxley, Six Sigma, et al) is delivered via a reliable, repeatable and auditable process throughout the development life cycle.

Clearly any IT organization can benefit from a fully integrated ALM 2.0+ solution, by enforcing and safeguarding that the ALM process is repeatable, reliable and documented.  Regardless of the development team headcount size, ALM releases key people from repetitive and less interesting tasks, allowing them to focus on delivering today's Analytics based, Cloud, Mobile and Social applications.  A fully integrated ALM 2.0+ solution such as IKAN ALM allows for simplified legacy environment modernization, while simultaneously allowing for experimentation and improvement of all environments alike, both legacy and new.

In conclusion, savvy organizations will safeguard that their Application Development and Operations teams collaborate as per the DevOps framework and decide how they want to implement processes for their environment and more importantly, their business.  This focus should avoid any notion of asking the ubiquitous Application Development ISV, "which tools should we use"!  Similarly, recognizing the integration foundation of ALM 2.0+ will eliminate any notion to displace existing technologies and processes, unless absolutely required.  The need for agile, rapid and quality source code development and delivery is obvious, as is the related solution, which is inevitably pragmatic, evolutionary and multiple vendor tool based.

Optimize Your System z ROI with z Operational Insights (zOI)

Hopefully all System z users are aware of the Monthly Licence Charge (MLC) pricing mechanisms, where a recurring charge applies each month.  This charge includes product usage rights and IBM product support.  If only it was that simple!  We then encounter the “Alphabet Soup” of acronyms, related to the various and arguably too numerous MLC pricing mechanism options.  Some might say that 13 is an unlucky number and in this case, a System z pricing specialist would need to know and understand each of the 13 pricing mechanisms in depth, safeguarding the lowest software pricing for their organization!  Perhaps we could apply the unlucky word to such a resource.  In alphabetical order, the 13 MLC pricing options are AWLC, AEWLC, CMLC, EWLC, MWLC, MzNALC, PSLC, SALC, S/390 Usage Pricing, ULC, WLC, zELC and zNALC!  These mechanisms are commercial considerations, but what about the technical perspective?

Of course, System z Mainframe CPU resource usage is measured in MSU metrics, where the usage of Sub-Capacity allows System z Mainframe users to submit SCRT reports, incorporating Monthly License Charges (MLC) and IPLA software maintenance, namely Subscription and Support (S&S).  We then must consider the Rolling 4-Hour Average (R4HA) and how best to optimize MSU accordingly.  At this juncture, we then need to consider how we measure the R4HA itself, in terms of performance tuning, so we can minimize the R4HA MSU usage, to optimize cost, without impacting Production if not overall system performance.
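
For clarity, the short Python sketch below computes a Rolling 4-Hour Average from illustrative hourly MSU consumption figures; real SCRT processing works from SMF/RMF instrumentation data at finer granularity, so the interval size and values are assumptions for illustration only.

```python
# Illustrative hourly MSU consumption for one LPAR over a 12-hour window
hourly_msu = [310, 295, 340, 420, 510, 560, 545, 500, 430, 380, 350, 320]

WINDOW = 4  # hours in the rolling average

r4ha = []
for i in range(len(hourly_msu)):
    window = hourly_msu[max(0, i - WINDOW + 1): i + 1]  # current hour plus up to 3 preceding hours
    r4ha.append(sum(window) / len(window))

print(f"Peak R4HA: {max(r4ha):.0f} MSU")  # the figure sub-capacity MLC billing is based upon
```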

Finally, we then have to consider that WLC has a ~17-year longevity, having been announced in October 2000 and in that time IBM have also introduced hardware features to assist in MSU optimization.  These hardware features include zIIP, zAAP, IFL, while there are other influencing factors, such as HyperDispatch, WLM, Relative Nest Intensity (RNI), naming but a few!  The Alphabet Soup continues…

In summary, since the introduction of WLC in Q4 2000, the challenge for the System z user is significant.  They must collect the requisite instrumentation data, perform predictive modelling and fully comprehend the impact of the current 13 MLC pricing mechanisms and their interaction with the ever-evolving System z CPU chip!  In the absence of such a simple to use reporting capability from IBM, there are a plethora of 3rd party ISV solutions, which generally are overly complex and require numerous products, more often than not, from several ISV’s.  These software solutions process the instrumentation data, generating the requisite metrics that allows an informed decision making process.

Bottom Line: This is way too complex; are there any Green Shoots of an alternative option?  Are there any easy-to-use data analytics based options for reducing MSU usage and optimizing CPU resources, which can then be incorporated into any WLC/MLC pricing considerations?

In February 2016 IBM launched their z Operational Insights (zOI) offering, as a new open beta cloud-based service that analyses your System z monitoring data.  The zOI objective is to simplify the identification of System z inefficiencies, while identifying savings options with associated implementation recommendations. At this juncture, zOI still has a free edition available, but as of September 2016, it also has a full paid version with additional functionality.

Currently zOI is limited to the CICS subsystem, incorporating the following functions:

  • CICS Abend Analysis Report: Highlights the top 10 types of abend and the top 10 most abended transactions for your CICS workload from a frequency viewpoint (see the sketch after this list).  The resulting output classifies which CICS transactions might abend and as a consequence, waste processor time.  Of course, the System z Mainframe user will have to fix the underlying reason for the CICS abend!
  • CICS Java Offload Report: Highlights any transaction processing workload eligible for IBM z Systems Integrated Information Processor (zIIP) offload.  The resulting output delivers three categories for consideration.  #1: % of existing workload that is eligible for offload, but ran on a General Purpose CP.  #2: % of workload being offloaded to zIIP.  #3: % of workload that cannot be transferred to a zIIP.
  • CICS Threadsafe Report: Highlights threadsafe eligible CICS transactions, calculating the switch count from the CICS Quasi Reentrant Task Control Block (QR TCB) per transaction and associated CPU cost. The resulting output identifies potential CPU savings by making programs threadsafe, with the associated CICS subsystem changes.
  • CICS Region CPU Constraint: Highlights CPU constrained regions. CPU constrained CICS regions have reduced performance, lower throughput and slower transaction response, impacting business performance (I.E. SLA, KPI).  From a high-level viewpoint, the resulting output classifies CICS Region performance to identify whether they’re LPAR or QR constrained, while suggesting possible remedial actions.
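
A minimal sketch of the kind of frequency classification the CICS Abend Analysis Report performs, assuming a simple list of (transaction, abend code) records extracted from CICS monitoring data; the record layout and values are illustrative assumptions only.

```python
from collections import Counter

# Illustrative (transaction, abend_code) records extracted from CICS monitoring data
abend_records = [
    ("TRN1", "ASRA"), ("TRN2", "AICA"), ("TRN1", "ASRA"),
    ("TRN3", "AEY9"), ("TRN1", "ASRA"), ("TRN2", "ASRA"),
]

top_abend_codes = Counter(code for _, code in abend_records).most_common(10)
top_abending_transactions = Counter(tran for tran, _ in abend_records).most_common(10)

print("Top abend codes:", top_abend_codes)
print("Top abending transactions:", top_abending_transactions)
```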

Clearly the potential of zOI is encouraging, being an easy-to-use solution that analyses instrumentation data, classifies the best options from a quick win basis, while providing recommendations for implementation.  Having been a recent user of this new technology myself, I would encourage each and every System z Mainframe user to try this no risk IBM z Operational Insights (zOI) software offering.

The evolution for all System z performance analysis software solutions is to build on the comprehensive analysis solutions that have evolved in the last ~20+ years, while incorporating intelligent analytics, to classify data in terms of “Biggest Impact”, identifying “Potential Savings”, evolving MIPS measurement, to BIPS (Biggest Impact Potential Savings) improvements!

IBM have also introduced a framework of IT Operations Analytics Solutions for z Systems.  This suite of interconnected products includes zOI, IBM Operations Analytics for z Systems, IBM Common Data Provider for z/OS and IBM Advanced Workload Analysis Reporter (IBM zAware).  Of course, if we lived in a perfect world, without a ~20 year MLC and WLC longevity, this might be the foundation for all of our System z CPU resource usage analysis.  Clearly this is not the case for the majority of System z Mainframe customers, but zOI does offer something different, with zero impact, both from a system impact and existing software interoperability viewpoint.

Bottom Line: Optimize Your System z ROI via zOI, Evolving From MIPS Measurement to BIPS Improvements!

System z: I/O Interoperability Evolution – From Bus & Tag to FICON

Since the introduction of the S/360 Mainframe in 1964 there has been a gradual evolution of I/O connectivity that has taken us from copper Bus & Tag to fibre ESCON and now FICON channels.  Obviously during this ~50 year period there have been exponentially more releases of Mainframe servers and indeed Operating Systems.  In this timeframe there have been 2 significant I/O technology milestones.  Firstly, in 1990, ESCON was part of the significant S/390 announcement (MVS/ESA), where migration to ESCON was a great benefit, if only for replacing the heavy and big copper Bus & Tag channels.  Secondly, even though FICON was released in the late 1990's, in 2009 IBM announced that the z10 would be the last Mainframe server to support greater than 240 native ESCON channels.  Similarly IBM declared that the last zEnterprise servers to support ESCON channels are the z196 and z114.  Each of these major I/O evolutions required a migration philosophy and not every I/O device would be upgraded to support either native ESCON or FICON channels.  How did customers achieve these mandatory I/O upgrades to safeguard IBM Mainframe Server and associated Operating System longevity?

In 2009 it was estimated ~20% of all Mainframe customers were using ESCON only I/O infrastructures, while only ~20% of all Mainframe customers were deploying a FICON only infrastructure.  Similarly ~33% of z9 and z10 systems were shipped with ESCON CVC (Block Multiplexor) and CBY (Byte Multiplexor) channels defined, while ~75% of all Mainframe Servers had native ESCON (CNC) capability.  From a dispassionate viewpoint, clearly the migration from ESCON to FICON was going to be a significant challenge, while even in this timeframe, there was still use of Bus & Tag channels…

One of the major strengths of the IBM Mainframe ecosystem is the partner network, primarily software (ISV) based, but with some significant hardware (IHV) providers.  From a channel switch viewpoint, we will all be familiar with Brocade, Cisco and McData, where Brocade acquired McData in 2006.  However, from a channel protocol conversion viewpoint, IBM worked with Optica Technologies to deliver a solution that would allow ESCON and Bus & Tag connectivity to the FICON only zBC12/zEC12 and future Mainframe servers (I.E. z13, z13s).  Somewhat analogous to the smartphone, where the user doesn't necessarily know that an ARM processor might be delivering CPU power to their phone, sometimes even seasoned Mainframe professionals might inadvertently overlook that the Optica Technologies Prizm solution has been or indeed is still deployed in their System z Data Centre…

When IBM work with a partner from an I/O connectivity viewpoint, clearly IBM have to safeguard that said connectivity has the highest interoperability capability with bulletproof data exchange attributes.  Sometimes we might take this for granted with the ubiquitous disk and tape subsystem suppliers (I.E. EMC, HDS, IBM, Oracle), but for FICON conversion support, Optica Technologies was a collaborative partner for IBM.  Ultimately the IBM Hardware Systems Assurance labs deploy their proprietary System Assurance Kernel (SAK) processes to safeguard I/O subsystem interoperability for their System z Mainframe servers.  Asking that rhetorical question; when was the last time you asked your IHV for sight of their System Assurance Kernel (SAK) exit report from their collaboration with IBM Hardware Systems Assurance labs for the I/O subsystem you're considering or deploying?  In conclusion, the SAK compliant, elegant, simple and competitively priced Prizm solution allowed the migration of tens if not hundreds of thousands of ESCON connections in thousands of Mainframe data centres globally!

With such a rich heritage of providing a valuable solution to the global IBM Mainframe install base, whether the smallest or largest, what would be next for Optica Technologies?  Obviously leveraging from their expertise in FICON channel support would be a good way forward.  With the recent acquisition of Bus-Tech by EMC and the eradication of the flexible MDL tapeless virtual tape offering, Optica Technologies are ideally placed to be that small, passionate and eminently qualified IHV to deliver a turnkey virtual tape solution for the smaller and indeed larger System z user.  The Optica Technologies zVT family leverages from the robust and heritage class Prizm technology, delivering an innovative family of virtual tape solutions.  The entry-level "Virtual Tape In A Box" zVT 3000i provides 2 FICON channel interfaces and 4 TB of uncompressed internal RAID-5 disk space, seamlessly interfacing with all System z supported tape devices (I.E. 3490, 3590) and processes.  A single enterprise class zVT 5000-iNAS node delivers 2 FICON channel interfaces and NFS storage capacity from 8 TB to 1 PB in a single frame, with standard deduplication, compression, replication and encryption features.  The zVT 5000-iNAS is available with multi-node configuration support for additional scalability and resiliency.  For those customers wishing to deploy their own choice of NFS or FC storage subsystem, the zVT 5000-FLEX allows such connectivity accordingly.

In conclusion, sometimes it’s all too easy to take some solutions for granted, when they actually delivered a tangible and arguably priceless solution in the evolution of your organizations System z Mainframe server journey from ESCON, if not Bus & Tag to FICON.  Perhaps the Prizm solution is one of these unsung products?  Therefore, the next time you’re reviewing the virtual tape market place, why wouldn’t you seriously consider Optica Technologies, given their rich heritage in FICON channel interoperability?  Given that IBM chose Optica Technologies as their strategic partner for ESCON to FICON migration, seemingly even IBM might have thought “nobody gets fired for choosing…”!

zHyperLink: Just Another System z DASD I/O Function Enhancement?

Over the last several decades the IBM Mainframe platform has delivered several new technologies that have dramatically improved disk (DASD) I/O performance.  Specifically, the deployment of ESCON as the introduction of fibre optic channels, followed by EMIF for channel sharing and reduced I/O protocol, superseded by FICON and most recently zHPF.  All of these technologies have allowed for ever larger amounts of data to be processed by the System z server and the adoption of Geographically Dispersed Parallel Sysplex (GDPS) implementations for business continuity reasons.  Ultimately mission critical data and decisions are facilitated by applications, and sub-second response times for these transactions are expected.  Some might say that we're always running to stand still from a performance perspective when implementing the latest System z technologies?

In reality, today’s 21st Century mission-critical application is not just capturing and storing customer data, it’s doing so much more, attempting to make informed business decisions for a richer customer experience!  Historically a customer transaction would be on a one-to-one basis (E.g. ask for a balance query), whereas today, said transaction might generate more data for the customer, potentially offering them a new or enhanced product.  In theory, this informed and intelligent transaction processing delivers a richer experience for the customer and potentially new revenue opportunities for the business.

For several years IBM have integrated the Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative into their product offerings, recognising that a business transaction can originate from the cloud or a mobile device, potentially via a Social Media platform, require rich processing via real-time analytics, while requiring the highest levels of security.  Of course, one must draw one’s own conclusions, but maintaining sub-second or ultra-fast transaction response times with this level of CAMSS complexity requires significant performance enhancements.  To deliver such ultra-fast response times requires the DASD I/O subsystem to maintain the highest levels of performance, aligned with the latest System z server platform…

In January 2017 IBM issued a Statement of Direction (SoD) and associated FAQ for their zHyperLink technology.  zHyperLink is a new short distance mainframe link technology, designed for up to 10 times lower latency than zHPF.  zHyperLink is intended to accelerate DB2 for z/OS transaction processing and improve active log throughput.  IBM intends to deliver field upgradable support for zHyperLink on the existing IBM DS8880 storage subsystem.  zHyperLink technology is a new mainframe attach link, the result of collaboration between DB2 for z/OS, the z/OS Operating System, IBM System z servers and the DS8880 storage subsystem, delivering extreme low latency I/O access for DB2 for z/OS applications.  zHyperLink technology is intended to complement FICON technology, accelerating those I/O requests that are typically used for transaction processing.  These links are point-to-point connections between the System z CEC and the storage system, limited to 150 meter distances, and they do not count towards the z Architecture limit of 8 channel paths.

From a DB2 I/O service time perspective, at short distances, a native FICON or zHPF originated I/O typically requires 300 Microseconds (μs) for a simple I/O operation.  By comparison, the coupling facility for z Systems can typically read or write 4 KB of data in under 8 Microseconds.  zHyperLink technology will provide a new short distance link from the mainframe to storage, reading and writing data up to 10 times faster than FICON or zHPF, reducing DB2 I/O service times to an anticipated 20-30 Microseconds.
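To put these service times into a transaction level context, the sketch below is purely illustrative arithmetic, not an IBM benchmark: it assumes a hypothetical DB2 transaction issuing 20 synchronous random reads, using the ~300 μs zHPF figure and a 25 μs midpoint of the anticipated 20-30 μs zHyperLink range quoted above.

```python
# Illustrative arithmetic only: estimated synchronous read wait per DB2
# transaction under zHPF versus zHyperLink service times.  The 20 reads per
# transaction figure is a hypothetical assumption; the service times are the
# ~300 microsecond (zHPF) and anticipated 20-30 microsecond (zHyperLink)
# values referenced in the zHyperLink Statement of Direction material.

READS_PER_TRANSACTION = 20          # hypothetical workload profile
ZHPF_SERVICE_TIME_US = 300          # typical simple I/O via FICON/zHPF
ZHYPERLINK_SERVICE_TIME_US = 25     # midpoint of the anticipated 20-30 us range

def read_wait_us(reads: int, service_time_us: float) -> float:
    """Cumulative synchronous read wait for one transaction, in microseconds."""
    return reads * service_time_us

zhpf_wait = read_wait_us(READS_PER_TRANSACTION, ZHPF_SERVICE_TIME_US)
zhl_wait = read_wait_us(READS_PER_TRANSACTION, ZHYPERLINK_SERVICE_TIME_US)

print(f"zHPF read wait per transaction      : {zhpf_wait / 1000:.1f} ms")
print(f"zHyperLink read wait per transaction: {zhl_wait / 1000:.1f} ms")
print(f"Estimated reduction                 : {zhpf_wait / zhl_wait:.0f}x")
```

Under these assumptions, roughly 6 ms of synchronous read wait per transaction reduces to ~0.5 ms, illustrating why such a link technology matters for sub-second transaction response time objectives.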

In conclusion, with a promise of 10 times faster processing, as per its fibre optic channel technology predecessors, particularly EMIF and zHPF, zHyperLink is a revolutionary DASD I/O function and not just another DASD I/O subsystem function enhancement.  At this stage, the deployment of zHyperLink functionality is restricted to DB2 and the IBM DS8880 storage subsystem, while we eagerly await compatibility support from EMC and HDS accordingly.  Moreover, as per the evolution of zHPF, we hope for the inclusion of other I/O workloads to benefit from this paradigm changing I/O response time technology.

Finally, as always, the realm of possibility always exists for each and every System z DASD I/O subsystem to be monitored and tuned on a proactive and 24*7*365 basis.  Although all of this DASD I/O performance data has always been and still is captured by RMF (CMF), intelligent processing of this data requires an ever evolving Performance Management process and arguably an intelligent software solution (E.g. IntelliMagic Direction Disk Magic or Technical Storage Easy Analyze Disk Mainframe) to provide meaningful information and business decisions from ever increasing amounts of RMF (CMF) data.  In November 2016 I delivered the DASD I/O Performance Management Is Easy? session at the UK GSE Annual 2016 meeting accordingly…

System z Batch Optimization: Another Pipes Option?

Over the last 20 years or so I have encountered many sites looking for solutions to streamline their batch processing, only to find that sometimes they are their own worst enemy, because their cautious Change Management approach means they will not change or even recompile COBOL application source, unless absolutely forced to do so.  Sometimes VSAM file tuning is the answer, sometimes identifying the batch critical path, and on occasion, the answer is finding that key file or database processed on several or more occasions, which might benefit from parallelism.

BatchPipes was first introduced with MVS/ESA, allowing data (E.g. BSAM, QSAM) to be piped between several jobs, enabling concurrent job processing and reducing the combined elapsed time of the associated job stream.  BatchPipes maintains a queue of records that are passed between a writer and reader.  The writer adds records to the back of a pipe queue and the reader processes them from the front.  This record level processing approach avoids any potential data set serialization issues when attempting to concurrently write and read records from the same physical data set.
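As a purely conceptual sketch of this writer and reader queue interaction, and certainly not the BatchPipes implementation itself, the following Python fragment shows a writer appending records to the back of a bounded in-memory queue while a reader concurrently consumes them from the front; the record source and counts are hypothetical.

```python
# Conceptual sketch only: a writer and reader exchanging records through a
# bounded in-memory queue, mimicking the BatchPipes model where records flow
# through a pipe rather than an intermediate physical data set.
import queue
import threading

pipe = queue.Queue(maxsize=1000)   # bounded queue, back-pressures the writer
SENTINEL = None                    # marks end-of-data, akin to the writer closing

def writer():
    for i in range(10_000):                    # hypothetical record source
        pipe.put(f"RECORD{i:08d}")             # add record to the back of the pipe
    pipe.put(SENTINEL)                         # signal end of the data stream

def reader():
    processed = 0
    while True:
        record = pipe.get()                    # take record from the front
        if record is SENTINEL:
            break
        processed += 1                         # stand-in for real record processing
    print(f"Reader processed {processed} records")

threads = [threading.Thread(target=writer), threading.Thread(target=reader)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The point of the sketch is simply that the writer and reader overlap in time, which is where the elapsed time saving comes from, and why both jobs must be active together.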

The IBM BatchPipes feature has evolved somewhat and BMC have offered similar functionality with their initial Data Accelerator and Batch Accelerator offering, subsequently superseded by MainView Batch Optimizer Job Optimizer Pipes.  It seems patently obvious that to derive the parallelism benefit offered by BatchPipes, the reader and writer jobs need to be processed together.  For many, such a consideration has been an issue that has eliminated any notion of BatchPipes implementation.  Other considerations include a job failure in the BatchPipes process, where restart and recovery might include several jobs, as opposed to one.  Therefore widespread usage of BatchPipes has been seemingly limited.

The first step for any BatchPipes consideration is identifying whether there is any benefit.  IBM provide a BatchPipes SMF analysis tool to determine the estimated time savings and benefits that can be achieved with BatchPipes.  This tool reads SMF record types 14, 15 and 30 (Subtypes 1, 4 and 5) to analyse data set read and write activity, reconciling it with the associated processing jobs.  As an observation, sometimes the same data source might appear under different data set names, both permanent and temporary, while consuming significant I/O and CPU resource for processing.  Such data sources can easily be reconciled, as the record and associated I/O counts are the same for whole data set processing operations.  The analysis tool will identify the heavy I/O jobs, providing a great starting point for any analysis activities.
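By way of a hedged illustration of the reconciliation logic described above, and emphatically not the IBM analysis tool itself, the sketch below pairs writer and reader jobs whose data set activity shows matching record counts.  It assumes the relevant SMF type 14/15 activity has already been extracted into simple tuples; all job names, data set names and counts are hypothetical.

```python
# Hedged illustration of writer/reader reconciliation, not the IBM BatchPipes
# SMF analysis tool.  Assumes data set activity from SMF type 14 (input) and
# type 15 (output) records has already been extracted into simple
# (job, data_set, record_count) tuples; all names below are hypothetical.
from collections import defaultdict

# (job name, data set name, records transferred) - hypothetical extract
writes = [("JOBA", "PROD.DAILY.EXTRACT", 5_000_000),
          ("JOBC", "PROD.WEEKLY.SUMMARY", 120_000)]
reads = [("JOBB", "PROD.DAILY.EXTRACT", 5_000_000),
         ("JOBD", "PROD.MONTHLY.HISTORY", 750_000)]

writers = defaultdict(list)
for job, dsn, count in writes:
    writers[(dsn, count)].append(job)

# A data set written in full by one job and read in full by another, with
# matching record counts, is a candidate for piping the two jobs together.
for job, dsn, count in reads:
    for writer_job in writers.get((dsn, count), []):
        print(f"Pipe candidate: {writer_job} -> {job} via {dsn} ({count} records)")
```

In practice the heavy I/O jobs flagged by the IBM tool would be the ones worth inspecting first, with the matching record counts confirming genuine writer to reader pairs.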

UNIX users will be very familiar with the concept of pipes, where a UNIX pipeline is a sequence of processes chained together by their standard streams, the output of each process (stdout) feeding directly as input (stdin) to the next one.  Wouldn’t it be good if there was a hybrid approach to BatchPipes, using a combination of standard z/OS and UNIX System Services (USS) functionality?

With z/OS 2.2, JES2 introduced new functions to facilitate the scheduling of dependent batch jobs.  These functions comprise Job Execution Control (JEC) and can be utilized by making use of the new JOBGROUP and related Job Control Language (JCL) statements.  The primary goal of JEC is to provide an easy-to-use control mechanism, allowing complex batch jobs to be processed in inter-related constituent pieces.  Presuming that these constituent pieces can be run in parallel, improved throughput can be achieved by exploiting the concurrency functions provided by JEC.

UNIX named pipes can be used to pass data between simultaneously executing jobs, where the UNIX pipe can either be temporary or permanent.  One or more processes can connect to a UNIX named pipe, write to it, and read from it, as and when required.  Unlike most types of z/OS UNIX files, data written to a named pipe is always appended to existing data rather than replacing existing data; for example, for the z/OS FTP server, the STOR command is equivalent to the APPE command when UNIXFILETYPE=FIFO is configured.  This UNIX pipe facility, managed by the JES2 JEC functions, can be leveraged to provide benefit for multiple step job processing and concurrent job processing, with the overall benefit of a reduction in overall batch stream elapsed time.
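To make the named pipe concept a little more concrete, here is a minimal sketch, assuming a POSIX environment such as z/OS UNIX System Services with Python available; the pipe path and record contents are hypothetical, and in a real batch stream the writer and reader would be separate jobs scheduled under JES2 JEC rather than two processes forked from one script.

```python
# Minimal named pipe (FIFO) sketch for a POSIX environment such as z/OS USS.
# The pipe path and record contents are hypothetical; in a real batch stream
# the writer and reader would be separate jobs scheduled under JES2 JEC.
import os

PIPE_PATH = "/tmp/batch.pipe"      # hypothetical named pipe location

if not os.path.exists(PIPE_PATH):
    os.mkfifo(PIPE_PATH)           # create the named pipe (FIFO)

if os.fork() == 0:
    # Child process: the "writer job", appending records to the pipe.
    with open(PIPE_PATH, "w") as pipe:
        for i in range(1000):
            pipe.write(f"RECORD{i:08d}\n")
    os._exit(0)
else:
    # Parent process: the "reader job", consuming records as they arrive.
    with open(PIPE_PATH, "r") as pipe:
        records = sum(1 for _ in pipe)
    print(f"Reader consumed {records} records")
    os.wait()                      # reap the writer process
    os.remove(PIPE_PATH)           # tidy up the pipe
```

Note that opening a FIFO blocks until both a writer and a reader are connected, which is exactly why the JES2 JEC concurrency functions are needed to guarantee the two jobs execute together.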

In conclusion, the advancement in JES2 JEC processing simplifies batch scheduling and restart configuration, while the usage of UNIX named pipes leverages existing z/OS USS functionality, safeguarding good performance using a tried and tested process.

Finally, returning full circle to my initial observation of Change Management considerations when performing batch optimization initiatives; recently I worked with a customer I visited in 2001, where they considered and dismissed BatchPipes Version 2.  We piloted this new UNIX pipe facility in Q4 2016, in readiness for their Year End processing, where they finally delivered a much needed ~2 Hour reduction in their ~9 Hour Critical Path Year End batch process.  Sometimes patience is a virtue, assisted by a slight implementation tweak…

IBM Doc Buddy: System z Mobile Problem Diagnosis

Having worked with the IBM Mainframe over the last several decades, I have always found a need for quick access to error messages, for obvious reasons.  In the 1980’s, I would have a paper copy of the “most common” MVS messages I was likely to encounter.  In the 1990’s, the adoption of optical media and the introduction of BookManager allowed the transport of many more messages, for numerous products, on CD-ROM.  With the advent of higher speed Broadband, Wi-Fi and Mobile networks, I graduated to accessing BookManager on-line and eventually using the Mobile edition of LookAt.  So, isn’t it time for an IBM documentation app?

In August 2016, IBM introduced Doc Buddy, a no-charge mobile application that enables the retrieval of z Systems message documentation and provides the following values:

  • Enables looking up message documentation without Internet connections after the initial download
  • Improves your information experience
  • Accelerates the time you spend in resolving problems
  • Includes links to the relevant product Support Portals and supports calling a contact from the app

IBM Doc Buddy provides message documentation for products including z/OS, z/VM, TPF, DB2, CICS, IMS, ISPF, Tivoli OMEGAMON XE for Messaging for z/OS, IBM Service Management Unite, IBM Operations Analytics for z Systems, InfoSphere, et al.

Obviously, to make this app local, you need to download the relevant manuals to your Mobile device, and so this might generate storage capacity considerations.  However, once downloaded, this is a great tool for quick access to error messages, especially as there will be times where you can get a mobile signal to take a call, but have no or limited access to mobile data or Wi-Fi services.

I have used this app on both iOS and Android and it works great.  At the time I downloaded this app, there were fewer than 100 downloads on both the Apple and Google platforms.  Therefore, if you ever need to access System z error messages, give this app a go, as IBM have dropped support for LookAt.  It’s an awful lot easier than accessing paper manuals or firing up your PC to access a CD-ROM!

Are You Ready For z Systems Workload Pricing for Cloud (zWPC) for z/OS?

Recently IBM announced the z Systems Workload Pricing for Cloud (zWPC) for z/OS pricing mechanism, which can minimize the impact of new Public Cloud workload transactions on Sub-Capacity license charges.  Such benefits will be delivered where higher Public Cloud workload transaction volumes may cause a spike in machine utilization.  Of course, if this looks familiar and you have that feeling of déjà vu, this is a very similar mechanism to Mobile Workload Pricing (MWP)…

Put simply, zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms, for the usual MLC software suspects, namely z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public Cloud workloads are defined as transactions processed by named Public Cloud applications, identified as originating from a recognized Public Cloud offering, including but not limited to, Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, et al.

As per MWP, SCRT calculates the R4HA (Rolling 4 Hour Average) for Public Cloud transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The software program values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  Therefore SCRT determines the billable MSU peak for a given MLC software program on a CPC using the adjusted MSU values.  As per MWP, this will only be of benefit if the Public Cloud originated transactions generate a spike in the current R4HA.
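As a simplified, hedged illustration of that adjustment, and not the SCRT algorithm itself, the sketch below subtracts 60% of the Public Cloud R4HA MSU from the total eligible R4HA MSU for each reporting hour and then takes the peak of the adjusted values; the hourly figures are invented purely for the example.

```python
# Simplified illustration of the zWPC adjustment, not the SCRT implementation.
# For each reporting hour, 60% of the Public Cloud R4HA MSU is subtracted from
# the total Sub-Capacity eligible R4HA MSU, and the billable peak is then taken
# from the adjusted hourly values.  The hourly figures below are invented.

# (total R4HA MSU, Public Cloud R4HA MSU) per reporting hour - hypothetical
hourly_r4ha = [(850, 0), (900, 120), (1_050, 400), (980, 300), (870, 50)]

CLOUD_ADJUSTMENT = 0.60   # 60% of Public Cloud MSU is removed from the metric

adjusted = [total - CLOUD_ADJUSTMENT * cloud for total, cloud in hourly_r4ha]

print(f"Unadjusted billable peak: {max(total for total, _ in hourly_r4ha):.0f} MSU")
print(f"zWPC adjusted peak      : {max(adjusted):.0f} MSU")
```

In this invented example the unadjusted peak of 1,050 MSU, driven by a Public Cloud spike, falls back to 850 MSU once adjusted, which is precisely the scenario where zWPC delivers benefit.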

One of the major challenges for implementing MWP was identifying those transactions eligible for consideration.  Very quickly IBM identified this challenge and offered a Workload Manager (WLM) based solution, to simplify reporting for all concerned.  This WLM SPE (OA47042) introduced a new transaction level attribute in WLM classification, allowing for the identification of mobile transactions and their associated processor consumption.  These Reporting Attributes were classified as NONE, MOBILE, CATEGORYA and CATEGORYB.  Obviously IBM made allowances for future workload classifications, hence it would seem Public Cloud will supplement Mobile transactions.

In a previous z/OS Workload Manager (WLM): Balancing Cost & Performance blog post, we considered the merits of WLM for optimizing z/OS software costs, while maintaining optimal performance.  One must draw one’s own conclusions, but there seemed to be a strong case for WLM reporting to be included in the z/OS MLC Cost Manager toolkit.  The introduction of zWPC, being analogous to MWP, where reporting can be simplified with supplied and supported WLM function, indicates that intelligent and proactive WLM reporting makes sense.  Certainly for 3rd party Soft-Capping solutions, the ability to identify MWP and zWPC eligible transactions in real-time, proactively implementing MSU optimization activities seems mandatory.

The Workload X-Ray (WLXR) solution from zIT Consulting delivers this WLM reporting function, seamlessly integrating with their zDynaCap and zPrice Manager MSU optimization solutions.  Of course, there is always the possibility to create your own bespoke reports to extract the relevant information from SMF records and subsystem diagnostic data, for input to the SCRT process.  However, such a home-grown process will only work on a monthly reporting basis and not integrate with any Soft-Capping MSU management, which will ultimately control z/OS MLC costs.

In conclusion, from a big picture viewpoint, in the last 2 years or so, IBM have introduced several new Sub-Capacity pricing mechanisms to help System z Mainframe users optimize z/OS MLC costs, namely Mobile Workload Pricing (MWP), Country Multiplex Pricing (CMP) and now z Systems Workload Pricing for Cloud (zWPC).  In theory, at least one of these new pricing mechanisms should deliver benefit to the committed System z user, deploying this server for strategic and Mission Critical workloads.  With the undoubted strategic importance associated with Analytics, Blockchain, Cloud, DevOps, Mobile, Social, et al, the landscape for System z workloads is rapidly evolving and potentially impacting those sacrosanct legacy Mission Critical workloads.  Seemingly the realm of possibility exists that Cloud and Mobile originated transactions will dominate access to System z Mainframe System Of Record (SOR) data repositories, which generates a requirement to optimize associated MLC costs accordingly.  Of course, for some System z users, such Cloud and Mobile access might not be on today’s to-do list, but inevitably it’s on the horizon, and so why not implement the instrumentation ability ASAP!