The Ever Changing IBM Z Mainframe Disaster Recovery Requirement

With a 50+ year longevity, of course the IBM Z Mainframe Disaster Recovery (DR) requirement and associated processes have changed and evolved accordingly.  Initially, the primary focus would have been HDA (Head Disk Assembly) related, recovering data lost due to hardware (E.g. 23nn, 33nn DASD) failures.  It seems incredible in the 21st Century to consider the downtime and data loss associated with such an event, but these failures were commonplace into the early 1980’s.  Disk drive (DASD) reliability increased with the 3380 device in the 1980’s and the introduction of the 3990-03 Dual Copy capability in the late 1980’s eradicated the potential consequences of a physical HDA failure.

The significant cost of storage and CPU resources dictated that many organizations had to rely upon 3rd party service providers for DR resource provision.  Often this dictated a classification of business applications, differentiating between Mission Critical workloads and the rest, where DR backup and recovery processes would be application based.  Even the largest of organizations that could afford to duplicate CPU resource would have to rely upon the Ford Transit Access Method (FTAM), shipping physical tape from one location to another and performing proactive or more likely reactive data restore activities.  A modicum of database log-shipping over SNA networks automated this process for Mission Critical data, but successful DR provision was still a major consideration.

Even with the Dual Copy function, DASD storage resources had to be doubled for contingency purposes.  Therefore only the upper echelons of the business world (E.g. Financial Organizations, Telecommunications Suppliers, Airlines, Etc.) could afford the duplication of investment required for self-sufficient DR capability.  Put simply, a duplication of IBM Mainframe CPU, Network and Storage resources was required…

The 1990’s heralded a significant evolution in generic IT technology, including the IBM Mainframe.  The adoption of RAID technology for IBM Mainframe Count Key Data (CKD) provided an affordable solution for all IBM Mainframe users, where RAID-5(+) implementations became commonplace.  The emergence of ESCON/FICON channel connectivity provided the extended distance capability required to complement the emerging Parallel SYSPLEX technology, allowing IBM Mainframe servers and related storage to be geographically dispersed.  This allowed a greater number of IBM Mainframe customers to provision their own in-house DR capability, but many still relied upon physical tape shipment to a 3rd party DR services provider.

The final significant storage technology evolution was the Virtual Tape Library (VTL) structure, introduced in the mid-1990’s.  This technology simplified capacity optimization for physical tape media, while reducing the number of physical drives required to satisfy the tape workload.  These VTL structures would also benefit from SYSPLEX implementations, but for many IBM Mainframe users, physical tape shipment might still be required.  Even though the IBM Mainframe had supported IP connectivity since the early 1990’s, using this network capability to ship significant amounts of data was dependent upon public network infrastructures becoming faster and more affordable.  In the mid-2000’s, transporting IBM Mainframe backup data via extended network carriers, beyond the distance limits of FICON technologies, became more commonplace, once again changing the face of DR approaches.

More recently, Grid configurations of 2, 3 or more locations have become the utopia for the Global 1000 type business organization.  Numerous synchronized copies of Mission Critical, if not all, IBM Z Mainframe data are now maintained, reducing the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) DR criteria to several Minutes or less.

As with anything in life, learning from the lessons of history is always a good thing.  For each and every high profile IBM Z Mainframe user (E.g. 5000+ MSU), there are many more smaller users who face the same DR challenges.  Just as various technology races (E.g. Space, Motor Sport, Energy, et al) eventually deliver affordable benefit to a wider population, the same applies for the IBM Z Mainframe community.  The commonality is the challenges faced, where over the years, DR focus has either been application or entire business based, influenced by the technologies available to the IBM Mainframe user, typically dictated by cost.  However, the recent digital data explosion generates a common challenge for all IT users alike, whether large or small.  Quite simply, to remain competitive and generate new business opportunities from that priceless and unique resource, namely business data, organizations must embrace the DevOps philosophy.

Let’s consider the frequency of performing DR tests.  If you’re a smaller IBM Z Mainframe user, relying upon a 3rd party DR service provider, your DR test frequency might be 1-2 tests per year.  Conversely, if you’re a large IBM Z Mainframe user, deploying a Grid configuration, you might consider that your business no longer requires periodic DR tests.  This would be a dangerous thought pattern, because it was forever thus: SYSPLEX and Grid configurations only safeguard against physical hardware failure scenarios, whereas a logical error will proliferate throughout all data copies, whether 2, 3 or more…

Similarly, when considering the frequency of Business Application changes, for the archetypal IBM Z Mainframe user this might have been Monthly or Quarterly, perhaps with imposed change freezes due to significant seasonal or business peaks.  However, in an IT ecosystem where the IBM Z Mainframe is just another interconnected node on the network, a significantly increased frequency of Business Application changes arguably becomes mandatory.  Therefore, once again, if we consider our frequency of DR tests, how many per year do we perform?  In all likelihood, this becomes the wrong question!  A better statement might be, “we perform an automated DR test as part of our Business Application changes”.  In theory, the adoption of DevOps either increases the frequency of scheduled Business Application changes, or the organization embraces an “on demand” type approach…

We must then consider which IT Group performs the DR test.  In theory, it’s many groups, dictated by their technical expertise, whether Server, Storage, Network, Database, Transaction or Operations based.  Once again, if embracing DevOps, the Application Development teams need to be able to write and test code, while the Operations teams need to implement and manage the associated business services.  In such a model, there has to be a fundamental mind change, where technical Subject Matter Experts (SME) design and implement technical processes, which simplify the activities associated with DevOps.  From a DR viewpoint, this dictates that the DevOps process should facilitate a robust DR test, for each and every Business Application change.  Whether an organization is the largest or smallest of IBM Z Mainframe users is somewhat arbitrary; performing an entire system-wide DR test for an isolated Business Application change is not required.  Conversely, performing a meaningful Business Application test during the DevOps code test and acceptance process makes perfect sense.

Performing a meaningful Business Application DR test as part of the DevOps process is a consistent requirement, whether an organization is the largest or smallest IBM Z Mainframe user.  Although their hardware resource might differ significantly, where the largest IBM Z Mainframe user would typically deploy a high-end VTL (I.E. IBM TS77n0, EMC DLm 8n00, Oracle VSM, et al), the requirement to perform a seamless, agile and timely Business Application DR test remains the same.

While the IBM Z Mainframe is typically deployed as the System Of Record (SOR) data server, today’s 21st Century Business Application incorporates interoperability with Distributed Systems (E.g. Wintel, UNIX, Linux, et al) platforms.  This is a DR consideration, as IBM Z Mainframe data mostly resides in proprietary 3390 DASD subsystems, while Distributed Systems data typically resides in IP (NFS, NAS) and/or FC (SAN) filesystems.  However, the IBM Z Mainframe has leveraged from Distributed Systems technology advancements, where typical VTL Grid configurations utilize proprietary IP connected disk arrays for VTL data.  Ultimately a VTL structure will contain the “just in case” copy of Business Application backup data, the very data copy required for a meaningful DR test.  Wouldn’t it be advantageous if the IBM Z Mainframe backup resided on the same IP or FC Disk Array as Distributed Systems backups?

Ultimately the high-end VTL (I.E. IBM TS77n0, EMC DLm 8n00, Oracle VSM, et al) solutions are designed for the upper echelons of the business and IBM Z Mainframe world.  Their capacity, performance and resilience capabilities are significant, and by definition, so is the associated cost.  How easy or difficult might it be to perform a seamless, agile and timely Business Application DR test via such a high-end VTL?  Are there alternative options that any IBM Z Mainframe user can consider, regardless of their size, whether large or small?

The advances in FICON connectivity, x86/POWER servers and Distributed Systems disk arrays have allowed such technologies to be packaged in a cost efficient and small footprint IBM Z VTL appliance.  Their ability to connect to the IBM Z server via FICON connectivity, provide full IBM Z tape emulation and connect to ubiquitous IP and FC Distributed Systems disk arrays, positions them for strategic use by any IBM Z Mainframe user for DevOps DR testing.  Primarily, one consistent copy of enterprise wide Business Application data would reside on the same disk array, simplifying the process of recovering Point-In-Time backup data for DR testing.

On the one hand, for the smaller IBM Z user, such an IBM Z VTL appliance (E.g. Optica zVT) could, for the first time, allow them to simplify their DR processes with a 3rd party DR supplier.  They could electronically vault their IBM Z Mainframe backup data to their 3rd party DR supplier and activate a totally automated DR invocation, as and when required.  On the other hand, for DevOps processes, the provision of an isolated LPAR would allow the smaller IBM Z Mainframe user to perform a meaningful Business Application DR test, in-house, without impacting Production services.  Once again, simplifying the Business Application DR test process also applies to the largest of IBM Z Mainframe users, where leveraging such an IBM Z VTL appliance would simplify matters, without impacting the Grid configuration supporting their Mission Critical workloads.

In conclusion, there has always been commonality in DR processes for the smallest and largest of IBM Z Mainframe users, where the only tangible difference would have been budget related, as the largest IBM Z Mainframe user could, and in fact needed to, invest in the latest and greatest.  As always, sometimes there are requirements that apply to all, regardless of size and budget.  Seemingly DevOps is such a requirement, and the need to perform on-demand, seamless, agile and timely Business Application DR tests is mandatory for all.  From an enterprise wide viewpoint, perhaps a modicum of investment in an affordable IBM Z VTL appliance might be the last time an IBM Z Mainframe user needs to revisit their DR testing processes!

IBM Z Server: Best In Class For Availability – Does Form Factor Matter?

A recent ITIC 2017 Global Server Hardware and Server OS Reliability Survey classified the IBM Z server as delivering the highest levels of reliability/uptime, with ~8 Seconds or less of unplanned downtime per month.  This was the 9th consecutive year that such a statistic had been recorded for the IBM Z Mainframe platform.  This compares to ~3 Minutes of unplanned downtime per month for several other specialized server technologies, including IBM POWER, Cisco UCS and HP Integrity Superdome via the Linux Operating System.  Clearly, unplanned server downtime is undesirable and costly, impacting the bottom line of the business.  Industry Analysts state that ~80% of global businesses require 99.99% uptime, equating to ~52.5 Minutes downtime per year or ~8.66 Seconds per day.  In theory, only the IBM Z Mainframe platform exceeds this availability requirement, while IBM POWER, Cisco UCS and HP Integrity Superdome deliver borderline 99.99% availability capability.  The IBM Mainframe is classified as a mission-critical resource in 92 of the top 100 global banks, 23 of the top 25 USA based retailers, all 10 of the top 10 global insurance companies and 23 of the top 25 largest airlines globally…
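
For clarity, the arithmetic behind these availability figures is easy to verify; the following minimal Python sketch, using approximate year and day lengths, reproduces the quoted numbers:

  # Downtime budget implied by a given availability percentage
  def downtime_budget(availability_pct):
      unavailable = 1.0 - (availability_pct / 100.0)
      minutes_per_year = 365.25 * 24 * 60
      seconds_per_day = 24 * 60 * 60
      return unavailable * minutes_per_year, unavailable * seconds_per_day

  per_year, per_day = downtime_budget(99.99)
  print(f"99.99% uptime: ~{per_year:.1f} Minutes/year, ~{per_day:.2f} Seconds/day")
  # Yields ~52.6 Minutes per year and ~8.64 Seconds per day, aligning with the figures above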

The requirement for ever increasing amounts of corporate compute power is without doubt, satisfying the processing of ever increasing amounts of data created from digital sources, including Cloud, Mobile and Social, which require near real-time analytics to deliver meaningful information from these oceans of data.  Some organizations select x86 server technology to deliver this computing power requirement, either in their own Data Centre or via a 3rd party Cloud Provider.  However, with unplanned downtime characteristics that don’t meet the seeming de facto 99.99% uptime availability metric, can the growth in x86 server technology continue?  From many perspectives, Reliability, Availability & Serviceability (RAS), Data Security via Pervasive Encryption and best-in-class Performance and Scalability, you might think that the IBM Z Mainframe would be the platform of choice?  For whatever reason, this is not always the case!  Maybe we need to look at recent developments and trends in the compute power delivery market and second guess what might happen in the future…

Significant Cloud providers deliver vast amounts of computing power and associated resources, evolving their business models accordingly.  Such business models have many challenges, primarily uptime and data security related, convincing their prospective customers to migrate their workloads from traditional internal Data Centres into these massive rack provisioned infrastructures.  Recently Google has evolved from using Intel as its primary supplier of Data Centre CPU chips, to also including CPU chips from IBM and other semiconductor rivals.

In April 2016, Google declared it had ported its online services to the IBM POWER CPU chip and that its toolchain could output code for Intel x86, IBM POWER and 64-bit ARM cores at the flip of a command-line switch.  As part of the OpenPOWER and Open Compute Project (OCP) initiatives, Google, IBM and Rackspace are collaborating to develop an open server specification based on the IBM POWER9 architecture.  The OCP Rack & Power Project will dictate the size and shape or form factor for housing these industry standard rack infrastructures.  What does this mean for the IBM Z server form factor?

Traditionally and over the last decade or more, IBM has utilized the 24 Inch rack form factor for the IBM Z Mainframe and Enterprise Class POWER Systems.  Of course, this is a different form factor to the industry standard 19 Inch rack, which finally became the de facto standard for the ubiquitous blade server.  Unfortunately there was no tangible standard for a 19 Inch rack, generating power, cooling and other issues.  Hence the evolution of the OCP Rack & Power Standard, codenamed Open Rack.  Google and Facebook have recently collaborated to evolve the Open Rack Standard V2.0, based upon an external 21 Inch rack Form factor, accommodating the de facto 19 Inch rack mounted equipment.

How do these recent developments influence the IBM Z platform?  If you’re the ubiquitous global CIO, knowing your organization requires 99.99%+ uptime, delivering continuous business application change via DevOps and safeguarding corporate data with intelligent and system wide encryption, perhaps you still view the IBM Z Mainframe as a proprietary server with its own form factor?

As IBM have already demonstrated with their OpenPOWER offering, collaborating with Google and Rackspace, their 24 Inch rack approach can be evolved, becoming just another CPU chip in a Cloud (E.g. IaaS, PaaS) service provider environment.  Maybe the final evolution step for the IBM Z Mainframe is evolving its form factor to a ubiquitous 19 Inch rack format?  The intelligent and clearly defined approach of the Open Rack Standard makes sense and if IBM could deliver an IBM Z Server in such a format, it just becomes another CPU chip in the ubiquitous Cloud (E.g. IaaS, PaaS) service provider environment.  This might be the final piece of the jigsaw for today’s CIO, as their approach to procuring compute power might be based solely upon the uptime and data security metrics.  For those organizations requiring in excess of 99.99% uptime and fully compliant security, there only seems to be one choice, the IBM Z Mainframe CPU chip technology, which has been running Linux workloads since 2000!

IBM z14: Pervasive Encryption & Container Pricing

On 17 July 2017 IBM announced the z14 server as “the next generation of the world’s most powerful transaction system, capable of running more than 12 billion encrypted transactions per day.  The new system also introduces a breakthrough encryption engine that, for the first time, makes it possible to pervasively encrypt data associated with any application, cloud service or database all the time”.

At first glance, a cursory review of the z14 announcement might just appear as another server upgrade release, but that could be a costly mistake by the reader.  There are always subtle nuances in any technology announcement, while finding them and applying them to your own business can sometimes be a challenge.  In this particular instance, perhaps one might consider “Persuasive Encryption & Contained Pricing”…

When IBM releases a new generation of z Systems server, many of us look to the “feeds and speeds” data and ponder how that might influence our performance and capacity profiles.  IBM state that average z14 performance increases by ~10% compared with the z13, for 6-way servers and larger.  As per usual, there are software Technology Transition Offering (TTO) discounts ranging from 6% to 21% for z14 only sites.  However, in these times where workload profiles are rapidly changing and evolving, it’s sometimes easy to overlook that IBM have to consider the holistic position of the IBM Z world.  Quite simply, IBM has many divisions, Hardware, Software, Services, et al.  Therefore there has to be interaction between the hardware and software divisions and in this instance, IBM have delivered a z14 server that is security focussed, with their Pervasive Encryption functionality.

Pervasive Encryption provides a simple and transparent approach for z Systems security, enabling the highest levels of data encryption for all data usage scenarios, for example:

  • Processing: When retrieved from files and processed by applications
  • In Flight: When being transmitted over internal and external networks
  • At Rest: When stored in database structures or files
  • In Store: When stored in magnetic storage media

Pervasive Encryption simplifies and reduces the costs associated with protecting data by policy (I.E. Subset) or En Masse (I.E. All Of The Data, All Of the Time), achieving compliance mandates.  When considering the EU GDPR (European Union General Data Protection Regulation) compliance mandate, companies must notify relevant parties within 72 hours of first having become aware of a personal data breach.  Additionally, organizations can be fined up to 4% of annual global turnover or €20 Million (whichever is greater) for any GDPR breach, unless they can demonstrate that data was encrypted and keys were protected.
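
To illustrate the scale of that exposure, the “whichever is greater” rule is simple to model; a minimal Python sketch, using hypothetical turnover figures:

  # GDPR maximum fine: the greater of 4% of annual global turnover or EUR 20 Million
  def max_gdpr_fine(annual_global_turnover_eur):
      return max(0.04 * annual_global_turnover_eur, 20_000_000)

  # Hypothetical organizations, annual global turnover in EUR
  for turnover in (100_000_000, 500_000_000, 5_000_000_000):
      print(f"Turnover EUR {turnover:>13,}: maximum fine EUR {max_gdpr_fine(turnover):>13,.0f}")
  # EUR 100M turnover -> the EUR 20M floor applies; EUR 5B turnover -> 4% (EUR 200M) applies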

To facilitate this new approach for encryption, the IBM z14 infrastructure incorporates several new capabilities integrated throughout the technology stack, including Hardware, Operating System and Middleware.  Integrated CPU chip cryptographic acceleration is enhanced, delivering ~600% increased performance when compared with its z13 predecessor and ~20 times faster than competitive server platforms.  File and data set encryption is implemented within the Operating System (I.E. z/OS), safeguarding transparent and optimized encryption, without impacting application functionality or performance.  Middleware software subsystems including DB2 and IMS leverage from these Pervasive Encryption techniques, safeguarding that High Availability databases can be transitioned to full encryption without stopping the database, application or subsystem.

Arguably IBM had to deliver this type of security functionality for its top tier z Systems customers, as inevitably they would be impacted by compliance mandates such as GDPR.  Conversely, the opportunity to address the majority of external hacking scenarios with one common approach is an attractive proposition.  However, as always, the devil is in the detail, and given an impending deadline date of May 2018 for GDPR compliance, I wonder how many z Systems customers could implement the requisite z14 hardware and related Operating System (I.E. z/OS) and Subsystem (I.E. CICS, DB2, IMS, MQ, et al) upgrades before this date?  From a bigger picture viewpoint, Pervasive Encryption does offer the requisite functionality to apply a generic end-to-end process for securing all data, especially Mission Critical data…

Previously we have considered the complexity of IBM z Systems pricing mechanisms and in theory, the z14 announcement tried to simplify some of these challenges by building upon and formalizing Container Pricing.  Container Pricing is intended to greatly simplify software pricing for qualified collocated workloads, whether collocated with other existing workloads on the same LPAR, deployed in a separate LPAR or across multiple LPARs.  Container pricing allows the specified workload to be separately priced based on a variety of metrics.  New approved z/OS workloads can be deployed collocated with other sub-capacity products (I.E. CICS, DB2, IMS, MQ, z/OS) without impacting cost profiles of existing workloads.

As per most new IBM z Systems pricing mechanisms of late, there is a commercial collaboration and exchange required between IBM and their customer.  Once a Container Pricing solution is agreed between IBM and their customer, for an agreed price, an IBM Sales order is initiated, triggering the creation of an Approved Solution ID.  The IBM provided solution ID is a 64-character string representing an approved workload with an entitled MSU capacity, representing a Full Capacity Pricing Container used for billing purposes.

Previously we considered the importance of WLM for managing z/OS workloads and its interaction with soft-capping, and this is reinforced with this latest IBM Container Pricing mechanism.  The z/OS Workload Manager (WLM) enables Container Pricing using a resource classified as the Tenant Resource Group (TRG), defining the workload in terms of address spaces and independent enclaves.  The TRG, combined with a unique Approved Solution ID, represents the IBM approved solution.  As per standard SCRT processing, workload instrumentation data is collected, safeguarding that this workload profile does not directly impact the traditional peak LPAR Rolling Four-Hour Average (R4HA).  The TRG also allows the workload to be metered and optionally capped, independent of other workloads that are running collocated in the LPAR.

MSU utilization of the defined workload is recorded by WLM and RMF, subsequently processed by SCRT to subtract the solution MSU capacity from the LPAR R4HA.  The solution can then be priced independently, based on MSU resource consumed by the workload, or based upon other non-MSU values, specifically a Business Value Metric (E.g. Number of Payments).  Therefore Container Pricing is much simpler and much more flexible than the previous IBM collocated workload mechanisms, namely IWP and zCAP.
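
The billing arithmetic described above can be visualized with a simple sketch; this is not SCRT itself, just a minimal Python illustration of the concept, assuming hypothetical 5 Minute MSU interval samples for an LPAR and for the TRG defined container workload:

  # Illustrative only: peak Rolling 4-Hour Average (R4HA), with and without the
  # Tenant Resource Group (TRG) container workload MSU subtracted, as described above
  def peak_r4ha(samples, interval_minutes=5):
      window = (4 * 60) // interval_minutes             # Samples per 4-hour window
      return max(sum(samples[i - window:i]) / window    # Average MSU per window
                 for i in range(window, len(samples) + 1))

  lpar_msu = [400 + (50 if 100 <= i < 200 else 0) for i in range(288)]  # Hypothetical 24 hours
  trg_msu = [50 if 100 <= i < 200 else 0 for i in range(288)]           # Container workload MSU

  traditional = [l - t for l, t in zip(lpar_msu, trg_msu)]              # SCRT-style subtraction
  print(f"Peak R4HA including the container workload: {peak_r4ha(lpar_msu):.0f} MSU")
  print(f"Peak R4HA for traditional MLC billing:      {peak_r4ha(traditional):.0f} MSU")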

Container Pricing eliminates the requirement to commission specific new environments to optimize MLC pricing.  By deploying a standard IBM process framework, new workloads can be commissioned without impacting the R4HA of collocated workloads, being deployed as per business requirements, whether on the same LPAR, a separate LPAR, or dispersed across multiple LPAR structures.  Quite simply, the standard IBM process framework is the Approved Solution ID, associating the client based z/OS system environment to the associated IBM sales contract.

In this first iteration release associated with the z14 announcement, Container Pricing can be deployed in the following three solution based scenarios:

  • Application Development and Test Solution: Add up to 3 times more capacity to existing Development and Test environments without any additional monthly licensing costs; or create new LPAR environments with competitive pricing.
  • New Application Solution: Add new z/OS microservices or applications, priced individually without impacting the cost of other workloads on the same system.
  • Payments Pricing Solution: A single agreed value based price for software plus hardware or just software, via a number of payments processed metric, based on IBM Financial Transaction Manager (FTM) software.

IBM state z14 support for a maximum 2 million Docker containers in an associated maximum 32 TB memory configuration.  In conjunction with other I/O enhancements, IBM state a z14 performance increase of ~300%, when compared with its z13 predecessor.  Historically the IBM Z platform was never envisaged as being the ideal container platform.  However, its ability to seamlessly support z/OS and Linux, while the majority of mission critical Systems Of Record (SOR) data resides on IBM Z platforms, might just be a compelling case for microservices to be processed on the IBM Z platform, minimizing any data latency transfer.

Container Pricing for z/OS is somewhat analogous to the IBM Cloud Managed Services on z Systems pricing model (I.E. CPU consumption based).  Therefore, if monthly R4HA peak processing is driven by an OLTP application, or any other workload for that matter, any additional unused capacity in that specific SCRT reporting month can be allocated for no cost to other workloads.  Hence z/OS customers will be able to take advantage of this approach, processing collocated microservices or applications for a zero or nominal cost.

Country Multiplex Pricing (CMP) Observation: The z14 is the first new generation of IBM Z hardware since the introduction of the CMP pricing mechanism.  When a client first implements a Multiplex, IBM Z server eligibility cannot be older than two generations (I.E. N-2) prior to the most recently available server (I.E. N).  Therefore the General Availability (GA) of z14 classifies the z114 and z196 servers as previously eligible CMP machines.  IBM will provide a 3 Month grace period for CMP transition activities for these N-3 servers, namely the z114 and z196.  Quite simply, the first client CMP invoice must be submitted within 90 days of the z14 GA date of 13 September 2017, and no later than 1 January 2018.

In conclusion, Pervasive Encryption is an omnipresent z14 function integrated into every data lifecycle stage, which could easily be classified as Persuasive Encryption, simplifying the sometimes arduous process of classifying and managing mission-critical data.  As cybersecurity becomes an omnipresent clear and present danger, associated with impending and increasingly punitive compliance mandates such as GDPR, the realm of possibility exists to resolve this high profile corporate challenge once and for all.

Likewise, Container Pricing provides a much needed simple-to-use framework to drive MSU cost optimization for new workloads and could easily be classified as Contained Pricing.  The committed IBM Mainframe customer will upgrade their z13 server environment to z14, as part of their periodic technology refresh approach.  Arguably, those Mainframe customers who have been somewhat hesitant in upgrading from older technology Mainframe servers, might just have a compelling reason to upgrade their environments to z14, safeguarding cybersecurity challenges and evolving processes to contain z/OS MLC costs.

z Systems Software & Applications Currency: MQ Continuous & as a Service Delivery Models

In a rapidly-moving technology environment where DevOps is driving innovation for the rapid delivery of applications, are there any innovations for the related z Systems infrastructure (E.g. z/OS, CICS, DB2, IMS, MQ, WebSphere AS) that can deliver faster software and indeed firmware updates?

In April 2016, IBM announced MQ V9.0, delivering new and enhanced capabilities facilitating a Continuous Delivery and support model.  The traditional Long Term Support release offers the ubiquitous collection of aggregated fix-packs, applied to the delivered MQ V9.0 function.  The new Continuous Delivery release delivers both fixes and new functional enhancements as a set of modification-level updates, facilitating more rapid access to functional enhancements.

From a terminology viewpoint, the Continuous Delivery (CD) support model introduces new function and enhancements, made available by incremental updates within the same version and release.  Additionally, there will also be a Long Term Support (LTS) release available for deployments that require traditional medium-long term security and defect fixes only.  Some might classify such LTS fixes as Service Pack or Level Set patching.  The Continuous Delivery (CD) support approach delivers regular updates with a short-term periodic frequency for customers wanting to exploit the latest features and capabilities of MQ, without waiting for the next long term support release cycle.  In terms of timeframe, although there is no fixed time period associated with a CD or LTS release, typically CD is every few months, while LTS releases are every two years or so.  In actual IBM announcement terms, the latest MQ release was V9.0.3 in May 2017, meaning four MQ V9.0.n release activities in a ~13 Month period, approximately quarterly…

The benefits of this CD support model are obvious; those organizations who consider themselves to be leading-edge or “amongst the first” can leverage from new function ASAP, with a modicum of confidence that the code has a good level of stability.  Those customers with a more cautious approach can continue their ~2 year software upgrade cycle, applying the LTS release.  As always with software maintenance, there has never been a perfect approach.  Inevitably there will be High Impact or PERvasive (HIPER) and PTF-in-Error (PE) PTF requirements, as software function stability is forever evolving.  Therefore, arguably those sites leveraging from the latest function have always been running a Continuous Delivery software maintenance model; they just didn’t know when and how often!

Of all the major IBM z Systems subsystems, it is perhaps no surprise that MQ is the first to introduce this new software support model; for many reasons, primarily middleware and interoperability based, MQ needs a Continuous Delivery (CD) model.

At this stage, let’s remind ourselves of the important role that MQ plays in our IT infrastructures.  IBM MQ is a robust messaging middleware solution, simplifying and accelerating the integration of diverse applications and business data, typically distributed over multiple platforms and geographies.  MQ facilitates the assured, secure and reliable exchange of information between applications, systems, services, and files.  This exchange of information is achieved through the sending and receiving of message data through queues and topics, simplifying the creation and maintenance of business applications.  MQ delivers a universal messaging solution incorporating a broad set of offerings, satisfying enterprise requirements, in addition to providing 21st century connectivity for Mobile and the Internet of Things (IoT) devices.

Because of the centralized role that MQ plays, its pivotal role of interconnectivity might be hampered by the DevOps requirement of rapid application delivery, for both planned and unplanned business requirements.  Therefore even before the concept of MQ Continuous Delivery (CD) was announced in April 2016, there was already talk of MQ as a Service (MQaaS).

As per any major z Systems subsystem, traditionally IBM MQ was managed by a centralized messaging middleware team, collaborating with their Application, Database and Systems Management colleagues.  As per the DevOps methodology, this predictable and centralized approach does not lend itself to rapid and agile Application Development.  Quite simply, an environment management decentralization process is required, to satisfy the ever-increasing speed and diversity of application design and delivery requests.  By definition, MQ seamlessly interfaces with so many technologies, including but not limited to, Amazon Web Services, Docker, Google Cloud Platform, IBM Bluemix, JBoss, JRE, Microsoft Azure, Oracle Fusion Middleware, OpenStack, Salesforce, Spark, Ubuntu, et al.

The notional concept of MQ as a Service (MQaaS), delivers a capability to implement self-service portals, allowing Application Developers and their interconnected Line of Business (LOB) personnel to drive changes to the messaging ecosystem.  These changes might range from the creation or deletion of a messaging queue to the provision of a highly available and scalable topology for a new business application.  The DevOps and Application Lifecycle Management (ALM) philosophy dictates that the traditional centralized messaging middleware team must evolve, reducing human activity, by automating their best practices.  Therefore MQaaS can increase the speed at which the infrastructure team can deliver new MQ infrastructure to their Application Development community, while safeguarding the associated business requirements.

MQ provides a range of control commands and MQ Script Commands (MQSC) to support the creation and management of MQ resources using scripts.  Programmatic resource access is achievable via MQ Programmable Command Format (PCF) messages, once access to a queue manager has been established.  Therefore MQ administrators can create workflows that drive these processes, delivering a self-service interface.  Automation frameworks, such as UrbanCode Deploy (UCD), Chef and Puppet functions can be used to orchestrate administrative operations for MQ, to create and manage entire application or server environments.  Virtual machines, Docker containers, PureApplication System and the MQ Appliance itself can be used alongside automation frameworks to create a flexible and scalable ecosystem, for delivering the MQaaS infrastructure.
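
As a simple illustration of such a self-service workflow, the sketch below generates an MQSC command and passes it to the standard runmqsc control command via a Python wrapper; the queue manager and queue names are hypothetical, and a real MQaaS portal would wrap this in the appropriate approval, security and audit processes:

  # Minimal sketch: provision a local queue via MQSC, as a self-service portal back-end might
  import subprocess

  def provision_queue(queue_manager, queue_name, max_depth=5000):
      mqsc = f"DEFINE QLOCAL('{queue_name}') MAXDEPTH({max_depth}) REPLACE\n"
      # runmqsc reads MQSC commands from stdin and reports the outcome on stdout
      result = subprocess.run(["runmqsc", queue_manager],
                              input=mqsc, capture_output=True, text=True)
      print(result.stdout)
      return result.returncode == 0

  # Hypothetical names, as raised by an Application or Line of Business self-service request
  provision_queue("QM.DEV01", "APP1.REQUEST.QUEUE")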

Integrating the MQ as a Service concept within your DevOps and Application Lifecycle Management process delivers the following benefits:

  • Development Agility: Devolving traditional MQ administration activities to Application and Line of Business personnel, allows them to directly provision or update the associated messaging resources. This optimizes the overall process, while DevOps processes facilitates the requisite IT organization communication.
  • Process Standardization: Enabling a self-service interface to Application and Line of Business personnel delivers a single entry point for messaging configuration changes. This common interface will leverage from consistent routines and workflows to deploy the necessary changes, enforcing standards and consistency.
  • Personnel Optimization: Self-service interfaces used by Application and Line of Business personnel allow them to focus on core application requirements, primarily messaging and Quality of Service (QoS) related. In such an environment, the background process of performing the change is arbitrary; timely change implementation is the most important factor.
  • Environment Interoperability: An intelligent and automated self-service interface allows for dynamic provisioning of systems and messaging resources for development and testing purposes. This automation can simplify the promotion of changes throughout the testing lifecycle (E.g. Development, Test, Quality Assurance, Production, et al).  As and when required, such automation can provide capacity-on-demand type changes, dynamically scaling an application, as and when required, to satisfy ever-changing and unpredictable business requirements.

In conclusion, DevOps is an all-encompassing framework and one must draw one’s own conclusions as to whether software update frequency timescales will reduce for major subsystems such as CICS, DB2, IMS and even the underlying Operating System itself, namely z/OS.  Conversely, the one major z Systems subsystem with so many interoperability touch points, namely MQ, is the obvious choice for applying DevOps techniques to underlying subsystem software.  For MQ, the use of a Continuous Delivery (CD) software support model safeguards that the latest new function and bug fix capability is delivered in a timely manner for those organizations striving for an agile environment.  Similarly, the consideration of devolving traditional MQ systems administration activities, via intelligent, automated and self-service processes to key Application and Line of Business personnel makes sense, evolving a pseudo MQaaS capability.

Optimize Your System z ROI with z Operational Insights (zOI)

Hopefully all System z users are aware of the Monthly Licence Charge (MLC) pricing mechanisms, where a recurring charge applies each month.  This charge includes product usage rights and IBM product support.  If only it was that simple!  We then encounter the “Alphabet Soup” of acronyms, related to the various and arguably too numerous MLC pricing mechanism options.  Some might say that 13 is an unlucky number and in this case, a System z pricing specialist would need to know and understand each of the 13 pricing mechanisms in depth, safeguarding the lowest software pricing for their organization!  Perhaps we could apply the unlucky word to such a resource.  In alphabetical order, the 13 MLC pricing options are AWLC, AEWLC, CMLC, EWLC, MWLC, MzNALC, PSLC, SALC, S/390 Usage Pricing, ULC, WLC, zELC and zNALC!  These mechanisms are commercial considerations, but what about the technical perspective?

Of course, System z Mainframe CPU resource usage is measured in MSU metrics, where the usage of Sub-Capacity allows System z Mainframe users to submit SCRT reports, incorporating Monthly License Charges (MLC) and IPLA software maintenance, namely Subscription and Support (S&S).  We then must consider the Rolling 4-Hour Average (R4HA) and how best to optimize MSU accordingly.  At this juncture, we then need to consider how we measure the R4HA itself, in terms of performance tuning, so we can minimize the R4HA MSU usage, to optimize cost, without impacting Production if not overall system performance.

Finally, we then have to consider that WLC has a ~17-year longevity, having been announced in October 2000 and in that time IBM have also introduced hardware features to assist in MSU optimization.  These hardware features include zIIP, zAAP, IFL, while there are other influencing factors, such as HyperDispatch, WLM, Relative Nest Intensity (RNI), naming but a few!  The Alphabet Soup continues…

In summary, since the introduction of WLC in Q4 2000, the challenge for the System z user is significant.  They must collect the requisite instrumentation data, perform predictive modelling and fully comprehend the impact of the current 13 MLC pricing mechanisms and their interaction with the ever-evolving System z CPU chip!  In the absence of such a simple-to-use reporting capability from IBM, there are a plethora of 3rd party ISV solutions, which generally are overly complex and require numerous products, more often than not, from several ISVs.  These software solutions process the instrumentation data, generating the requisite metrics that allow an informed decision making process.

Bottom Line: This is way too complex; are there any Green Shoots of an alternative option?  Are there any easy-to-use data analytics based options for reducing MSU usage and optimizing CPU resources, which can then be incorporated into any WLC/MLC pricing considerations?

In February 2016 IBM launched their z Operational Insights (zOI) offering, as a new open beta cloud-based service that analyses your System z monitoring data.  The zOI objective is to simplify the identification of System z inefficiencies, while identifying savings options with associated implementation recommendations. At this juncture, zOI still has a free edition available, but as of September 2016, it also has a full paid version with additional functionality.

Currently zOI is limited to the CICS subsystem, incorporating the following functions:

  • CICS Abend Analysis Report: Highlights the top 10 abend types and the top 10 most frequently abending transactions for your CICS workload. The resulting output classifies which CICS transactions might abend and, as a consequence, waste processor time.  Of course, the System z Mainframe user will have to fix the underlying reason for the CICS abend!
  • CICS Java Offload Report: Highlights any transaction processing workload eligible for IBM z Systems Integrated Information Processor (zIIP) offload. The resulting output delivers three categories for consideration.  #1; % of existing workload that is eligible for offload, but ran on a General Purpose CP.  #2; % of workload being offloaded to zIIP.  #3; % of workload that cannot be transferred to a zIIP.
  • CICS Threadsafe Report: Highlights threadsafe eligible CICS transactions, calculating the switch count from the CICS Quasi Reentrant Task Control Block (QR TCB) per transaction and associated CPU cost. The resulting output identifies potential CPU savings by making programs threadsafe, with the associated CICS subsystem changes.
  • CICS Region CPU Constraint: Highlights CPU constrained regions. CPU constrained CICS regions have reduced performance, lower throughput and slower transaction response, impacting business performance (I.E. SLA, KPI).  From a high-level viewpoint, the resulting output classifies CICS Region performance to identify whether they’re LPAR or QR constrained, while suggesting possible remedial actions.

Clearly the potential of zOI is encouraging, being an easy-to-use solution that analyses instrumentation data, classifies the best options from a quick win basis, while providing recommendations for implementation.  Having been a recent user of this new technology myself, I would encourage each and every System z Mainframe user to try this no risk IBM z Operational Insights (zOI) software offering.

The evolution for all System z performance analysis software solutions is to build upon the comprehensive analysis solutions that have evolved over the last ~20 years, while incorporating intelligent analytics to classify data in terms of “Biggest Impact” and identify “Potential Savings”, evolving from MIPS measurement to BIPS (Biggest Impact Potential Savings) improvements!

IBM have also introduced a framework of IT Operations Analytics Solutions for z Systems.  This suite of interconnected products includes zOI, IBM Operations Analytics for z Systems, IBM Common Data Provider for z/OS and IBM Advanced Workload Analysis Reporter (IBM zAware).  Of course, if we lived in a perfect world, without a ~20 year MLC and WLC longevity, this might be the foundation for all of our System z CPU resource usage analysis.  Clearly this is not the case for the majority of System z Mainframe customers, but zOI does offer something different, with zero impact, both from a system impact and existing software interoperability viewpoint.

Bottom Line: Optimize Your System z ROI via zOI, Evolving From MIPS Measurement to BIPS Improvements!

System z: I/O Interoperability Evolution – From Bus & Tag to FICON

Since the introduction of the S/360 Mainframe in 1964 there has been a gradual evolution of I/O connectivity that has taken us from copper Bus & Tag to fibre ESCON and now FICON channels.  Obviously during this ~50 year period there have been exponentially more releases of Mainframe servers and indeed Operating Systems.  In this timeframe there have been 2 significant I/O technology milestones.  Firstly, in 1990, ESCON was part of the significant S/390 announcement (MVS/ESA), where migration to ESCON was a great benefit, if only for replacing the heavy and bulky copper Bus & Tag channels.  Secondly, even though FICON was released in the late 1990’s, in 2009 IBM announced that the z10 would be the last Mainframe server to support greater than 240 native ESCON channels.  Similarly IBM declared that the last zEnterprise servers to support ESCON channels would be the z196 and z114.  Each of these major I/O evolutions required a migration philosophy and not every I/O device would be upgraded to support either native ESCON or FICON channels.  How did customers achieve these mandatory I/O upgrades to safeguard IBM Mainframe Server and associated Operating System longevity?

In 2009 it was estimated ~20% of all Mainframe customers were using ESCON only I/O infrastructures, while only ~20% of all Mainframe customers were deploying a FICON only infrastructure.  Similarly ~33% of z9 and z10 systems were shipped with ESCON CVC (Block Multiplexor) and CBY (Byte Multiplexor) channels defined, while ~75% of all Mainframe Servers had native ESCON (CNC) capability.  From a dispassionate viewpoint, clearly the migration from ESCON to FICON was going to be a significant challenge, while even in this timeframe, there was still use of Bus & Tag channels…

One of the major strengths of the IBM Mainframe ecosystem is the partner network, primarily software (ISV) based, but with some significant hardware (IHV) providers.  From a channel switch viewpoint, we will all be familiar with Brocade, Cisco and McData, where Brocade acquired McData in 2006.  However, from a channel protocol conversion viewpoint, IBM worked with Optica Technologies to deliver a solution that would allow ESCON and Bus & Tag connected devices to be supported by the FICON only zBC12/zEC12 and future Mainframe servers (I.E. z13, z13s).  Somewhat analogous to the smartphone, where the user doesn’t necessarily know that an ARM processor might be delivering CPU power to their phone, sometimes even seasoned Mainframe professionals might inadvertently overlook that the Optica Technologies Prizm solution has been, or indeed is still, deployed in their System z Data Centre…

When IBM work with a partner from an I/O connectivity viewpoint, clearly IBM have to safeguard that said connectivity has the highest interoperability capability with bulletproof data exchange attributes.  Sometimes we might take this for granted with the ubiquitous disk and tape subsystem suppliers (I.E. EMC, HDS, IBM, Oracle), but for FICON conversion support, Optica Technologies was a collaborative partner for IBM.  Ultimately the IBM Hardware Systems Assurance labs deploy their proprietary System Assurance Kernel (SAK) processes to safeguard I/O subsystem interoperability for their System z Mainframe servers.  Asking that rhetorical question; when was the last time you asked your IHV for sight of their System Assurance Kernel (SAK) exit report from their collaboration with the IBM Hardware Systems Assurance labs, for the I/O subsystem you’re considering or deploying?  In conclusion, the SAK compliant, elegant, simple and competitively priced Prizm solution allowed the migration of tens if not hundreds of thousands of ESCON connections in thousands of Mainframe data centres globally!

With such a rich heritage of providing a valuable solution to the global IBM Mainframe install base, whether the smallest or largest, what would be next for Optica Technologies?  Obviously leveraging from their expertise in FICON channel support would be a good way forward.  With the recent acquisition of Bus-Tech by EMC and the eradication of the flexible MDL tapeless virtual tape offering, Optica Technologies are ideally placed to be that small, passionate and eminently qualified IHV to deliver a turnkey virtual tape solution for the smaller and indeed larger System z user.  The Optica Technologies zVT family leverages from the robust and heritage class Prizm technology, delivering an innovative family of virtual tape solutions.  The entry-level “Virtual Tape In A Box” zVT 3000i provides 2 FICON channel interfaces and 4 TB uncompressed internal RAID-5 disk space, seamlessly interfacing with all System z supported tape devices (I.E. 3490, 3590) and processes.  A single enterprise class zVT 5000-iNAS node delivers 2 FICON channel interfaces and NFS storage capacity from 8TB to 1PB in a single frame, with standard deduplication, compression, replication and encryption features.  The zVT 5000-iNAS is available with multi-node configuration support for additional scalability and resiliency.  For those customers wishing to deploy their own choice of NFS or FC storage subsystem, the zVT 5000-FLEX allows such connectivity accordingly.

In conclusion, sometimes it’s all too easy to take some solutions for granted, when they actually delivered a tangible and arguably priceless solution in the evolution of your organizations System z Mainframe server journey from ESCON, if not Bus & Tag to FICON.  Perhaps the Prizm solution is one of these unsung products?  Therefore, the next time you’re reviewing the virtual tape market place, why wouldn’t you seriously consider Optica Technologies, given their rich heritage in FICON channel interoperability?  Given that IBM chose Optica Technologies as their strategic partner for ESCON to FICON migration, seemingly even IBM might have thought “nobody gets fired for choosing…”!

System z Batch Optimization: Another Pipes Option?

Over the last 20 years or so I have encountered many sites looking for solutions to streamline their batch processing, only to find that sometimes they are their own worst enemy, because their cautious Change Management approach means they will not change or even recompile COBOL application source, unless absolutely forced to do so.  Sometimes VSAM file tuning is the answer, sometimes identifying the batch critical path, and on occasion, the answer is finding that key file or database processed on several or more occasions, which might benefit from parallelism.

BatchPipes was first introduced with MVS/ESA, allowing for data (E.g. BSAM, QSAM) to be piped between several jobs, allowing concurrent job processing, reducing the combined elapsed time of the associated job stream.  BatchPipes maintains a queue of records that are passed between a writer and reader.  The writer adds records to the back of a pipe queue and the reader processes them from the front.  This record level processing approach avoids any potential data set serialization issues when attempting to concurrently write and read records from the same physical data set.

The IBM BatchPipes feature has evolved somewhat and BMC have offered similar functionality with their initial Data Accelerator and Batch Accelerator offering, subsequently superseded by MainView Batch Optimizer Job Optimizer Pipes.  It seems patently obvious that to derive the parallelism benefit offered by BatchPipes, the reader and writer jobs need to be processed together.  For many, such a consideration has been an issue that has eliminated any notion of BatchPipes implementation.  Other considerations include a job failure in the BatchPipes process, where restart and recovery might include several jobs, as opposed to one.  Therefore widespread usage of BatchPipes has been seemingly limited.

The first step for any BatchPipes consideration is identifying whether there is any benefit.  IBM provide a BatchPipes SMF analysis tool to determine the estimated time savings and benefits that can be achieved with BatchPipes.  This tool reads SMF record types 14, 15 and 30 (Subtypes 1, 4 and 5) to analyse data set read and write activity, reconciling with the associated processing job.  As an observation, sometimes a data source might have a different data set name, be both permanent and temporary, while consuming significant I/O and CPU resource for processing.  Such data source reconciliation can easily be achieved, as the record and associated I/O count for such a data source is the same, for entire data set processing operations.  The analysis tool will identify the heavy I/O jobs and be a great starting point for any analysis activities.
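
By way of illustration only, and assuming the relevant SMF 14, 15 and 30 fields have already been extracted to a simple CSV file (a hypothetical format, not the IBM supplied analysis tool), a few lines of Python demonstrate the reconciliation approach, ranking the heavy I/O jobs and matching data sources by record count:

  # Hypothetical CSV extract of SMF 14/15 data set activity: job,dataset,access,records,excps
  import csv
  from collections import defaultdict

  job_excps = defaultdict(int)      # Total EXCP count per job
  by_records = defaultdict(list)    # Data set usage grouped by record count

  with open("smf_dataset_activity.csv") as f:
      for row in csv.DictReader(f):
          job_excps[row["job"]] += int(row["excps"])
          by_records[int(row["records"])].append((row["job"], row["dataset"], row["access"]))

  # The heavy I/O jobs are the starting point for any BatchPipes analysis
  for job, excps in sorted(job_excps.items(), key=lambda x: x[1], reverse=True)[:10]:
      print(f"{job:<8} {excps:>12} EXCPs")

  # A data source written and read with identical record counts is a candidate pipe connection
  for records, usage in by_records.items():
      if len({access for _, _, access in usage}) > 1:
          print(records, usage)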

UNIX users will be very familiar with the concept of pipes, where a UNIX pipeline is a sequence of processes chained together by their standard streams, where the output of each process (stdout) feeds directly as input (stdin) to the next one.  Wouldn’t it be good if there was a hybrid approach to BatchPipes, using a combination of standard z/OS and extended UNIX Systems Services (USS)?

With z/OS 2.2, JES2 introduced new functions to facilitate the scheduling of dependent batch jobs.  These functions comprise Job Execution Control (JEC) and can be utilized by making use of the new JOBGROUP and related Job Control Language (JCL) statements.  The primary goal of JEC is to provide an easy-to-use control mechanism, allowing complex batch jobs to be processed in inter-related constituent pieces.  Presuming that these constituent pieces can be run in parallel, improved throughput can be achieved by exploiting the concurrency functions provided by JEC.

UNIX named pipes can be used to pass data between simultaneously executing jobs, where the UNIX pipe can either be temporary or permanent.  One or more processes can connect to a UNIX named pipe, write to it, and read from it, as and when required.  Unlike most types of z/OS UNIX files, data written to a named pipe is always appended to existing data rather than replacing existing data.  Therefore, for z/OS FTP as an example, the STOR command is equivalent to the APPE command when UNIXFILETYPE=FIFO is configured.  This UNIX pipe facility, managed by the JES2 JEC functions, can be leveraged to provide benefit for multiple step job processing and concurrent job processing, with the overall benefit of a reduction in overall batch stream elapsed time.
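
Conceptually, this behaviour is straightforward to demonstrate; the following minimal Python sketch (using threads in place of two concurrently scheduled batch jobs) creates a FIFO, with a writer appending records while a reader consumes them:

  # Minimal illustration of a named pipe (FIFO) shared by a concurrent writer and reader
  import os, threading

  pipe_path = "/tmp/batch.pipe"                  # Hypothetical path; a FIFO can be temporary or permanent
  if not os.path.exists(pipe_path):
      os.mkfifo(pipe_path)                       # Equivalent to the USS 'mkfifo' command

  def writer():                                  # Stands in for the writing batch job
      with open(pipe_path, "w") as pipe:         # Open blocks until a reader also opens the FIFO
          for i in range(1, 1001):
              pipe.write(f"RECORD{i:07d}\n")     # Data is appended, never replaced

  def reader():                                  # Stands in for the concurrently scheduled reading job
      with open(pipe_path, "r") as pipe:
          count = sum(1 for _ in pipe)           # Consumes records as the writer produces them
      print(f"Reader processed {count} records")

  jobs = [threading.Thread(target=writer), threading.Thread(target=reader)]
  for j in jobs: j.start()
  for j in jobs: j.join()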

In conclusion, the advancement in JES2 JEC processing simplifies the batch scheduling and restart configuration processing, while the usage of UNIX named pipes leverages from existing z/OS USS functionality, safeguarding good performance using a tried and tested process.

Finally, returning full circle to my initial observation of Change Management considerations when performing batch optimization initiatives; recently I worked with a customer I visited in 2001, where they considered and dismissed BatchPipes Version 2.  We piloted this new UNIX pipe facility in Q4 2016, in readiness for their Year End processing, where they finally delivered a much needed ~2 Hour reduction in their ~9 Hour Critical Path Year End batch process.  Sometimes patience is a virtue, assisted by a slight implementation tweak…

The Software Defined Mainframe (SDM): An Alternative Approach?

Some consider the IBM Mainframe to be the last bastion of proprietary computing platforms, for obvious reasons, namely the CPU server architecture and the single manufacturer, IBM.  The historical and legacy ability of said IBM Mainframe to transform Data Processing into Information Technology, while still participating in the Digital Era, is without doubt.  However, for many, the complicated and perceived ultra-expensive world of software pricing generates concern, largely based upon Fear, Uncertainty and Doubt (FUD), which might have generated years if not decades of under investment for those organizations with an IBM Mainframe.

Having worked with the IBM Mainframe for 35+ years, I have gained a knowledge that allows cost optimization and contemporaneous usability, which given the importance of the IBM Mainframe platform to IBM from a revenue viewpoint, will safeguard that the IBM Mainframe will have a long future.  However, the last decade or so has seen a rapid evolution in Open Source, DevOps, Enterprise Class Support for Distributed Platforms, Mobile and Cloud computing, et al, potentially generating an opportunity for the global IBM Mainframe user base to once again consider the platform’s value proposition…

Let’s consider this server platform choice from a business viewpoint.  On the one hand, there are the well versed market statements, where 80%+ of corporate data resides or originates from IBM Mainframes, while IBM Mainframes enable 70%+ of global commercial transactions, et al.  In recent times there are global businesses leveraging from the cloud or Linux Open Source technologies to run their business.  For instance, Netflix reportedly runs its media on demand business via the Amazon Web Services (AWS) cloud, while said platform is facilitating a Data Centre reduction from 34 to 4 for General Electric (GE).  There are many other such “early adopters” of this commodity infrastructure provision opportunity, including Capital One, Hertz and Juniper, naming but a few.

Quite simply, the power of Mobile processors, primarily ARM, and their supporting software ecosystem empower each and every potential consumer with a palm-sized smart computing platform, while the power and supporting software ecosystem of x86 processors generate an environment for each and every global business, whether mature or not yet launched, to deliver an eminently usable and scalable IT Infrastructure for their business model.

Of course, the IBM Mainframe can do this; it always has been at the forefront of IT architectures and always will be, but for the “naysayers”, its perceived high acquisition and running costs are always an easy target.  As somebody much cleverer than I once said, timing is everything, and we’re now encountering a “golden sunset” for those Mainframe Baby Boomers, just like myself, who will retire in the next decade or so.  Recently I was talking with a large IBM Mainframe customer, who stated “we’re going to lose 1500 years of IBM Mainframe experience in the next 10 years; how can you replace that resource easily?”  Let’s just think about that metric: ~50 people with an average of ~30 years’ experience, all of whom will retire in a short time frame!  You must draw your own conclusions as to that conundrum: how do you replace that level of experience?

In conclusion, no matter what IBM delivers from an IBM System z viewpoint, there is no substitute for experience and skill, and no company, IBM included, has an answer to skills provision.  In the last 10-20 years, Outsourcing or Managed Services has provided an alternative approach for some companies, but even this option has finite resources.  If we consider the CFO viewpoint, where the bottom line is the only true financial metric, it’s easy to envisage a situation where many companies consider an alternative to the IBM Mainframe platform, from both a cost and a viability viewpoint.  As a lifelong IBM Mainframe champion, and as previously stated, there will always be a solution for safeguarding the longevity and viability of the IBM Mainframe for any Medium to Large sized business.  However, now is the time to act: embrace the new Open Source, DevOps and Hybrid Cloud opportunities to transition from a Baby Boomer to a Millennial Mainframe workforce!

Is there an alternative approach, and what is the Software Defined Mainframe (SDM)?

Put simply, SDM is a technology from LzLabs enabling the migration of mission-critical workloads from legacy IBM Mainframe environments to x86 Linux platforms.  Put another way, LzLabs have developed a managed software container that provides enterprises with a viable way to lift and shift applications from IBM Mainframes into Red Hat Linux or Cloud environments.  At first glance, the primary keyword here is container; there was a time when the term container might have been foreign to the System z Mainframe, but with LinuxONE and z/VM, Docker and KVM are now commonplace and accepted functions.  The primary considerations for any platform migration would include:

  • Seamless Migration: The LzLabs Software Defined Mainframe (SDM) ensures the key capabilities of screen handling, transaction management, recovery and concurrency are preserved without changes to the applications. LzOnline is capable of processing thousands of online customer transactions per second using commercial off-the-shelf hardware.
  • Major Subsystem Compatibility: The LzLabs Software Defined Mainframe (SDM) safeguards 100% compatibility with existing job control syntax, and also enables job submission via network-connected nodes that support conventional job entry protocols (see the sketch after this list). LzBatch provides a full spool capability that enables output to be managed and routed in familiar ways. Use of conventional job submission models, with standard job control, also means existing batch scheduling can operate with minimal changes.  Other solutions include LzRelational for Relational Database Management System (RDBMS) support and LzSecure, an authentication and authorization subsystem using security rules migrated from the incumbent IBM Mainframe platform.
  • Application Code Stability: An innovative approach avoids the requirement to recompile or rewrite legacy COBOL or PL/I application source code. Leveraging functionality delivered by Cobol-IT and Eranea, a simple and straightforward process can also convert, and potentially modernize, existing application source code to Java.
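
For context, “conventional job entry protocols” typically means the familiar FTP-to-JES style interface.  The following Python sketch shows that style of submission using SITE FILETYPE=JES, a long-standing z/OS FTP server mechanism; whether an SDM environment accepts exactly the same commands is an assumption, and the host name, credentials and JCL shown are purely illustrative:

  from ftplib import FTP
  import io

  HOST, USER, PASSWORD = "host.example.com", "ibmuser", "secret"   # hypothetical values

  JCL = (
      "//MYJOB    JOB (ACCT),'SAMPLE',CLASS=A,MSGCLASS=X\n"
      "//STEP1    EXEC PGM=IEFBR14\n"
  )

  ftp = FTP(HOST)
  ftp.login(USER, PASSWORD)
  ftp.sendcmd("SITE FILETYPE=JES")                   # switch the FTP session into job submission mode
  reply = ftp.storlines("STOR myjob.jcl", io.BytesIO(JCL.encode("ascii")))
  print(reply)                                       # typically echoes the assigned job identifier
  ftp.quit()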

The realm of possibility exists; there are likely to be a number of existing IBM Mainframe users facing challenges, whether a retiring workforce or back-level application code.  The Software Defined Mainframe (SDM) solution provides them with a potential option for simplifying a transition process, with seemingly minimal risk, while eradicating any significant dependence on another Distributed Systems platform supplier during the arduous application source and data migration process.

From my viewpoint, I hope that this innovative LzLabs approach is a wake-up call for IBM themselves, who continue to deliver a strategic Enterprise Class System z platform with long-term challenges that are primarily cost based, notably the intricate and over-complicated sub-capacity software pricing structure.  Without doubt, any new workload can easily be accommodated for low cost via the recent LinuxONE offering, but somewhere along the line, IBM perhaps overlooked a number of Small to Medium sized customers, who once might have used entry level or plug-compatible platforms, including but not limited to the S/390 Integrated Server, MP3000, FLEX-ES zFrame, T3 Liberty, et al.  Equally, from a dispassionate viewpoint, I welcome the competition of the LzLabs Software Defined Mainframe (SDM) offering and I would encourage all CIO and indeed other CxO personnel to consider the merits of this solution.

z/VM: The Most Flexible System z Operating System?

When considering IBM System z Operating Systems, z/OS is typically regarded as the flagship product, delivering best-of-breed features, including but not limited to performance, reliability, availability, security and capacity.  Therefore it is easy to overlook the flexible virtualization capabilities of z/VM, which delivers the architectural foundation for the increasingly attractive LinuxONE offering.  Quite simply, the fundamental strength of z/VM is its ability to allow hundreds if not thousands of virtual machines to share system resources with high levels of resource utilization.  The recent release of z/VM V6.4 provides even greater levels of scalability, security, resource optimization and efficiency to create opportunities for cost savings, while providing a robust foundation for cloud computing on z Systems servers.

Major technical highlights of z/VM 6.4 include:

  • Simultaneous MultiThreading (SMT) technology extends per-core capacity growth beyond single-thread performance for Linux on z Systems running on an IBM Integrated Facility for Linux (IFL) specialty engine on a z13, z13s or LinuxONE server.
  • Enhanced Real & Guest Virtual Memory Support. The maximum amount of real storage supported by z/VM increases from 1 to 2 TB, whereas the maximum supported virtual memory for a single guest remains at 1 TB.  Maintaining the same virtual-to-real memory ratio, doubling the real memory doubles the active virtual memory that can be used effectively.  This virtual memory can be sourced from an increased number of virtual machines and/or larger virtual machines, delivering greater leverage of white space.
  • Surplus CPU Power Distribution Improvement. Virtual machines not utilizing all of their entitled CPU power, determined by their share setting, generate “surplus CPU power”.  This surplus CPU resource can be distributed to other virtual machines in proportion to their share settings, managed independently across virtual machines for each processor type, namely General Purpose (GP), zIIP, IFL, et al; a conceptual sketch follows this list.
  • Guest Large Page Support. z/VM 6.4 now includes support for Enhanced Dynamic Address Translation (EDAT), allowing a guest machine to exploit large (1 MB) pages.  Larger page sizes decrease the amount of guest memory needed for DAT tables, therefore decreasing the overhead required to perform address translation.  In all cases, guest memory is mapped into 4 KB pages at the host level.
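
To make the proportional distribution concept concrete, here is a minimal Python sketch; it is a conceptual model only, not the actual z/VM scheduler algorithm, and the guest names and share values are hypothetical:

  def distribute_surplus(surplus, shares):
      """Apportion surplus CPU (arbitrary units) in proportion to relative shares."""
      total = sum(shares.values())
      return {guest: surplus * share / total for guest, share in shares.items()}

  # Hypothetical IFL guests able to consume more CPU, with their relative share settings
  ifl_shares = {"LINUX01": 100, "LINUX02": 200, "LINUX03": 100}

  # 40 units of surplus IFL capacity left unused by guests below their entitlement
  print(distribute_surplus(40, ifl_shares))
  # {'LINUX01': 10.0, 'LINUX02': 20.0, 'LINUX03': 10.0}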

From a Linux environment viewpoint, z/VM V6.4 is a supported environment using IBM Dynamic Partition Manager for Linux-only systems with SCSI storage.  This simplifies system administration tasks, providing a more positive experience for those with limited System z Mainframe administration skills.  IBM Wave Version 1 Release 2 is now included in z/VM V6.4 as a priced feature, simplifying the task of administering a z/VM environment.  Using Dynamic Partition Manager, an inexperienced z/VM technician can create a z/VM partition in ~10 Minutes!

Supporting today’s agile application development and hybrid cloud implementations, z/VM and LinuxONE virtual servers can be natively managed using OpenStack open cloud architecture-based interfaces, namely IBM OpenStack for z Systems.  OpenStack is an Infrastructure as a Service (IaaS) cloud computing open source project, managed by the OpenStack Foundation.  With the adoption of OpenStack as part of the IBM cloud strategy, z/VM drivers provide OpenStack enablement for z/VM virtual machines running Linux on z Systems and LinuxONE.  Open standards such as OpenStack enable enterprises to be more agile, resolving potential issues such as vendor lock-in, technical expert recruitment, long application development cycles and security challenges.
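
As an indicative sketch of how this OpenStack enablement is consumed by an administrator or automation tooling, the following Python fragment uses the generic openstacksdk client to provision a Linux guest; the cloud name, image, flavor and network identifiers are hypothetical, and the exact resources exposed by a given z/VM OpenStack configuration will differ:

  import openstack

  # Connect using a named cloud entry from clouds.yaml (hypothetical name)
  conn = openstack.connect(cloud="zvm-region")

  # Look up hypothetical resources published by the z/VM OpenStack drivers
  image = conn.compute.find_image("rhel-s390x")
  flavor = conn.compute.find_flavor("m1.small")
  network = conn.network.find_network("guest-lan")

  # Request a new Linux virtual machine; z/VM performs the actual guest provisioning
  server = conn.compute.create_server(
      name="linux-guest-01",
      image_id=image.id,
      flavor_id=flavor.id,
      networks=[{"uuid": network.id}],
  )
  server = conn.compute.wait_for_server(server)
  print(server.name, server.status)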

The next evolution of z/VM cloud enablement technology is the OpenStack Liberty based Cloud Management Appliance (CMA), available for z/VM 6.3 and 6.4.  z/VM installations wanting to deploy cloud based solutions beyond Cloud Manager with OpenStack for z Systems should utilize the cloud enablement support provided by this CMA, which replaces the IBM Cloud Manager with OpenStack for System z solution, withdrawn from marketing in June 2016.

The z/VM hypervisor extends the capabilities of z Systems and LinuxONE environments from the standpoint of sharing hardware assets, virtualization facilities and communication resources.  In conjunction with IBM Wave, z/VM makes it easier to derive maximum value from large-scale virtual server hosting on z Systems and LinuxONE.  These benefits include software and personnel savings, operational efficiency, power savings and optimal qualities of service.  The z/VM virtualization technology is designed to enable organizations to run hundreds to thousands of Linux servers on a single System z Mainframe footprint, alongside other System z Operating Systems, such as z/OS and z/VSE, or as a large-scale enterprise LinuxONE server solution.

Advanced virtualization features like multisystem virtualization and live guest relocation with z Systems, LinuxONE, z/VM, and Linux on z Systems or LinuxONE help to provide an efficient infrastructure for deploying private clouds to support workloads that scale both horizontally and vertically at a low total cost of ownership.

Although some might consider z/OS to be the flagship IBM System z Mainframe Operating System, arguably z/VM is the industry standard for optimal resource virtualization for numerous Operating System deployments.

IBM Doc Buddy: System z Mobile Problem Diagnosis

Having worked with the IBM Mainframe over the last several decades, I have always found a need for quick access to error messages, for obvious reasons.  In the 1980’s, I would have a paper copy of the “most common” MVS messages I was likely to encounter.  In the 1990’s, the adoption of optical media and the introduction of BookManager allowed the transport of many more messages, for numerous products, on CD-ROM.  With the advent of higher speed Broadband, Wi-Fi and Mobile networks, I graduated to accessing BookManager on-line and eventually using the Mobile edition of LookAt.  So, isn’t it time for an IBM documentation app?

In August 2016, IBM introduced Doc Buddy, a no-charge mobile application that enables the retrieval of z Systems message documentation and provides the following benefits:

  • Enables looking up message documentation without Internet connections after the initial download
  • Improves your information experience
  • Accelerates the time you spend in resolving problems
  • Includes links to the relevant product Support Portals and supports calling a contact from the app

IBM Doc Buddy provides message documentation for products including z/OS, z/VM, TPF, DB2, CICS, IMS, ISPF, Tivoli OMEGAMON XE for Messaging for z/OS, IBM Service Management Unite, IBM Operations Analytics for z Systems, InfoSphere, et al.

Obviously, to make this app work locally, you need to download the relevant manuals to your Mobile device, which might generate storage capacity considerations.  However, once downloaded, this is a great tool for quick access to error messages, particularly as there will be times when you can get a mobile signal to take a call, but have no or limited access to mobile data or Wi-Fi services.

I have used this app on both iOS and Android and it works great.  At the time I downloaded this app, there were fewer than 100 downloads on both the Apple and Google platforms.  Therefore, if you ever need to access System z error messages, give this app a go, as IBM have dropped support for LookAt.  It’s an awful lot easier than accessing paper manuals or firing up your PC to access a CD-ROM!