The Open Systems Adapter (OSA): Delivering ~25 Years of IBM Mainframe IP Connectivity

Recently in my day-to-day activities I encountered a 3172 controller and was reminded of my first such encounter, back in 1992.  This got me thinking: 25 years of IBM Mainframe IP connectivity!  The IBM 3172 Interconnect Controller allowed LAN-to-Mainframe interconnection and was the pioneering technology allowing IP data off-load activities.  Historically Mainframe data transfer operations, namely CCW I/O, were dependent on a physical channel; the 3172 was a stepping stone to the Open Systems Adapter (OSA) card in 1994, quickly superseded by the OSA-2 card in 1995.  From a performance viewpoint, the OSA/OSA-2 cards matched maximum ESCON speeds of 17 MB/s.

However, the introduction of the OSA-Express technology in 1999 dramatically increased throughput to ~333 MB/s.  The OSA-Express technology bypasses CCW channel-based I/O processing, connecting directly to the Self-Timed Interconnect (STI) bus of Generation 6 (retrofit to Generation 5) S/390 Mainframes.  Data is transferred directly between high-speed Mainframe memory and the OSA-Express adapter I/O port, across the STI bus, with no intervening components or processing to slow down the data interchange.  This bus-based I/O, a first for IBM Mainframe computing, significantly increases data transfer speeds, eliminating inefficiencies associated with intermediary components.

Additionally, IBM developed a totally new I/O scheme for the OSA-Express adapter.  Queued Direct I/O (QDIO) is a highly optimized queuing-based data interchange mechanism, leveraging the message queuing expertise IBM acquired with their multi-platform MQSeries middleware solution.  The QDIO-specific S/390 hardware instruction for G5/G6 machines delivered an application-to-OSA signalling scheme capable of handling the high-volume, multimedia data transfer requirements of 21st Century web applications.  Where might we be without the 3172 Interconnect Controller and the MQSeries messaging solution?

Since OSA-Express2, the channel types supported have largely remained unchanged:

  • OSD: Queued Direct I/O (QDIO), a highly efficient data transfer architecture, dramatically improving IP data transfer speed and efficiency.
  • OSE: Non-QDIO, sets the OSA-Express card to function in non-QDIO mode, bypassing all of the advanced QDIO functions.
  • OSC: OSA-ICC, available with IBM Mainframes supporting GbE, eliminating the requirement for an external console controller, simplifying HMC and z/OS system console access, while introducing TN3270E connectivity.
  • OSN: OSA for NCP, supporting the Network Control Program (NCP) running under IBM Communication Controller for Linux (CCL), eradicating the requirement for physical 3745/3746 Front End Processors.  Superseded by:
  • OSM: OSA-Express for zManager, provides Intranode Management Network (INMN) connectivity from System z to zManager functions.
  • OSX: OSA-Express for zBX, provides connectivity and access control to the IntraEnsemble Data Network (IEDN), which is managed by the Unified Resource Manager (URM) function.

Returning to my original observation, it's sometimes hard to reconcile finding a ~25 year old 3172 Controller in a Data Centre environment, preparing for a z14 upgrade!  In conjunction with the z14 announcement, OSA-Express6S promised an Ethernet technology refresh for use in the PCIe I/O drawer and continues to be supported by the 16 GBps PCIe Gen3 host bus.  The 1000BASE-T Ethernet feature supports copper connectivity, complemented by 10 Gigabit Ethernet (10 GbE) and Gigabit Ethernet (GbE) features for single-mode and multi-mode fibre optic environments.  The OSA-Express6S 1000BASE-T feature will be the last generation to support 100 Mbps link speed connections.  Future OSA-Express 1000BASE-T features will only support 1 Gbps link speed operation.

Of course, OSA-Express technology exposes the IBM Z Mainframe to the same security challenges as any other server node on the IP network, so as well as discussing Pervasive Encryption with this customer, we also reviewed the increased security features of the OSA-Express6S adapter:

  • OSA-ICC Support for Secure Sockets Layer: when configured as an integrated console controller CHPID type (OSC) on the z14, supports the configuration and enablement of secure connections using the Transport Layer Security (TLS) protocol versions 1.0, 1.1 and 1.2.  Server-side authentication is supported using either a self-signed certificate or a customer supplied certificate, which can be signed by a customer-specified certificate authority.  The certificates used must have an RSA key length of 2048 bits, and must be signed using SHA-256.  This function negotiates an AES-128 cipher for the session key (an illustrative TLS policy sketch follows this list).
  • Virtual Local Area Network (VLAN): takes advantage of the Institute of Electrical and Electronics Engineers (IEEE) 802.1Q standard for virtual bridged LANs.  VLANs allow easier administration of logical groups of stations that communicate as though they were on the same LAN.  In the virtualized environment of the IBM Z server, numerous TCP/IP stacks can co-exist, potentially sharing OSA-Express features.  VLAN provides a greater degree of isolation by allowing contact with a server from only the set of stations that comprise the VLAN.
  • QDIO Data Connection Isolation: provides a mechanism for security regulatory compliance (E.g. HIPAA), delivering network isolation between the instances that share physical network connectivity, as per installation defined security zone boundaries.  It isolates a QDIO data connection on an OSA port by forcing traffic to flow to the external network, safeguarding that all communication flows only between an operating system and the external network.  This feature is provided with implementation flexibility for both the z/VM and z/OS operating systems.
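
As a point of reference, the following minimal Python sketch builds an equivalent server-side TLS policy (TLS 1.2 only, an RSA-2048/SHA-256 signed certificate and AES-128 session ciphers).  It is purely illustrative of the policy described above; the certificate file names are hypothetical and this is not the OSA-ICC configuration mechanism itself.

  import ssl

  # Server-side TLS context analogous to the OSA-ICC policy described above
  context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  context.minimum_version = ssl.TLSVersion.TLSv1_2   # this sketch restricts to TLS 1.2
  context.maximum_version = ssl.TLSVersion.TLSv1_2

  # Hypothetical file names; the certificate is assumed to be RSA-2048 and SHA-256 signed
  context.load_cert_chain(certfile="osc_console.pem", keyfile="osc_console.key")

  # Restrict negotiation to AES-128 based cipher suites for the session key
  context.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256:AES128-GCM-SHA256")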

As always, the single-footprint capability of an IBM Z server must be considered.  From a base architectural OSA design viewpoint, OSA supports 640 TCP/IP stacks or connections per dedicated CHPID, or 640 total stacks across multiple LPARs using a shared or spanned CHPID.  Obviously this allows the IBM Mainframe user to support more Linux images.  Of course, this is an important consideration when evaluating the latest z13 and z14 servers for Distributed Systems workload consolidation.

In conclusion, never underestimate the value of the OSA-Express adapter in your organization and its role in transitioning the IBM Mainframe from a closed proprietary environment in the early 1990's, to just another node on the IP network, from the mid-1990's to the present day.  As per any other major technology for the IBM Z server, the OSA-Express adapter has evolved to provide the requisite capacity, performance, resilience and security attributes expected for an Enterprise Class workload.  Finally, let's not lose sight of the technology commonality associated with OSA-Express and Crypto Express adapters; clearly, fundamental building blocks of Pervasive Encryption…

Optimizing Mission Critical Data Value – IBM Machine Learning for z/OS

Typically the IBM Z Mainframe is recognized as the de facto System Of Record (SOR) for storing Mission Critical data.  It therefore follows that for generic business applications, DB2, IMS (DB) and even VSAM could be considered as database servers, while CICS and IMS (DC) are transaction servers.  Extracting value from this Mission Critical data source has always been desirable, initially transferring this valuable Mainframe data source to a Distributed Platform via ETL (Extract, Transform, Load) processes.  A whole new software and hardware ecosystem was born for these processes, typically classified as data warehousing.  This process has proved valuable for the last 20 years or so, but more recently the IT industry has evolved, embracing Artificial Intelligence (AI) technologies, ultimately generating Machine Learning capabilities.

For some, it's important to differentiate between Artificial Intelligence and Machine Learning, so here goes!  Artificial Intelligence is an explicit Computer Science activity, endeavouring to build machines capable of intelligent behaviour.  Machine Learning is a process of evolving computing platforms to act from data patterns, without being explicitly programmed.  In the "which came first, the chicken or the egg" world, you need AI scientists and engineers to build the smart computing platforms, but you need data scientists or pseudo machine learning experts to make these new computing platforms intelligent.

Conceptually, Machine Learning could be classified as:

  • An automated and seamless learning ability, without being explicitly programmed
  • The ability to grow, change, evolve and adapt when encountering new data
  • An ability to deliver personalized and optimized outcomes from data analysed

When comparing this Machine Learning ability with the traditional ETL model, eliminating the need to move data from one platform to another eradicates the "point in time" data snapshot limitation of such a model, along with any associated security exposure of the data transfer process.  Therefore, returning to the IBM Z Mainframe being the de facto System Of Record (SOR) for storing Mission Critical data, it's imperative that the IBM Z Mainframe server delivers its own Machine Learning ability…

IBM Machine Learning for z/OS is an enterprise class machine learning platform solution, assisting the user to create, train and deploy machine learning models, extracting value from your mission critical data on IBM Z platforms, retaining the data in situ, within the IBM Z complex.

Machine Learning for z/OS integrates several IBM machine learning capabilities, including IBM z/OS Platform for Apache Spark.  It simplifies and automates the machine learning workflow, enabling collaboration on machine learning projects across personas and disciplines (E.g. Data Scientists, Business Analysts, Application Developers, et al).  Retaining your Mission Critical data in situ, on your IBM Z platforms, Machine Learning for z/OS significantly reduces the cost, complexity, security risk and time for Machine Learning model creation, training and deployment.

Simplistically there are two categories of Machine Learning:

  • Supervised: A model is trained from a known set of data sources, with a target output in mind. In mathematical terms, a formulaic approach.
  • Unsupervised: There is no predefined input or output structure; unsupervised machine learning is required to formulate results from evolving data patterns.

In theory, we have been executing supervised machine learning for some time, but unsupervised is the utopia.
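
A minimal illustration of the difference, using the open-source scikit-learn library with toy data (purely for clarity, not part of the Machine Learning for z/OS product), might look as follows:

  from sklearn.linear_model import LogisticRegression
  from sklearn.cluster import KMeans

  # Toy feature data: two numeric attributes per record
  X = [[0.1, 1.2], [0.3, 0.9], [5.1, 4.8], [4.9, 5.2]]
  y = [0, 0, 1, 1]  # known target labels, only used by the supervised model

  # Supervised: train against known labels, then predict a target output
  supervised = LogisticRegression().fit(X, y)
  print(supervised.predict([[0.2, 1.0]]))  # expected to predict class 0

  # Unsupervised: no labels supplied; the algorithm formulates its own groupings
  unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)
  print(unsupervised.labels_)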

Essentially Machine Learning for z/OS comprises the following functions (an illustrative Spark-based sketch of this workflow follows the list):

  • Data ingestion (From SOR data sources, DB2, IMS, VSAM)
  • Data preparation
  • Data training and validation
  • Data evaluation
  • Data analysis deployment (predict, score, act)
  • Ongoing learning (monitor, ingestion, feedback)
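
The following PySpark sketch illustrates those functions with the open-source Spark ML API, rather than the MLz-specific services; the table and column names are hypothetical and the data is assumed to have already been ingested into a Spark DataFrame (for example via the IBM z/OS Platform for Apache Spark):

  from pyspark.sql import SparkSession
  from pyspark.ml.feature import VectorAssembler
  from pyspark.ml.classification import LogisticRegression
  from pyspark.ml.evaluation import BinaryClassificationEvaluator

  spark = SparkSession.builder.appName("mlz-workflow-sketch").getOrCreate()

  # Ingestion: assume the SOR data has already been surfaced as a DataFrame
  df = spark.table("payments_history")  # hypothetical table name

  # Preparation: assemble numeric columns into a single feature vector
  assembler = VectorAssembler(inputCols=["amount", "frequency"], outputCol="features")
  prepared = assembler.transform(df)

  # Training and validation: split the data, train a model on the training subset
  train, test = prepared.randomSplit([0.8, 0.2], seed=42)
  model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)

  # Evaluation: score the held-out data and measure model quality
  predictions = model.transform(test)
  auc = BinaryClassificationEvaluator(labelCol="label").evaluate(predictions)
  print(f"Area under ROC: {auc:.3f}")

  # Deployment and ongoing learning (scoring services, monitoring, feedback)
  # are handled by the Machine Learning for z/OS runtime itself.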

For these various Machine Learning functions, several technology components are required:

  • z/OS components (MLz scoring service, various Spark ML libraries and the CADS/HPO library)
  • Linux/x86 components (Docker images for Repository, Deployment, Training, Ingestion, Authentication and Metadata services)

The Machine Learning for z/OS solution incorporates the following added features:

  • CADS: Cognitive Assistant for Data Scientist (helps select the best fit algorithm for training)
  • HPO: Hyper Parameter Optimization (provides the Data Scientist with optimal parameters)
  • Brunel Visualization Tool (assists the Data Scientist in understanding data distribution)

Machine Learning for z/OS provides a simple framework to manage the entire machine learning workflow.  Key functions are delivered through an intuitive web-based GUI, a RESTful API and other programming APIs (an illustrative scoring request sketch follows this list):

  • Ingest data from various sources including DB2, IMS, VSAM or Distributed Systems data sources.
  • Transform and cleanse data for algorithm input.
  • Train a model for the selected algorithm with the prepared data.
  • Evaluate the results of the trained model.
  • Intelligent and automated algorithm/model selection/model parameter optimization based on IBM Watson Cognitive Assistant for Data Science (CADS) and Hyper Parameter Optimization (HPO) technology.
  • Model management.
  • Optimized model development and Production deployment.
  • RESTful API provision allowing Application Development to embed the prediction using the model.
  • Model status, accuracy and resource consumption monitoring.
  • An intuitive GUI wizard allowing users to easily train, evaluate and deploy a model.
  • z Systems authorization and authentication security.
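
To illustrate the RESTful API style, the following Python sketch requests a prediction from a deployed model; the host name, endpoint path, token and field names are hypothetical placeholders rather than the documented Machine Learning for z/OS interface:

  import requests

  # Hypothetical endpoint of a deployed model's scoring service
  SCORING_URL = "https://mlz.example.com/v1/deployments/fraud-model/score"

  payload = {
      "fields": ["amount", "frequency"],   # hypothetical input schema
      "values": [[250.00, 4]],
  }

  response = requests.post(
      SCORING_URL,
      json=payload,
      headers={"Authorization": "Bearer <access-token>"},  # placeholder token
      timeout=10,
  )
  response.raise_for_status()
  print(response.json())  # e.g. predicted label and probability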

In conclusion, the Machine Learning for z/OS solution delivers the requisite framework for the emerging Data Scientists to collaborate with their Business Analysts and Application Developer colleagues for delivering new business opportunities, with smarter outcomes, while lowering risk and associated costs.

The Ever Changing IBM Z Mainframe Disaster Recovery Requirement

With a 50+ year longevity, of course the IBM Z Mainframe Disaster Recovery (DR) requirement and associated processes have changed and evolved accordingly.  Initially, the primary focus would have been HDA (Head Disk Assembly) related, recovering data due to hardware (E.g. 23nn, 33nn DASD) failures.  It seems incredible in the 21st Century to consider the downtime and data loss associated with such an event, but these failures were commonplace into the early 1980's.  Disk drive (DASD) reliability increased with the 3380 device in the 1980's and the introduction of the 3990-03 Dual Copy capability in the late 1980's eradicated the potential consequences of a physical HDA failure.

The significant cost of storage and CPU resources dictated that many organizations had to rely upon 3rd party service providers for DR resource provision.  Often this dictated a classification of business applications, differentiating between Mission Critical or not, where DR backup and recovery processes would be application based.  Even the largest of organizations that could afford to duplicate CPU resource would have to rely upon the Ford Transit Access Method (FTAM), shipping physical tape from one location to another and performing proactive or more likely reactive data restore activities.  A modicum of database log-shipping over SNA networks automated this process for Mission Critical data, but successful DR provision was still a major consideration.

Even with the Dual Copy function, this meant DASD storage resources had to be doubled for contingency purposes.  Therefore this dictated only the upper echelons of the business world (I.E. Financial Organizations, Telecommunications Suppliers, Airlines, Etc.) could afford the duplication of investment required for self-sufficient DR capability.  Put simply, a duplication of IBM Mainframe CPU, Network and Storage resources was required…

The 1990’s heralded a significant evolution in generic IT technology, including IBM Mainframe.  The adoption of RAID technology for IBM Mainframe Count Key Data (CKD) provided an affordable solution for all IBM Mainframe users, where RAID-5(+) implementations became commonplace.  The emergence of ESCON/FICON channel connectivity provided the extended distance requirement to complement the emerging Parallel SYSPLEX technology, allowing IBM Mainframe servers and related storage to be geographically dispersed.  This allowed a greater number of IBM Mainframe customers to provision their own in-house DR capability, but many still relied upon physical tape shipment to a 3rd party DR services provider.

The final significant storage technology evolution was the Virtual Tape Library (VTL) structure, introduced in the mid-1990's.  This technology simplified capacity optimization for physical tape media, while reducing the number of physical drives required to satisfy the tape workload.  These VTL structures would also benefit from SYSPLEX implementations, but for many IBM Mainframe users, physical tape shipment might still be required.  Even though the IBM Mainframe had supported IP connectivity since the early 1990's, using this network capability to ship significant amounts of data was dependent upon public network infrastructures becoming faster and more affordable.  In the mid-2000's, transporting IBM Mainframe backup data via extended network carriers, beyond the limit of FICON technologies, became more commonplace, once again changing the face of DR approaches.

More recently, the need for Grid configurations of 2, 3 or more locations has become the utopia for the Global 1000 type business organization.  Numerous synchronized copies of Mission Critical, if not all, IBM Z Mainframe data are now maintained, reducing the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) DR criteria to several Minutes or less.

As with anything in life, learning from the lessons of history is always a good thing and for each and every high profile IBM Z Mainframe user (E.g. 5000+ MSU), there are many more smaller users, who face the same DR challenges.  Just as various technology races (E.g. Space, Motor Sport, Energy, et al) eventually deliver affordable benefit to a wider population, the same applies for the IBM Z Mainframe community.  The commonality is the challenges faced, where over the years, DR focus has either been application or entire business based, influenced by the technologies available to the IBM Mainframe user, typically dictated by cost.  However, the recent digital data explosion generates a common challenge for all IT users alike, whether large or small.  Quite simply, to remain competitive and generate new business opportunities from that priceless and unique resource, namely business data, organizations must embrace the DevOps philosophy.

Let's consider the frequency of performing DR tests.  If you're a smaller IBM Z Mainframe user, relying upon a 3rd party DR service provider, your DR test frequency might be 1-2 tests per year.  Conversely if you're a large IBM Z Mainframe user, deploying a Grid configuration, you might consider that your business no longer has the requirement for periodic DR tests?  This would be a dangerous thought pattern, because it was forever thus: SYSPLEX and Grid configurations only safeguard against physical hardware failure scenarios, whereas a logical error will proliferate throughout all data copies, whether 2, 3 or more…

Similarly, when considering the frequency of Business Application changes, for the archetypal IBM Z Mainframe user, this might have been Monthly or Quarterly, perhaps with imposed change freezes due to significant seasonal or business peaks.  However, in an IT ecosystem where the IBM Z Mainframe is just another interconnected node on the network, the requirement for a significantly increased frequency of Business Application changes arguably becomes mandatory.  Therefore, once again, if we consider our frequency of DR tests, how many per year do we perform?  In all likelihood, this becomes the wrong question!  A better statement might be, "we perform an automated DR test as part of our Business Application changes".  In theory, the adoption of DevOps either increases the frequency of scheduled Business Application changes, or the organization embraces an "on demand" type approach…

We must then consider which IT Group performs the DR test?  In theory, it's many groups, dictated by their technical expertise, whether Server, Storage, Network, Database, Transaction or Operations based.  Once again, if embracing DevOps, the Application Development teams need to be able to write and test code, while the Operations teams need to implement and manage the associated business services.  In such a model, there has to be a fundamental mind change, where technical Subject Matter Experts (SME) design and implement technical processes, which simplify the activities associated with DevOps.  From a DR viewpoint, this dictates that the DevOps process should facilitate a robust DR test, for each and every Business Application change.  Whether an organization is the largest or smallest of IBM Z Mainframe users is somewhat arbitrary; performing an entire system-wide DR test for an isolated Business Application change is not required.  Conversely, performing a meaningful Business Application test during the DevOps code test and acceptance process makes perfect sense.

Performing a meaningful Business Application DR test as part of the DevOps process is a consistent requirement, whether an organization is the largest or smallest IBM Z Mainframe user.  Although their hardware resource might differ significantly, where the largest IBM Z Mainframe user would typically deploy a high-end VTL (I.E. IBM TS77n0, EMC DLm 8n00, Oracle VSM, et al), the requirement to perform a seamless, agile and timely Business Application DR test remains the same.

While we recognize that the IBM Z Mainframe is typically deployed as the System Of Record (SOR) data server, today's 21st century Business Application incorporates interoperability with Distributed Systems (E.g. Wintel, UNIX, Linux, et al) platforms.  This introduces a DR consideration, as mostly, IBM Z Mainframe data resides in proprietary 3390 DASD subsystems, while Distributed Systems data typically resides in IP (NFS, NAS) and/or FC (SAN) filesystems.  However, the IBM Z Mainframe has leveraged from Distributed Systems technology advancements, where typical VTL Grid configurations utilize proprietary IP connected disk arrays for VTL data.  Ultimately a VTL structure will contain the "just in case" copy of Business Application backup data, the very data copy required for a meaningful DR test.  Wouldn't it be advantageous if the IBM Z Mainframe backup resided on the same IP or FC Disk Array as Distributed Systems backups?

Ultimately the high-end VTL (I.E. IBM TS77n0, EMC DLm 8n00, Oracle VSM, et al) solutions are designed for the upper echelons of the business and IBM Z Mainframe world.  Their capacity, performance and resilience capability is significant, and by definition, so is the associated cost.  How easy or difficult might it be to perform a seamless, agile and timely Business Application DR test via such a high-end VTL?  Are there alternative options that any IBM Z Mainframe user can consider, regardless of their size, whether large or small?

The advances in FICON connectivity, x86/POWER servers and Distributed Systems disk arrays have allowed such technologies to be packaged in a cost efficient and small footprint IBM Z VTL appliance.  Their ability to connect to the IBM Z server via FICON connectivity, provide full IBM Z tape emulation and connect to ubiquitous IP and FC Distributed Systems disk arrays, positions them for strategic use by any IBM Z Mainframe user for DevOps DR testing.  Primarily one consistent copy of enterprise wide Business Application data would reside on the same disk array, simplifying the process of recovering Point-In-Time backup data for DR testing.

On the one hand, for the smaller IBM Z user, such an IBM Z VTL appliance (E.g. Optica zVT) could, for the first time, allow them to simplify their DR processes with a 3rd party DR supplier.  They could electronically vault their IBM Z Mainframe backup data to their 3rd party DR supplier and activate a totally automated DR invocation, as and when required.  On the other hand, and moreover for DevOps processes, the provision of an isolated LPAR would allow the smaller IBM Z Mainframe user to perform a meaningful Business Application DR test, in-house, without impacting Production services.  Once again, simplifying the Business Application DR test process applies to the largest of IBM Z Mainframe users, and leveraging from such an IBM Z VTL appliance would simplify things, without impacting their Grid configuration supporting their Mission Critical workloads.

In conclusion, there has always been commonality in DR processes for the smallest and largest of IBM Z Mainframe users, where the only tangible difference would have been budget related, where the largest IBM Z Mainframe user could and in fact needed to invest in the latest and greatest.  As always, sometimes there are requirements that apply to all, regardless of size and budget.  Seemingly DevOps is such a requirement, and the need to perform on-demand seamless, agile and timely Business Application DR tests is mandatory for all.  From an enterprise wide viewpoint, perhaps a modicum of investment in an affordable IBM Z VTL appliance might be the last time an IBM Z Mainframe user needs to revisit their DR testing processes!

IBM Z Server: Best In Class For Availability – Does Form Factor Matter?

A recent ITIC 2017 Global Server Hardware and Server OS Reliability Survey classified the IBM Z server as delivering the highest levels of reliability/uptime, with ~8 Seconds or less of unplanned downtime per month.  This was the 9th consecutive year that such a statistic had been recorded for the IBM Z Mainframe platform.  This compares to ~3 Minutes of unplanned downtime per month for several other specialized server technologies, including IBM POWER, Cisco UCS and HP Integrity Superdome via the Linux Operating System.  Clearly, unplanned server downtime is undesirable and costly, impacting the bottom line of the business.  Industry Analysts state that ~80% of global businesses require 99.99% uptime, equating to ~52.5 Minutes downtime per year or ~8.66 Seconds per day.  In theory, only the IBM Z Mainframe platform exceeds this availability requirement, while IBM POWER, Cisco UCS and HP Integrity Superdome deliver borderline 99.99% availability capability.  The IBM Mainframe is classified as a mission-critical resource in 92 of the top 100 global banks, 23 of the top 25 USA based retailers, all 10 of the top 10 global insurance companies and 23 of the top 25 largest airlines globally…
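
For reference, the downtime arithmetic behind these figures is easily reproduced; the short Python illustration below simply applies the 99.99% availability budget and is not the ITIC survey methodology:

  MINUTES_PER_YEAR = 365.25 * 24 * 60

  def downtime_budget(availability: float) -> tuple:
      """Return the (minutes per year, seconds per day) unplanned downtime budget."""
      minutes_per_year = MINUTES_PER_YEAR * (1 - availability)
      seconds_per_day = minutes_per_year * 60 / 365.25
      return minutes_per_year, seconds_per_day

  per_year, per_day = downtime_budget(0.9999)
  print(f"99.99% uptime: ~{per_year:.1f} minutes/year, ~{per_day:.2f} seconds/day")
  # Prints ~52.6 minutes/year and ~8.64 seconds/day, in line with the figures quoted above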

The requirement for ever increasing amounts of corporate compute power is, without doubt, driven by the processing of ever increasing amounts of data, created from digital sources including Cloud, Mobile and Social, requiring near real-time analytics to deliver meaningful information from these oceans of data.  Some organizations select x86 server technology to deliver this computing power requirement, either in their own Data Centre or via a 3rd party Cloud Provider.  However, with unplanned downtime characteristics that don't meet the seeming de facto 99.99% uptime availability metric, can the growth in x86 server technology continue?  From many perspectives, Reliability, Availability & Serviceability (RAS), Data Security via Pervasive Encryption and best-in-class Performance and Scalability, you might think that the IBM Z Mainframe would be the platform of choice?  For whatever reason, this is not always the case!  Maybe we need to look at recent developments and trends in the compute power delivery market and second guess what might happen in the future…

Significant Cloud providers deliver vast amounts of computing power and associated resources, evolving their business models accordingly.  Such business models have many challenges, primarily uptime and data security related, convincing their prospective customers to migrate their workloads from traditional internal Data Centres, into these massive rack provisioned infrastructures.  Recently Google has evolved from using Intel as its primary supplier of Data Centre CPU chips, now also including CPU chips from IBM and other semiconductor rivals.

In April 2016, Google declared it had ported its online services to the IBM POWER CPU chip and that its toolchain could output code for Intel x86, IBM POWER and 64-bit ARM cores at the flip of a command-line switch.  As part of the OpenPOWER and Open Compute Project (OCP) initiatives, Google, IBM and Rackspace are collaborating to develop an open server specification based on the IBM POWER9 architecture.  The OCP Rack & Power Project will dictate the size and shape or form factor for housing these industry standard rack infrastructures.  What does this mean for the IBM Z server form factor?

Traditionally and over the last decade or more, IBM has utilized the 24 Inch rack form factor for the IBM Z Mainframe and Enterprise Class POWER Systems.  Of course, this is a different form factor to the industry standard 19 Inch rack, which finally became the de facto standard for the ubiquitous blade server.  Unfortunately there was no tangible standard for a 19 Inch rack, generating power, cooling and other issues.  Hence the evolution of the OCP Rack & Power Standard, codenamed Open Rack.  Google and Facebook have recently collaborated to evolve the Open Rack Standard V2.0, based upon an external 21 Inch rack form factor, accommodating the de facto 19 Inch rack mounted equipment.

How do these recent developments influence the IBM Z platform?  If you're the ubiquitous global CIO, knowing your organization requires 99.99%+ uptime, delivering continuous business application change via DevOps, safeguarding corporate data with intelligent and system wide encryption, perhaps you still view the IBM Z Mainframe as a proprietary server with its own form factor?

As IBM have already demonstrated with their OpenPOWER offering, collaborating with Google and Rackspace, their 24 Inch rack approach can be evolved, becoming just another CPU chip in a Cloud (E.g. IaaS, PaaS) service provider environment.  Maybe the final evolution step for the IBM Z Mainframe is evolving its form factor to a ubiquitous 19 Inch rack format?  The intelligent and clearly defined approach of the Open Rack Standard makes sense and if IBM could deliver an IBM Z Server in such a format, it just becomes another CPU chip in the ubiquitous Cloud (E.g. IaaS, PaaS) service provider environment.  This might be the final piece of the jigsaw for today's CIO, as their approach to procuring compute power might be based solely upon the uptime and data security metrics.  For those organizations requiring in excess of 99.99% uptime and fully compliant security, there only seems to be one choice, the IBM Z Mainframe CPU chip technology, which has been running Linux workloads since 2000!

IBM z14: Pervasive Encryption & Container Pricing

On 17 July 2017 IBM announced the z14 server as “the next generation of the world’s most powerful transaction system, capable of running more than 12 billion encrypted transactions per day.  The new system also introduces a breakthrough encryption engine that, for the first time, makes it possible to pervasively encrypt data associated with any application, cloud service or database all the time”.

At first glance, a cursory review of the z14 announcement might just appear as another server upgrade release, but that could be a costly mistake by the reader.  There are always subtle nuances in any technology announcement, while finding them and applying them to your own business can sometimes be a challenge.  In this particular instance, perhaps one might consider "Persuasive Encryption & Contained Pricing"…

When IBM releases a new generation of z Systems server, many of us look to the "feeds and speeds" data and ponder how that might influence our performance and capacity profiles.  IBM state that average z14 speed, when compared with the z13, increases by ~10% for 6-way servers and larger.  As per usual, there are software Technology Transition Offering (TTO) discounts ranging from 6% to 21% for z14 only sites.  However, in these times where workload profiles are rapidly changing and evolving, it's sometimes easy to overlook that IBM have to consider the holistic position of the IBM Z world.  Quite simply, IBM has many divisions, Hardware, Software, Services, et al.  Therefore there has to be interaction between the hardware and software divisions and in this instance, IBM have delivered a z14 server that is security focussed, with their Pervasive Encryption functionality.

Pervasive Encryption provides a simple and transparent approach for z Systems security, enabling the highest levels of data encryption for all data usage scenarios, for example:

  • Processing: When retrieved from files and processed by applications
  • In Flight: When being transmitted over internal and external networks
  • At Rest: When stored in database structures or files
  • In Store: When stored in magnetic storage media

Pervasive Encryption simplifies and reduces the costs associated with protecting data by policy (I.E. Subset) or En Masse (I.E. All Of The Data, All Of the Time), achieving compliance mandates.  When considering the EU GDPR (European Union General Data Protection Regulation) compliance mandate, companies must notify relevant parties within 72 hours of first having become aware of a personal data breach.  Additionally organizations can be fined up to 4% of annual global turnover or €20 Million (whichever is greater) for any GDPR breach, unless they can demonstrate that data was encrypted and keys were protected.
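
As a simple worked example of that penalty formula (the turnover figures below are illustrative only):

  def maximum_gdpr_fine(annual_global_turnover_eur: float) -> float:
      """Greater of 4% of annual global turnover or a flat EUR 20 million."""
      return max(0.04 * annual_global_turnover_eur, 20_000_000)

  for turnover in (100_000_000, 2_000_000_000):  # illustrative turnover figures
      print(f"Turnover EUR {turnover:,}: maximum fine EUR {maximum_gdpr_fine(turnover):,.0f}")
  # EUR 100M turnover -> EUR 20,000,000 (the flat amount applies)
  # EUR 2B turnover   -> EUR 80,000,000 (the 4% calculation applies)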

To facilitate this new approach for encryption, the IBM z14 infrastructure incorporates several new capabilities integrated throughout the technology stack, including Hardware, Operating System and Middleware.  Integrated CPU chip cryptographic acceleration is enhanced, delivering ~600% increased performance when compared with its z13 predecessor and ~20 times faster than competitive server platforms.  File and data set encryption is optimized within the Operating Systems (I.E. z/OS), safeguarding transparent and optimized encryption, not impacting application functionality or performance.  Middleware software subsystems including DB2 and IMS leverage from these Pervasive Encryption techniques, safeguarding that High Availability databases can be transitioned to full encryption without stopping the database, application or subsystem.

Arguably IBM had to deliver this type of security functionality for its top tier z Systems customers, as inevitably they would be impacted by compliance mandates such as GDPR.  Conversely, the opportunity to address the majority of external hacking scenarios with one common approach is an attractive proposition.  However, as always, the devil is in the detail, and given an impending deadline date of May 2018 for GDPR compliance, I wonder how many z Systems customers could implement the requisite z14 hardware and related Operating System (I.E. z/OS) and Subsystem (I.E. CICS, DB2, IMS, MQ, et al) upgrades before this date?  From a bigger picture viewpoint, Pervasive Encryption does offer the requisite functionality to apply a generic end-to-end process for securing all data, especially Mission Critical data…

Previously we have considered the complexity of IBM z Systems pricing mechanisms and in theory, the z14 announcement tried to simplify some of these challenges by building upon and formalizing Container Pricing.  Container Pricing is intended to greatly simplify software pricing for qualified collocated workloads, whether collocated with other existing workloads on the same LPAR, deployed in a separate LPAR or across multiple LPARs.  Container pricing allows the specified workload to be separately priced based on a variety of metrics.  New approved z/OS workloads can be deployed collocated with other sub-capacity products (I.E. CICS, DB2, IMS, MQ, z/OS) without impacting cost profiles of existing workloads.

As per most new IBM z Systems pricing mechanisms of late, there is a commercial collaboration and exchange required between IBM and their customer.  Once a Container Pricing solution is agreed between IBM and their customer, for an agreed price, an IBM Sales order is initiated, triggering the creation of an Approved Solution ID.  The IBM provided solution ID is a 64-character string representing an approved workload with an entitled MSU capacity, representing a Full Capacity Pricing Container used for billing purposes.

Previously we considered the importance of WLM for managing z/OS workloads and its interaction with soft-capping, and this is reinforced with this latest IBM Container Pricing mechanism.  The z/OS Workload Manager (WLM) enables Container Pricing using a resource classified as the Tenant Resource Group (TRG), defining the workload in terms of address spaces and independent enclaves.  The TRG, combined with a unique Approved Solution ID, represents the IBM approved solution.  As per standard SCRT processing, workload instrumentation data is collected, safeguarding that this workload profile does not directly impact the traditional peak LPAR Rolling Four-Hour Average (R4HA).  The TRG also allows the workload to be metered and optionally capped, independent of other workloads that are running collocated in the LPAR.

MSU utilization of the defined workload is recorded by WLM and RMF, subsequently processed by SCRT to subtract the solution MSU capacity from the LPAR R4HA.  The solution can then be priced independently, based on the MSU resource consumed by the workload, or based upon other non-MSU values, specifically a Business Value Metric (E.g. Number of Payments).  Therefore Container Pricing is much simpler and much more flexible than previous IBM collocated workload mechanisms, namely IWP and zCAP.
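
Conceptually, the billing principle can be pictured as follows; this is a deliberately simplified Python illustration of subtracting the container workload from the LPAR peak, not the actual SCRT algorithm, and the MSU figures are invented:

  def traditional_billable_msu(lpar_r4ha_peak: float, container_msu_at_peak: float) -> float:
      """Sub-capacity MSUs remaining once the approved container workload is removed.

      Simplified illustration only: SCRT works from hour-by-hour WLM/RMF
      instrumentation data, not a single pair of numbers.
      """
      return max(lpar_r4ha_peak - container_msu_at_peak, 0.0)

  # Invented figures: a 1,200 MSU LPAR peak, of which 200 MSU belongs to the
  # Tenant Resource Group (TRG) of the approved Container Pricing solution
  print(traditional_billable_msu(1200.0, 200.0))  # 1000.0 MSU for traditional MLC billing
  # The container workload is then priced separately, per MSU or per business metric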

Container Pricing eliminates the requirement to commission specific new environments to optimize MLC pricing.  By deploying a standard IBM process framework, new workloads can be commissioned without impacting the R4HA of collocated workloads, being deployed as per business requirements, whether on the same LPAR, a separate LPAR, or dispersed across multiple LPAR structures.  Quite simply, the standard IBM process framework is the Approved Solution ID, associating the client based z/OS system environment to the associated IBM sales contract.

In this first iteration release associated with the z14 announcement, Container Pricing can be deployed in the following three solution based scenarios:

  • Application Development and Test Solution: Add up to 3 times more capacity to existing Development and Test environments without any additional monthly licensing costs; or create new LPAR environments with competitive pricing.
  • New Application Solution: Add new z/OS microservices or applications, priced individually without impacting the cost of other workloads on the same system.
  • Payments Pricing Solution: A single agreed value-based price for software plus hardware, or software only, via a number-of-payments-processed metric, based on IBM Financial Transaction Manager (FTM) software.

IBM state z14 support for a maximum 2 million Docker containers in an associated maximum 32 TB memory configuration.  In conjunction with other I/O enhancements, IBM state a z14 performance increase of ~300%, when compared with its z13 predecessor.  Historically the IBM Z platform was never envisaged as being the ideal container platform.  However, its ability to seamlessly support z/OS and Linux, while the majority of mission critical Systems Of Record (SOR) data resides on IBM Z platforms, might just be a compelling case for microservices to be processed on the IBM Z platform, minimizing any data latency transfer.

Container Pricing for z/OS is somewhat analogous to the IBM Cloud Managed Services on z Systems pricing model (I.E. CPU consumption based).  Therefore, if monthly R4HA peak processing is driven by an OLTP application, or any other workload for that matter, any additional unused capacity in that specific SCRT reporting month can be allocated for no cost to other workloads.  Therefore z/OS customers will be able to take advantage of this approach, processing collocated microservices or applications for a zero or nominal cost.

Country Multiplex Pricing (CMP) Observation: The z14 is the first new generation of IBM Z hardware since the introduction of the CMP pricing mechanism.  When a client first implements a Multiplex, IBM Z server eligibility cannot be older than two generations (I.E. N-2) prior to the most recently available server (I.E. N).  Therefore the General Availability (GA) of the z14 classifies the previously eligible z114 and z196 servers as N-3 machines.  IBM will provide a 3 Month grace period for CMP transition activities for these N-3 servers, namely z114 and z196.  Quite simply, the first client CMP invoice must be submitted within 90 days of the z14 GA date, namely 13 September 2017, and no later than 1 January 2018.

In conclusion, Pervasive Encryption is an omnipresent z14 function integrated into every data lifecycle stage, which could easily be classified as Persuasive Encryption, simplifying the sometimes arduous process of classifying and managing mission-critical data.  As cybersecurity becomes an omnipresent clear and present danger, associated with impending and increasingly punitive compliance mandates such as GDPR, the realm of possibility exists to resolve this high profile corporate challenge once and for all.

Likewise, Container Pricing provides a much needed simple-to-use framework to drive MSU cost optimization for new workloads and could easily be classified as Contained Pricing.  The committed IBM Mainframe customer will upgrade their z13 server environment to z14, as part of their periodic technology refresh approach.  Arguably, those Mainframe customers who have been somewhat hesitant in upgrading from older technology Mainframe servers, might just have a compelling reason to upgrade their environments to z14, safeguarding cybersecurity challenges and evolving processes to contain z/OS MLC costs.

z Systems Software & Applications Currency: MQ Continuous & as a Service Delivery Models

In a rapidly-moving technology environment where DevOps is driving innovation for the rapid delivery of applications, are there any innovations for the related z Systems infrastructure (E.g. z/OS, CICS, DB2, IMS, MQ, WebSphere AS) that can deliver faster software and indeed firmware updates?

In April 2016, IBM announced MQ V9.0, delivering new and enhanced capabilities facilitating a Continuous Delivery and support model.  The traditional Long Term Support release offers the ubiquitous collection of aggregated fix-packs, applied to the delivered MQ V9.0 function.  The new Continuous Delivery release delivers both fixes and new functional enhancements as a set of modification-level updates, facilitating more rapid access to functional enhancements.

From a terminology viewpoint, the Continuous Delivery (CD) support model introduces new function and enhancements, made available by incremental updates within the same version and release.  Additionally, there will be a Long Term Support (LTS) release available for deployments that require traditional medium-long term security and defect fixes only.  Some might classify such LTS fixes as Service Pack or Level Set patching.  The Continuous Delivery (CD) support approach delivers regular updates with a short-term periodic frequency for customers wanting to exploit the latest features and capabilities of MQ, without waiting for the next long term support release cycle.  In terms of timeframe, although there is no fixed time period associated with a CD or LTS release, typically CD is every few months, while LTS releases are every two years or so.  In actual IBM announcement terms, the latest MQ release was V9.0.3 in May 2017, meaning four MQ V9.0.n release activities in a ~13 Month period, approximately quarterly…

The benefits of this CD support model are obvious; for those organizations who consider themselves to be leading-edge or "amongst the first", they can leverage from new function ASAP, with a modicum of confidence that the code has a good level of stability.  Those customers with a more cautious approach can continue their ~2 year software upgrade cycle, applying the LTS release.  As always with software maintenance, there has never been a perfect approach.  Inevitably there will be High Impact or PERvasive (HIPER) and PTF-in-Error (PE) PTF requirements, as software function stability is forever evolving.  Therefore, arguably those sites leveraging from the latest function have always been running a Continuous Delivery software maintenance model; they just didn't know when and how often!

Of all the major IBM z Systems subsystems to introduce this new software support model first, clearly the role of MQ dictates that for many reasons, primarily middleware and interoperability based, MQ needs a Continuous Delivery (CD) model.

At this stage, let’s remind ourselves of the important role that MQ plays in our IT infrastructures.  IBM MQ is a robust messaging middleware solution, simplifying and accelerating the integration of diverse applications and business data, typically distributed over multiple platforms and geographies.  MQ facilitates the assured, secure and reliable exchange of information between applications, systems, services, and files.  This exchange of information is achieved through the sending and receiving of message data through queues and topics, simplifying the creation and maintenance of business applications.  MQ delivers a universal messaging solution incorporating a broad set of offerings, satisfying enterprise requirements, in addition to providing 21st century connectivity for Mobile and the Internet of Things (IoT) devices.

Because of the centralized role that MQ plays, its pivotal role of interconnectivity might be hampered by the DevOps requirement of rapid application delivery, for both planned and unplanned business requirements.  Therefore even before the concept of MQ Continuous Delivery (CD) was announced in April 2016, there was already talk of MQ as a Service (MQaaS).

As per any major z Systems subsystem, traditionally IBM MQ was managed by a centralized messaging middleware team, collaborating with their Application, Database and Systems Management colleagues.  As per the DevOps methodology, this predictable and centralized approach does not lend itself to rapid and agile Application Development.  Quite simply, an environment management decentralization process is required, to satisfy the ever-increasing speed and diversity of application design and delivery requests.  By definition, MQ seamlessly interfaces with so many technologies, including but not limited to, Amazon Web Services, Docker, Google Cloud Platform, IBM Bluemix, JBoss, JRE, Microsoft Azure, Oracle Fusion Middleware, OpenStack, Salesforce, Spark, Ubuntu, et al.

The notional concept of MQ as a Service (MQaaS), delivers a capability to implement self-service portals, allowing Application Developers and their interconnected Line of Business (LOB) personnel to drive changes to the messaging ecosystem.  These changes might range from the creation or deletion of a messaging queue to the provision of a highly available and scalable topology for a new business application.  The DevOps and Application Lifecycle Management (ALM) philosophy dictates that the traditional centralized messaging middleware team must evolve, reducing human activity, by automating their best practices.  Therefore MQaaS can increase the speed at which the infrastructure team can deliver new MQ infrastructure to their Application Development community, while safeguarding the associated business requirements.

MQ provides a range of control commands and MQ Script Commands (MQSC) to support the creation and management of MQ resources using scripts.  Programmatic resource access is achievable via MQ Programmable Command Format (PCF) messages, once access to a queue manager has been established.  Therefore MQ administrators can create workflows that drive these processes, delivering a self-service interface.  Automation frameworks, such as UrbanCode Deploy (UCD), Chef and Puppet, can be used to orchestrate administrative operations for MQ, creating and managing entire application or server environments.  Virtual machines, Docker containers, PureApplication System and the MQ Appliance itself can be used alongside automation frameworks to create a flexible and scalable ecosystem for delivering the MQaaS infrastructure.
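
As a minimal sketch of such a self-service workflow, the Python fragment below pipes an MQSC command into the standard runmqsc control command; the queue manager and queue names are hypothetical and a production MQaaS portal would wrap this core with authorization, auditing and error handling:

  import subprocess

  def create_local_queue(queue_manager: str, queue_name: str) -> None:
      """Define a local queue by piping an MQSC command into runmqsc."""
      mqsc = f"DEFINE QLOCAL('{queue_name}') REPLACE\n"
      result = subprocess.run(
          ["runmqsc", queue_manager],  # standard MQ control command
          input=mqsc,
          capture_output=True,
          text=True,
          check=True,
      )
      print(result.stdout)

  # Hypothetical names a self-service portal might submit on a developer's behalf
  create_local_queue("QM1", "DEV.ORDERS.REQUEST")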

Integrating the MQ as a Service concept within your DevOps and Application Lifecycle Management process delivers the following benefits:

  • Development Agility: Devolving traditional MQ administration activities to Application and Line of Business personnel allows them to directly provision or update the associated messaging resources. This optimizes the overall process, while DevOps processes facilitate the requisite IT organization communication.
  • Process Standardization: Enabling a self-service interface to Application and Line of Business personnel delivers a single entry point for messaging configuration changes. This common interface will leverage from consistent routines and workflows to deploy the necessary changes, enforcing standards and consistency.
  • Personnel Optimization: Self-service interfaces used by Application and Line of Business personnel allow them to focus on core application requirements, primarily messaging and Quality of Service (QoS) related. In such an environment, the background process of performing the change is arbitrary; timely change implementation is the most important factor.
  • Environment Interoperability: An intelligent and automated self-service interface allows for dynamic provisioning of systems and messaging resources for development and testing purposes. This automation can simplify the promotion of changes throughout the testing lifecycle (E.g. Development, Test, Quality Assurance, Production, et al).  As and when required, such automation can provide capacity-on-demand type changes, dynamically scaling an application, as and when required, to satisfy ever-changing and unpredictable business requirements.

In conclusion, DevOps is an all-encompassing framework and one must draw one’s own conclusions as to whether software update frequency timescales will reduce for major subsystems such as CICS, DB2, IMS and even the underlying Operating System itself, namely z/OS.  Conversely, the one major z Systems subsystem with so many interoperability touch points, namely MQ, is the obvious choice for applying DevOps techniques to underlying subsystem software.  For MQ, the use of a Continuous Delivery (CD) software support model safeguards that the latest new function and bug fix capability is delivered in a timely manner for those organizations striving for an agile environment.  Similarly, the consideration of devolving traditional MQ systems administration activities, via intelligent, automated and self-service processes to key Application and Line of Business personnel makes sense, evolving a pseudo MQaaS capability.

System z DevOps & Application Lifecycle Management (ALM) Integration: Evolution or Revolution?

From an IT viewpoint, seemingly the 2010's decade will be dominated by the digital data explosion, primarily fuelled by Cloud, Mobile and Social Media data sources, while intelligent and timely if not real-time Analytics are required to process this vast and ever-growing data source.  Who could have imagined just a decade ago that the Mobile Phone, specifically the Smartphone, would be the de facto computing device, although some might say, only for a certain age demographic?  I'm not so sure; I encounter real-life and day-to-day evidence that a Smartphone or tablet can also empower the older generation to simplify their computer usage and access.  From a business perspective, Smartphones have allowed geographically dispersed citizens to gain access to Banking facilities for the first time; Cloud allows countless opportunities for data sharing and number crunching for collaborative scientific, health, education and any other activities a human being might conceive.  The realm of opportunity exists…

When thinking of the bigger picture, we somehow have to find a workable and seamless balance that will integrate the dawn of business computing from the 1960's to these rapidly moving 21st Century requirements.  When considering which came first, the data or the application, I always think the answer is really simple; the data came first, but I have been wrong before!  What is without doubt is that the initial requirement for a business application was to automate data processing, and the associated medium-term waterfall (E.g. n-nn Months) application development process is now outdated.  As of 2017, today's application needs to leverage from this vast and rich digital data source, to identify and leverage new business opportunities, where increasingly unplanned and therefore rapid application delivery is required.  For example, previously I wrote about this subject matter in the zAPI: System z Deployment Into The API Economy blog entry.

From an IT perspective, one of the greatest achievements in the 21st Century is collaboration, whether technology based, leveraging from a truly interconnected (E.g. Internet Protocol/IP) heterogeneous computing environment, or personnel based, with IT teams collaborating in a more open and timely manner, primarily via DevOps.  This might be a better chicken and egg analogy; which came first, the data explosion or an IT ecosystem that allowed such a digital data explosion?

There are a plethora of modern-day application development tools that separate the underlying target deployment server from the actual application developer.  Put another way, today’s application developer ideally works from a GUI display via an Eclipse-based Integrated Development Environment (IDE) interface, creating code using rapid and agile development techniques.  From an IBM System z perspective, these platforms include Compuware Topaz Workbench, IBM Developer for z Systems (IDz AKA RDz) and Micro Focus Enterprise Developer, naming but a few.  Therefore when considering the DevOps framework, these excellent Eclipse-based IDE products provide solutions for the Dev part of the equation; but what about the Ops part?

In a collaborative world, where we all work together, from an Application Lifecycle Management (ALM) perspective, IT Operations are a key part of application delivery and management.  Put simply, once code has been created, it needs to be packaged (E.g. Compile, Link-Edit, et al), tested (E.g. Unit, Integration, System, Acceptance, Regression, et al) and implemented in a Production environment.  We now must consider the very important discipline of Source Code Management (SCM), where from a System z Mainframe perspective, common solutions are CA Endevor SCM, Compuware ISPW, IBM SCLM, Micro Focus ChangeMan ZMF, et al.  Once again, from a DevOps perspective, we somehow have to find a workable and seamless balance that will integrate the dawn of business computing from the 1960’s to these rapidly moving 21st Century requirements.  As previously discussed the Dev part of the DevOps framework is well-covered and straightforward, but perhaps the Ops part requires some more considered thought…

Recently Compuware have acquired ISPW (January 2016) to supplement their Topaz Workbench and Micro Focus acquired ChangeMan ZMF (May 2016) to complement their Micro Focus Enterprise Developer solution.  IBM IDz offers out-of-the-box integration for the IBM Rational Team Concert, CA Endevor SCM and IBM SCLM tools.  Clearly there is a significant difference between Source Code Management (SCM) for Distributed Systems when compared with the System z Mainframe, but today’s 21st century business application will inevitably involve interconnected platforms and so a consistent and seamless SCM process is required for accurate and timely application delivery.  In all likelihood a System z Mainframe user has been using their SCM solution for several decades, evolving processes around this solution, which was never designed for Distributed Systems SCM.  Hence the major System z Application Development ISV’s have acquired SCM products to supplement their core capability, but is it really that simple?  The simple answer is no!

Traditionally, for Application Development activities we deployed the Software Development Life Cycle (SDLC), limited to software development phases, including requirements, design, coding, testing, configuration, project management and change management.  Modern software development processes require real-time collaboration, access to a centralized data repository, cross-tool and cross-project visibility, proactive project monitoring and reporting, to rapidly develop and deliver quality software.  This requirement is typically classified as Application Lifecycle Management (ALM).

The first iteration of ALM, namely ALM 1.0, was wholly unsuccessful.  Application Development teams were encouraged to consider the value of point solutions for task management, planning, testing, requirements, release management and other functions.  Therefore ALM 1.0 became just a set of tools, where the all too common question for the Application Development team was "what other tool can we use"!

ALM 2.0 or ALM 2.0+ can be considered as Integrated Application Lifecycle Management or Integrated ALM, where all the tools and their users are synchronized with each other throughout the application development stages.  This integration ensures that every team member knows the Who, What, When and Why of any changes made during the development process, eradicating arduous, repetitive, manual and error prone activities.  The most important lesson for the DevOps team in a customer environment is to concentrate on the human perspective.  They should ask "how do we want our teams to work together and collaborate", as opposed to asking an Application Development ISV team "what ALM tools do you have".  Inevitably the latter focus will be ISV based, as opposed to customer based.  As the recent Compuware and Micro Focus SCM acquisition history demonstrates, these tools, by definition, were never fully integrated from their original inception…

If the customer DevOps teams collaborate and formulate how they want to work together, an ALM evolution can take place in a timely manner, maintaining investment in previous technologies, as and if required.  Conversely, a revolutionary approach is the most likely outcome for the System z Mainframe user looking to an ISV for a "turn-key" ALM solution.  By definition, an end-to-end and turn-key ALM solution from one ISV is not possible and in fact, not desirable!  Put another way, as a System z user, do you really want to write off several decades' investment in an SCM solution, for another competitive solution, which will still require many other tools to provide the Integrated ALM capability you require?  As always, balance and compromise are the way forward…

If the ubiquitous System z Application Development ISV were to develop their first software product today, it would inevitably be a DevOps and ALM 2.0+ compatible product, allowing for full integration with all other Application Development tools, whether System z Mainframe or Distributed Systems orientated.  Of course, that is not the reality.  It seems somewhat disingenuous for a System z Application Development ISV to ask a potential customer to write off several decades' investment in an SCM technology, when said ISV has just acquired such a technology!  Once again, this is why the customer based Application Development teams must decide how they want to collaborate and what ALM and DevOps tools they want to use.

Seemingly a pragmatic solution is required, hence the ALM 2.0+ initiative.  If an ISV could develop an all-encompassing DevOps and ALM 2.0+ end-to-end Application Development solution for all IT platforms, they would probably become one of the most popular and biggest ISVs in a short time period.  However, this still overlooks the existing tools that customer IT organizations have used for many years.  Hence, the pragmatic way forward is to build an open DevOps and ALM 2.0+ solution that will integrate with all other Application Development lifecycle tools, whether commercially available or not!  HPE Application Lifecycle Management (ALM) and Quality Centre (QC) is one such approach for Distributed Systems, but what about the System z Mainframe?

IKAN ALM is an ALM 2.0+ and DevOps architected solution that is vendor and platform agnostic.  Put another way, IKAN ALM can operate in single or multiple-vendor mode.  In practice, single vendor mode is unlikely, as there are many efficient Application Development tools in the marketplace.  However, the single most compelling feature of IKAN ALM is its open framework and interoperability with other ALM technologies.  As previously stated, we might consider source code development as the Dev side of the DevOps framework.  IKAN ALM will interface with these technologies, while its core functionality concentrates on the Ops side of the DevOps framework.  Therefore from an Application Lifecycle Management (ALM) viewpoint, the IKAN ALM solution starts where versioning systems end, with an objective of optimizing the entire software engineering process.

IKAN ALM offers a uniquely integrated web-based Application Lifecycle Management platform for both Agile and traditional software development teams.  It combines Continuous Integration and Lifecycle Management, offering a single point of control, delivering support for build and deploy processes, approval processes, release management and software lifecycles.  From a pragmatic and common-sense viewpoint, organizations typically want to continue working with their preferred tools in their preferred environment.  Being ALM 2.0+ compliant, IKAN ALM fully integrates with any versioning tool and any issue tracking tool, providing ALM reports across repositories.  Therefore IKAN ALM offers an evolutionary approach, allowing an organization to realize timely ALM benefits, without risk and without the need to displace any existing technologies.  Over time, should the organization wish to displace older legacy ALM software products, they could do so, leveraging the stand-alone or multiple vendor flexibility of the IKAN ALM solution.

IKAN ALM incorporates ready to use solutions and processes for multiple environments.  These solutions include ALM 2.0+ compliant processes and the necessary scripts to automate the integration with a specific environment, including but not limited to CA Endevor (SCM), CollabNet, HPE ALM/Quality Centre (QC), Oracle Warehouse Builder (OWB), SAP, et al.

The IKAN ALM central server is an open framework web application responsible for User Authentication and Authorization, User Interface Processing, Distributed Version Repository Management and Scheduling Code Builds.  The IKAN ALM agents perform the application build and install functions.

The data repository is an open central database where all administrative data and the audit trail history are stored.  IKAN ALM communicates with the repository using standard JDBC interfaces.  The required JDBC drivers are installed along with the product.  The repository can reside in any RDBMS, including IBM DB2/UDB, Informix, Microsoft SQL Server, MySQL, Oracle, et al.
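For illustration purposes only, the following minimal Java sketch shows the style of standard JDBC access described above; the connection URL, credentials and table name are hypothetical placeholders and do not represent the actual IKAN ALM repository schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch: reading from an ALM-style repository database via standard JDBC.
// The URL, credentials and AUDIT_TRAIL table are illustrative placeholders only.
public class RepositoryQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://dbhost:50000/ALMREPO";   // hypothetical DB2 location
        try (Connection con = DriverManager.getConnection(url, "almuser", "secret");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT BUILD_ID, STATUS FROM AUDIT_TRAIL")) {
            while (rs.next()) {
                System.out.printf("Build %s -> %s%n", rs.getString(1), rs.getString(2));
            }
        }
    }
}

Because the interface is plain JDBC, swapping the repository between the supported RDBMS products is largely a matter of changing the driver and connection URL.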

Source code is always stored in a Version Control Repository.  IKAN ALM integrates with all the typical versioning systems including Apache Subversion, CVS, Git, Microsoft Visual SourceSafe (VSS), IBM Rational ClearCase (UCM & LT), Serena PVCS Version Manager, et al.  The choice of IDE often drives the choice of the Version Control System (VCS), and organizations can have more than one operational VCS.  IKAN ALM provides a single point of process control over all versioning systems present in the organization.  IKAN ALM stores each build result within its central server filesystem, labelling the source accordingly in the associated versioning system, guaranteeing a correct source-build relationship.

IKAN ALM safeguards Authentication & Authorization by interacting with the organization's security deployment (E.g. Active Directory, LDAP, Kerberos, et al) via the Java Authentication and Authorization Service (JAAS) interface.
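Purely as an illustration of the standard JAAS pattern referenced above, the following minimal Java sketch authenticates a user via a LoginContext; the "ALMLogin" configuration entry, user ID and password are hypothetical, and the real LoginModule behind it would be whatever the organization's security deployment dictates.

import javax.security.auth.Subject;
import javax.security.auth.callback.*;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

// Minimal sketch of authenticating through JAAS; "ALMLogin" is a hypothetical
// entry in the JAAS login configuration, typically backed by an LDAP, Kerberos
// or other site-specific LoginModule.
public class JaasAuthExample {
    public static void main(String[] args) {
        try {
            LoginContext lc = new LoginContext("ALMLogin", callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        ((NameCallback) cb).setName("almuser");            // placeholder user
                    } else if (cb instanceof PasswordCallback) {
                        ((PasswordCallback) cb).setPassword("secret".toCharArray());
                    }
                }
            });
            lc.login();                          // delegates to the configured LoginModule(s)
            Subject subject = lc.getSubject();   // authenticated principals and credentials
            System.out.println("Authenticated: " + subject.getPrincipals());
            lc.logout();
        } catch (LoginException e) {
            System.err.println("Authentication failed: " + e.getMessage());
        }
    }
}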

IKAN ALM audits all changes (E.g. Who, What, Why, When, Approver, et al) and orchestrates the various components and phases of Application Lifecycle Management, delivering an automated workflow that drives a continuous flow of activity throughout the development lifecycle, efficiently coordinating and optimizing application development changes.

In an environment with ever increasing mandatory regulatory compliance requirements, IKAN ALM simplifies the processes for delivering such compliance.  Via the IKAN ALM Build, Deploy, Lifecycle and Approval Management framework, compliance with industry standards and regulations (E.g. CMM, ITIL, Sarbanes-Oxley, Six Sigma, et al) is delivered via a reliable, repeatable and auditable process throughout the development life cycle.

Clearly any IT organization can benefit from a fully integrated ALM 2.0+ solution, by enforcing and safeguarding that the ALM process is repeatable, reliable and documented.  Regardless of the development team headcount, ALM releases key people from repetitive and less interesting tasks, allowing them to focus on delivering today's Analytics based, Cloud, Mobile and Social applications.  A fully integrated ALM 2.0+ solution such as IKAN ALM allows for simplified legacy environment modernization, while simultaneously allowing for experimentation and improvement of all environments alike, both legacy and new.

In conclusion, savvy organizations will safeguard that their Application Development and Operations teams collaborate as per the DevOps framework and decide how they want to implement processes for their environment and, more importantly, their business.  This focus should avoid any notion of asking the ubiquitous Application Development ISV "which tools should we use"!  Similarly, recognizing the integration foundation of ALM 2.0+ will eliminate any notion of displacing existing technologies and processes, unless absolutely required.  The need for agile, rapid and quality source code development and delivery is obvious, as is the related solution, which is inevitably pragmatic, evolutionary and multiple vendor tool based.

Optimize Your System z ROI with z Operational Insights (zOI)

Social Media Sharing

Hopefully all System z users are aware of the Monthly Licence Charge (MLC) pricing mechanisms, where a recurring charge applies each month.  This charge includes product usage rights and IBM product support.  If only it was that simple!  We then encounter the “Alphabet Soup” of acronyms, related to the various and arguably too numerous MLC pricing mechanism options.  Some might say that 13 is an unlucky number and in this case, a System z pricing specialist would need to know and understand each of the 13 pricing mechanisms in depth, safeguarding the lowest software pricing for their organization!  Perhaps we could apply the unlucky word to such a resource.  In alphabetical order, the 13 MLC pricing options are AWLC, AEWLC, CMLC, EWLC, MWLC, MzNALC, PSLC, SALC, S/390 Usage Pricing, ULC, WLC, zELC and zNALC!  These mechanisms are commercial considerations, but what about the technical perspective?

Of course, System z Mainframe CPU resource usage is measured in MSU metrics, where Sub-Capacity usage allows System z Mainframe users to submit SCRT reports, incorporating Monthly Licence Charge (MLC) and IPLA software maintenance, namely Subscription and Support (S&S), considerations.  We must then consider the Rolling 4-Hour Average (R4HA) and how best to optimize MSU usage accordingly.  At this juncture, we also need to consider how we measure the R4HA itself, in terms of performance tuning, so we can minimize R4HA MSU usage and optimize cost, without impacting Production, if not overall system performance.
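To make the R4HA arithmetic concrete, the following minimal Java sketch computes a rolling 4-hour average from per-interval MSU figures; the 5-minute interval length and sample values are assumptions for illustration only and are not SCRT or RMF output.

import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative arithmetic only: the R4HA at any point is the average MSU
// consumption over the preceding 4 hours of measurement intervals.
public class RollingFourHourAverage {
    private static final int INTERVALS_PER_4_HOURS = 48;   // assumed 48 x 5-minute intervals
    private final Deque<Double> window = new ArrayDeque<>();
    private double sum = 0.0;

    /** Add one interval's MSU figure and return the current rolling 4-hour average. */
    public double add(double intervalMsu) {
        window.addLast(intervalMsu);
        sum += intervalMsu;
        if (window.size() > INTERVALS_PER_4_HOURS) {
            sum -= window.removeFirst();        // slide the 4-hour window forward
        }
        return sum / window.size();
    }

    public static void main(String[] args) {
        RollingFourHourAverage r4ha = new RollingFourHourAverage();
        double[] sampleMsu = {420, 455, 480, 510, 495, 470};  // hypothetical interval MSU values
        for (double msu : sampleMsu) {
            System.out.printf("Interval MSU %.0f -> R4HA %.1f%n", msu, r4ha.add(msu));
        }
    }
}

The point of the sketch is simply that shaving the peak intervals within the 4-hour window, rather than average daily usage, is what reduces the billable R4HA figure.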

Finally, we have to consider that WLC has a ~17-year longevity, having been announced in October 2000, and in that time IBM have also introduced hardware features to assist in MSU optimization.  These hardware features include zIIP, zAAP and IFL, while there are other influencing factors, such as HiperDispatch, WLM and Relative Nest Intensity (RNI), naming but a few!  The Alphabet Soup continues…

In summary, since the introduction of WLC in Q4 2000, the challenge for the System z user is significant.  They must collect the requisite instrumentation data, perform predictive modelling and fully comprehend the impact of the current 13 MLC pricing mechanisms and their interaction with the ever-evolving System z CPU chip!  In the absence of such a simple-to-use reporting capability from IBM, there is a plethora of 3rd party ISV solutions, which are generally overly complex and require numerous products, more often than not from several ISVs.  These software solutions process the instrumentation data, generating the requisite metrics that allow an informed decision making process.

Bottom Line: This is way too complex!  Are there any green shoots of an alternative option?  Are there any easy-to-use data analytics based options for reducing MSU usage and optimizing CPU resources, which can then be incorporated into any WLC/MLC pricing considerations?

In February 2016 IBM launched their z Operational Insights (zOI) offering, as a new open beta cloud-based service that analyses your System z monitoring data.  The zOI objective is to simplify the identification of System z inefficiencies, while identifying savings options with associated implementation recommendations. At this juncture, zOI still has a free edition available, but as of September 2016, it also has a full paid version with additional functionality.

Currently zOI is limited to the CICS subsystem, incorporating the following functions:

  • CICS Abend Analysis Report: Highlights the top 10 abend types and the top 10 most frequently abending transactions for your CICS workload.  The resulting output classifies which CICS transactions might abend and, as a consequence, waste processor time.  Of course, the System z Mainframe user will still have to fix the underlying reason for the CICS abend!
  • CICS Java Offload Report: Highlights any transaction processing workload eligible for IBM z Systems Integrated Information Processor (zIIP) offload.  The resulting output delivers three categories for consideration: #1, the % of eligible workload that ran on a General Purpose CP; #2, the % of workload being offloaded to zIIP; #3, the % of workload that cannot be transferred to a zIIP.
  • CICS Threadsafe Report: Highlights threadsafe eligible CICS transactions, calculating the number of switches to and from the CICS Quasi-Reentrant Task Control Block (QR TCB) per transaction and the associated CPU cost (the sketch after this list illustrates the underlying arithmetic).  The resulting output identifies potential CPU savings achievable by making programs threadsafe, with the associated CICS subsystem changes.
  • CICS Region CPU Constraint: Highlights CPU constrained regions. CPU constrained CICS regions have reduced performance, lower throughput and slower transaction response, impacting business performance (I.E. SLA, KPI).  From a high-level viewpoint, the resulting output classifies CICS Region performance to identify whether they’re LPAR or QR constrained, while suggesting possible remedial actions.
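To illustrate the reasoning behind the threadsafe report, the following back-of-the-envelope Java sketch estimates the CPU spent on QR TCB switching for a single transaction ID; every input figure is a hypothetical placeholder, not a value produced by zOI or any measurement tool.

// Back-of-the-envelope sketch: estimated CPU spent on QR TCB switching =
// transactions x switches per transaction x CPU cost per switch pair.
// All figures below are assumed placeholders for illustration only.
public class ThreadsafeSwitchEstimate {
    public static void main(String[] args) {
        long transactionsPerDay   = 2_000_000L;   // assumed daily volume for one transaction ID
        double switchesPerTran    = 8.0;          // assumed QR TCB switches per transaction
        double cpuMicrosPerSwitch = 2.0;          // assumed CPU microseconds per switch pair

        double cpuSecondsPerDay =
            transactionsPerDay * switchesPerTran * cpuMicrosPerSwitch / 1_000_000.0;

        System.out.printf(
            "Estimated CPU spent on TCB switching: %.1f CPU seconds per day%n",
            cpuSecondsPerDay);
        // Making the programs threadsafe would recover some fraction of this cost,
        // subject to the usual CICS serialization and programming constraints.
    }
}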

Clearly the potential of zOI is encouraging, being an easy-to-use solution that analyses instrumentation data, classifies the best options from a quick win basis and provides recommendations for implementation.  Having been a recent user of this new technology myself, I would encourage each and every System z Mainframe user to try this no-risk IBM z Operational Insights (zOI) software offering.

The evolution for all System z performance analysis software solutions is to build on the comprehensive analysis capabilities that have evolved over the last ~20+ years, while incorporating intelligent analytics to classify data in terms of "Biggest Impact" and identify "Potential Savings", evolving MIPS measurement into BIPS (Biggest Impact Potential Savings) improvements!

IBM have also introduced a framework of IT Operations Analytics Solutions for z Systems.  This suite of interconnected products includes zOI, IBM Operations Analytics for z Systems, IBM Common Data Provider for z/OS and IBM Advanced Workload Analysis Reporter (IBM zAware).  Of course, if we lived in a perfect world, without a ~20 year MLC and WLC longevity, this might be the foundation for all of our System z CPU resource usage analysis.  Clearly this is not the case for the majority of System z Mainframe customers, but zOI does offer something different, with zero impact, both from a system impact and existing software interoperability viewpoint.

Bottom Line: Optimize Your System z ROI via zOI, Evolving From MIPS Measurement to BIPS Improvements!

DB2: Internal Subsystem Security vs. External Security Manager (ESM)?

Social Media Sharing

With the ever increasing requirement for regulatory compliance and the clear and present danger associated with cybersecurity attacks, isn't now the best time to safeguard your organization's most important asset, namely business data?  Various industry analyst quotes state that ~80%+ of Mainframe data resides in databases and associated data sources and ~80%+ of global corporate data originates or resides on IBM Mainframes.  Depending on your viewpoint, rightly or wrongly, DB2 is the most pervasive of database subsystems, offering two mechanisms for security, either internal subsystem based or External Security Manager (ESM) based via ACF2, RACF or Top Secret.  When DB2 was first released in 1983, Mainframe security was in its infancy and perhaps even an afterthought, and so implementing internal DB2 security might have been the typical approach for many years.  Several decades later, we ask that age-old question: what is the best security solution for my mission critical and priceless data?  I'm not sure it is even a rhetorical question; the answer is patently obvious, external security!

RACF and DB2 security integration was introduced in 1997 with OS/390 2.4 and DB2 Version 6, and so there was a ~14 year period where DB2 internal security was the only option!  Personally, ~20 years ago I was involved with an internal DB2 to RACF security migration project, part of a larger Operating System, DB2 and indeed CICS upgrade.  Basically the DB2 DBA team stated "we would never have implemented internal DB2 security if the RACF option had been available; can you migrate to RACF for us"?  The simple reality is that Security Management is not a core DBA skill and such a process is ideally delivered by a Subject Matter Expert (SME).  Of course, DB2 was somewhat straightforward ~20 years ago, as were its security features, but in the last ~20 years DB2 has become more complex and enterprise wide, while I'm often surprised by the number of organizations I encounter, both small and large, still deploying internal DB2 security…

Given the ~20 year longevity of RACF security support for DB2, maybe even the most conservative of organizations might concede that the technology is proven and works?  From a business viewpoint, such a migration from DB2 internal security to an External Security Manager (ESM) is the proverbial "no brainer", because:

  • Subject Matter Expert (SME): Clearly all IBM Mainframe organizations now have dedicated security professionals who are ideally placed to implement enterprise wide security policies. A DB2 DBA is a highly skilled SME in their own discipline, most likely welcoming the migration of security from DB2 to ACF2, RACF or Top Secret.
  • Compliance: A plethora of industry regulations, including but not limited to GLB, SOX, PCI-DSS, et al, dictate that a hybrid of technical skills and business policy knowledge is required. This has generated a requirement for the executive level CISO role and associated security certifications (E.g. CISA, CISM, CISSP) for SME resources.
  • Auditability: From a board level CxO viewpoint, which technical resource would you want responsible for your security policy, the CISO/CIO and their security engineers or a DB2 DBA?
  • Hacking-Penetration Testing: Rightly or wrongly, rightly in my opinion, Penetration (Pen) Testing is a methodology to try and hack a system in order to highlight security vulnerabilities, supplementing the traditional periodic audit processes. Once again, high levels of security expertise are required for such activities.

From a technical viewpoint, what is the complexity of performing a DB2 internal to RACF external security migration?

From a DB2 viewpoint, internal security rules are stored in DB2 catalog tables following the SYSIBM.SYSxxxAUTH naming convention.  Therefore these data repositories can be processed with a simplistic DB2 to RACF security migration tool (RACFDB2).  As per any migration activity, Garbage In, Garbage Out (GIGO) applies, and this golden rule dictates the requirement for a collaborative team effort to execute a DB2 to RACF security migration process.  Of course, the most important resources will be the DB2 DBA(s) responsible for maintaining DB2 security and a RACF SME.  Between them, these two resources have all of the skills required to perform this migration process, if not the experience.
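By way of illustration only, the following minimal Java/JDBC sketch extracts existing table privileges from the DB2 catalog as a starting point for such a migration review; the connection URL and credentials are placeholders, and the SYSIBM.SYSTABAUTH columns shown are typical but should be verified against your DB2 version before relying on the query.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch: listing existing table grants from the DB2 catalog ahead of a
// RACF migration review. Connection details are hypothetical; column names are
// typical SYSIBM.SYSTABAUTH columns and should be confirmed for your release.
public class Db2AuthExtract {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://zoshost:446/DSNLOC01";   // placeholder DB2 for z/OS location
        String sql = "SELECT GRANTOR, GRANTEE, TCREATOR, TTNAME, SELECTAUTH, UPDATEAUTH "
                   + "FROM SYSIBM.SYSTABAUTH ORDER BY GRANTEE, TCREATOR, TTNAME";
        try (Connection con = DriverManager.getConnection(url, "dbauser", "secret");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%-8s granted %-8s on %s.%s (SELECT=%s UPDATE=%s)%n",
                    rs.getString("GRANTOR"), rs.getString("GRANTEE"),
                    rs.getString("TCREATOR"), rs.getString("TTNAME"),
                    rs.getString("SELECTAUTH"), rs.getString("UPDATEAUTH"));
            }
        }
    }
}

A report of this type gives the DB2 DBA and RACF SME a common view of the existing grants, which is exactly the clean-up and rationalization input the migration tool expects.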

From a documentation viewpoint, there are several vendor resources that can be referenced to simplify this process.

The purpose of this blog post is a "call to action" for those sites still deploying DB2 internal security, to migrate to their External Security Manager (ESM), whether ACF2, RACF or Top Secret.  There are also equivalent options for the migration of internal DB2 security to CA ACF2 and CA Top Secret.

As previously stated, the DB2 DBA will be ideally placed to review the existing internal DB2 security environment, performing any clean-up and rationalization before the actual migration process.  The initial pass of the migration process will inevitably produce a one-to-one (1:1) mapping of rules, generating numerous security definitions extraneous to requirements.  This is where the ACF2, RACF or Top Secret SME can collaborate with their DB2 DBA, applying grouping, masking and generic filters to vastly reduce and simplify the number of security definitions required.  As with any migration, perform the process on the lowest level non-Production environment first, apply the lessons learned and use common sense, issuing warning messages for inadvertent security policy errors, as opposed to denying security access, for Production migrations, therefore allowing for a smooth transition from DB2 internal to ESM based security.

In my opinion, each and every IBM Mainframe organization has the ability to initiate this DB2 internal to external ACF2, RACF or Top Secret migration project.  Leveraging 3rd party organizations also makes sense and, in no particular order other than alphabetical, I would suggest IBM Global Services, millennia, RSM Partners or Vanguard.

In conclusion, the IBM System z External Security Manager (ESM), whether ACF2, RACF or Top Secret is an ever-evolving solution with highly advanced security functionality and the de facto central repository for IBM Mainframe security policy management.  From a Security Information & Event Management (SIEM) integration viewpoint, any IBM Mainframe security policy violations will be reported upon in near real-time, while being managed by IBM Mainframe security experts.  Without doubt, if DB2 was implemented before 1997, internal security would have applied, but there has been a ~20 year period where migration to the ACF2, RACF or Top Secret ESM could have happened.  If such a migration activity applies to your organization, I would hope it’s a high priority item, given the potential security risk and priceless value of your business data!

System z: I/O Interoperability Evolution – From Bus & Tag to FICON

Social Media Sharing

Since the introduction of the S/360 Mainframe in 1964 there has been a gradual evolution of I/O connectivity that has taken us from copper Bus & Tag to fibre ESCON and now FICON channels.  Obviously during this ~50 year period there have been many more releases of Mainframe servers and indeed Operating Systems.  In this timeframe there have been two significant I/O technology milestones.  Firstly, in 1990, ESCON was part of the significant S/390 announcement (MVS/ESA), where migration to ESCON was a great benefit, if only for replacing the heavy and bulky copper Bus & Tag channels.  Secondly, even though FICON was released in the late 1990s, in 2009 IBM announced that the z10 would be the last Mainframe server to support greater than 240 native ESCON channels.  Similarly IBM declared that the z196 and z114 would be the last zEnterprise servers to support ESCON channels.  Each of these major I/O evolutions required a migration philosophy and not every I/O device would be upgraded to support either native ESCON or FICON channels.  How did customers achieve these mandatory I/O upgrades to safeguard IBM Mainframe Server and associated Operating System longevity?

In 2009 it was estimated that ~20% of all Mainframe customers were using ESCON-only I/O infrastructures, while only ~20% of all Mainframe customers were deploying a FICON-only infrastructure.  Similarly ~33% of z9 and z10 systems were shipped with ESCON CVC (Block Multiplexor) and CBY (Byte Multiplexor) channels defined, while ~75% of all Mainframe Servers had native ESCON (CNC) capability.  From a dispassionate viewpoint, clearly the migration from ESCON to FICON was going to be a significant challenge, while even in this timeframe, there was still use of Bus & Tag channels…

One of the major strengths of the IBM Mainframe ecosystem is the partner network, primarily software (ISV) based, but with some significant hardware (IHV) providers.  From a channel switch viewpoint, we will all be familiar with Brocade, Cisco and McData, where Brocade acquired McData in 2006.  However, from a channel protocol conversion viewpoint, IBM worked with Optica Technologies to deliver a solution that would allow support for ESCON and Bus & Tag channels on the FICON-only zBC12/zEC12 and future Mainframe servers (I.E. z13, z13s).  Somewhat analogous to the smartphone user who doesn't necessarily know that an ARM processor might be delivering CPU power to their phone, sometimes even seasoned Mainframe professionals might inadvertently overlook that the Optica Technologies Prizm solution has been, or indeed still is, deployed in their System z Data Centre…

When IBM work with a partner from an I/O connectivity viewpoint, clearly IBM have to safeguard that said connectivity has the highest interoperability capability with bulletproof data exchange attributes.  Sometimes we might take this for granted with the ubiquitous disk and tape subsystem suppliers (I.E. EMC, HDS, IBM, Oracle), but for FICON conversion support, Optica Technologies was a collaborative partner for IBM.  Ultimately the IBM Hardware Systems Assurance labs deploy their proprietary System Assurance Kernel (SAK) processes to safeguard I/O subsystem interoperability for their System z Mainframe servers.  Asking that rhetorical question; when was the last time you asked your IHV for sight of the System Assurance Kernel (SAK) exit report from their collaboration with the IBM Hardware Systems Assurance labs, for the I/O subsystem you're considering or deploying?  In conclusion, the SAK compliant, elegant, simple and competitively priced Prizm solution allowed the migration of tens if not hundreds of thousands of ESCON connections in thousands of Mainframe data centres globally!

With such a rich heritage of providing a valuable solution to the global IBM Mainframe install base, whether the smallest or largest, what would be next for Optica Technologies?  Obviously leveraging their expertise in FICON channel support would be a good way forward.  With the acquisition of Bus-Tech by EMC and the subsequent withdrawal of the flexible MDL tapeless virtual tape offering, Optica Technologies are ideally placed to be that small, passionate and eminently qualified IHV delivering a turnkey virtual tape solution for the smaller and indeed larger System z user.  The Optica Technologies zVT family leverages the robust and heritage class Prizm technology, delivering an innovative family of virtual tape solutions.  The entry-level "Virtual Tape In A Box" zVT 3000i provides 2 FICON channel interfaces and 4 TB of uncompressed internal RAID-5 disk space, seamlessly interfacing with all System z supported tape devices (I.E. 3490, 3590) and processes.  A single enterprise class zVT 5000-iNAS node delivers 2 FICON channel interfaces and NFS storage capacity from 8 TB to 1 PB in a single frame, with standard deduplication, compression, replication and encryption features.  The zVT 5000-iNAS is available with multi-node configuration support for additional scalability and resiliency.  For those customers wishing to deploy their own choice of NFS or FC storage subsystem, the zVT 5000-FLEX allows such connectivity accordingly.

In conclusion, sometimes it's all too easy to take some solutions for granted, when they actually delivered a tangible and arguably priceless capability in the evolution of your organization's System z Mainframe server journey from ESCON, if not Bus & Tag, to FICON.  Perhaps the Prizm solution is one of these unsung products?  Therefore, the next time you're reviewing the virtual tape marketplace, why wouldn't you seriously consider Optica Technologies, given their rich heritage in FICON channel interoperability?  Given that IBM chose Optica Technologies as their strategic partner for ESCON to FICON migration, seemingly even IBM might have thought "nobody gets fired for choosing…"!