System z: I/O Interoperability Evolution – From Bus & Tag to FICON

Since the introduction of the S/360 Mainframe in 1964 there has been a gradual evolution of I/O connectivity, taking us from copper Bus & Tag to fibre ESCON and now FICON channels.  Obviously during this ~50 year period there have been many releases of Mainframe server and indeed Operating System.  In this timeframe there have been two significant I/O technology milestones.  Firstly, in 1990, ESCON was part of the significant S/390 announcement (MVS/ESA), where migration to ESCON was a great benefit, if only for replacing the heavy and bulky copper Bus & Tag channels.  Secondly, even though FICON was released in the late 1990’s, in 2009 IBM announced that the z10 would be the last Mainframe server to support greater than 240 native ESCON channels.  Similarly IBM declared that the last zEnterprise servers to support ESCON channels would be the z196 and z114.  Each of these major I/O evolutions required a migration philosophy, and not every I/O device would be upgraded to support either native ESCON or FICON channels.  How did customers achieve these mandatory I/O upgrades to safeguard IBM Mainframe server and associated Operating System longevity?

In 2009 it was estimated that ~20% of all Mainframe customers were using ESCON-only I/O infrastructures, while only ~20% were deploying a FICON-only infrastructure.  Similarly, ~33% of z9 and z10 systems were shipped with ESCON CVC (Block Multiplexor) and CBY (Byte Multiplexor) channels defined, while ~75% of all Mainframe servers had native ESCON (CNC) capability.  From a dispassionate viewpoint, clearly the migration from ESCON to FICON was going to be a significant challenge, while even in this timeframe, there was still use of Bus & Tag channels…

One of the major strengths of the IBM Mainframe ecosystem is the partner network, primarily software (ISV) based, but with some significant hardware (IHV) providers.  From a channel switch viewpoint, we will all be familiar with Brocade, Cisco and McDATA, where Brocade acquired McDATA in 2006.  However, from a channel protocol conversion viewpoint, IBM worked with Optica Technologies to deliver a solution that would allow support for ESCON and Bus & Tag channels on the FICON-only zBC12/zEC12 and future Mainframe servers (I.E. z13, z13s).  Somewhat analogous to the smartphone, where the user doesn’t necessarily know that an ARM processor might be delivering CPU power to their phone, sometimes even seasoned Mainframe professionals might inadvertently overlook that the Optica Technologies Prizm solution has been, or indeed still is, deployed in their System z Data Centre…

When IBM work with a partner from an I/O connectivity viewpoint, clearly IBM have to safeguard that said connectivity has the highest interoperability capability with bulletproof data exchange attributes.  Sometimes we might take this for granted with the ubiquitous disk and tape subsystem suppliers (I.E. EMC, HDS, IBM, Oracle), but for FICON conversion support, Optica Technologies was a collaborative partner for IBM.  Ultimately the IBM Hardware Systems Assurance labs deploy their proprietary System Assurance Kernel (SAK) processes to safeguard I/O subsystem interoperability for their System z Mainframe servers.  Asking that rhetorical question: when was the last time you asked your IHV for sight of their System Assurance Kernel (SAK) exit report from their collaboration with the IBM Hardware Systems Assurance labs for the I/O subsystem you’re considering or deploying?  In conclusion, the SAK compliant, elegant, simple and competitively priced Prizm solution allowed the migration of tens if not hundreds of thousands of ESCON connections in thousands of Mainframe data centres globally!

With such a rich heritage of providing a valuable solution to the global IBM Mainframe install base, whether the smallest or largest, what would be next for Optica Technologies?  Obviously leveraging their expertise in FICON channel support would be a good way forward.  With the recent acquisition of Bus-Tech by EMC and the eradication of the flexible MDL tapeless virtual tape offering, Optica Technologies are ideally placed to be that small, passionate and eminently qualified IHV to deliver a turnkey virtual tape solution for the smaller and indeed larger System z user.  The Optica Technologies zVT family builds upon the robust and proven Prizm technology, delivering an innovative family of virtual tape solutions.  The entry level “Virtual Tape In A Box” zVT 3000i provides 2 FICON channel interfaces and 4 TB of uncompressed internal RAID-5 disk space, seamlessly interfacing with all System z supported tape devices (I.E. 3490, 3590) and processes.  A single enterprise class zVT 5000-iNAS node delivers 2 FICON channel interfaces and NFS storage capacity from 8 TB to 1 PB in a single frame, with standard deduplication, compression, replication and encryption features.  The zVT 5000-iNAS is available with multi-node configuration support for additional scalability and resiliency.  For those customers wishing to deploy their own choice of NFS or FC storage subsystem, the zVT 5000-FLEX allows such connectivity accordingly.

In conclusion, sometimes it’s all too easy to take some solutions for granted, when they actually delivered a tangible and arguably priceless solution in the evolution of your organization’s System z Mainframe server journey from ESCON, if not Bus & Tag, to FICON.  Perhaps the Prizm solution is one of these unsung products?  Therefore, the next time you’re reviewing the virtual tape marketplace, why wouldn’t you seriously consider Optica Technologies, given their rich heritage in FICON channel interoperability?  Given that IBM chose Optica Technologies as their strategic partner for ESCON to FICON migration, seemingly even IBM might have thought “nobody gets fired for choosing…”!

zHyperLink: Just Another System z DASD I/O Function Enhancement?

Over the last several decades the IBM Mainframe platform has delivered several new technologies that have dramatically improved disk (DASD) I/O performance.  Specifically, ESCON introduced fibre optic channels, followed by EMIF for channel sharing and reduced I/O protocol overhead, superseded by FICON and most recently zHPF.  All of these technologies have allowed ever larger amounts of data to be processed by the System z server and enabled the adoption of Geographically Dispersed Parallel Sysplex (GDPS) implementations for business continuity reasons.  Ultimately mission critical data and decisions are facilitated by applications, where sub-second response times for these transactions are expected.  Some might say that we’re always running to stand still from a performance perspective when implementing the latest System z technologies?

In reality, today’s 21st Century mission-critical application is not just capturing and storing customer data; it’s doing so much more, attempting to make informed business decisions for a richer customer experience!  Historically a customer transaction would be on a one-to-one basis (E.g. a balance query), whereas today, said transaction might generate more data for the customer, potentially offering them a new or enhanced product.  In theory, this informed and intelligent transaction processing delivers a richer experience for the customer and potentially new revenue opportunities for the business.

For several years IBM have integrated the Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative into their product offerings, recognising that a business transaction can originate from the cloud or a mobile device, potentially via a Social Media platform, requiring rich processing via real-time analytics and the highest levels of security.  Of course, one must draw one’s own conclusions, but maintaining sub-second or ultra-fast transaction response times with this level of CAMSS complexity requires significant performance enhancements.  To deliver such ultra-fast response times requires the DASD I/O subsystem to maintain the highest levels of performance, aligned with the latest System z server platform…

In January 2017 IBM issued a Statement of Direction (SoD) and associated FAQ for their zHyperLink technology.  zHyperLink is a new short distance Mainframe link technology, designed for up to 10 times lower latency than zHPF, intended to accelerate DB2 for z/OS transaction processing and improve active log throughput.  IBM intends to deliver field upgradable support for zHyperLink on the existing IBM DS8880 storage subsystem.  zHyperLink technology is a new Mainframe attach link, the result of collaboration between DB2 for z/OS, the z/OS operating system, IBM System z servers and the DS8880 storage subsystem, to deliver extremely low latency I/O access for DB2 for z/OS applications.  zHyperLink technology is intended to complement FICON technology, accelerating those I/O requests typically used for transaction processing.  These links are point-to-point connections between the System z CEC and the storage system, limited to 150 metre distances.  These links do not impact the z/Architecture 8 channel path limit.

From a DB2 I/O service time viewpoint, at short distances a native FICON or zHPF originated I/O typically requires 300 Microseconds (μs) for a simple I/O operation.  The coupling facility for z Systems can typically read or write 4 KB of data in under 8 Microseconds.  zHyperLink technology will provide a new short distance link from the Mainframe to storage, to read and write data up to 10 times faster than FICON or zHPF, reducing DB2 I/O service times to an anticipated 20-30 Microseconds.
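To put those figures in context, here is a minimal Python sketch of the arithmetic, using the approximate service times quoted above; these are the article’s estimates, not measured benchmarks.

```python
# Back-of-the-envelope arithmetic using the approximate service times
# quoted above; these are estimates from the SoD/FAQ, not measurements.

FICON_ZHPF_US = 300       # typical simple I/O via native FICON/zHPF (microseconds)
CF_4K_US = 8              # coupling facility 4 KB read/write (microseconds)
ZHYPERLINK_US = (20, 30)  # anticipated zHyperLink DB2 I/O service time range

for zhl_us in ZHYPERLINK_US:
    speedup = FICON_ZHPF_US / zhl_us
    print(f"zHyperLink @ {zhl_us} us -> ~{speedup:.0f}x faster than FICON/zHPF")
# The 30 us case yields the "up to 10 times lower latency" headline figure.
```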

In conclusion, with a promise of 10 times faster processing, as per its fibre optic channel technology predecessors, particularly EMIF and zHPF, zHyperLink is a revolutionary DASD I/O function and not just another DASD I/O subsystem function enhancement.  At this stage, the deployment of zHyperLink functionality is restricted to DB2 and the IBM DS8880 storage subsystem, while we eagerly await compatibility support from EMC and HDS accordingly.  Moreover, as per the evolution of zHPF, we hope for the inclusion of other I/O workloads to benefit from this paradigm changing I/O response time technology.

Finally, the realm of possibility always exists for each and every System z DASD I/O subsystem to be monitored and tuned on a proactive, 24*7*365 basis.  Although all of this DASD I/O performance data has always been and still is captured by RMF (CMF), intelligent processing of this data requires an ever evolving Performance Management process and arguably an intelligent software solution (E.g. IntelliMagic Direction Disk Magic or Technical Storage Easy Analyze Disk Mainframe) to provide meaningful information and business decisions from ever increasing amounts of RMF (CMF) data.  In November 2016 I delivered the DASD I/O Performance Management Is Easy? session at the UK GSE Annual 2016 meeting accordingly…

System z: Optimizing DASD I/O Subsystem Performance

Historically there was a very simple synergy between the IBM S/370 Mainframe and its supporting disk I/O (DASD) subsystem, allowing for Mainframe host to physical and logical disk device (I.E. 3390) connectivity. The analysis and tuning of this I/O subsystem has always been and continues to be supported by the SMF Type 7n records, via IBM RMF and the BMC CMF alternative. However, over the years, major advances in DASD subsystems and the System z Mainframe server have delivered many layers of technology resources (E.g. Cache, Memory, FICON Channels, RAID Storage, Proprietary Microcode, et al), introducing complexity into the identification of DASD I/O subsystem performance problems.

The focus on technology based metrics (E.g. I/O Rate, Response Time, I/O MB/S Bandwidth, et al) has also been complemented with more meaningful business focussed Service Level Agreements (SLA). Therefore today’s System z I/O Performance Analyst must gather and act upon proactive, meaningful information from the ever-increasing amounts of performance data available. Put another way, too much data can deliver not enough information! As previously stated, it was forever thus; RMF and CMF have always collected the requisite performance data and arguably no other data source is required (E.g. OMEGAMON/TMON/SYSVIEW Performance Monitor, SAS/MXG/MICS/WPS Performance Database). RMF/CMF is the ideal data source for thorough and timely System z I/O performance management, where intelligent analytics and expert knowledge are required to present this “Golden Record”.

However, today’s System z Support Teams need simple and timely presentation of the data, highlighting potential challenges, graphically presented for their Management, allowing for simple tracking of SLAs and technology changes (I.E. Software/Hardware Upgrades).

Additionally, Workload Manager (WLM) can control non-paging queued DASD I/O requests, based upon device busy conditional processing. Therefore the z/OS system can manage I/O priorities in a Sysplex, based on WLM service class goals. WLM dynamically adjusts the I/O priority based on service class goal performance and whether a DASD device can influence the overall performance objectives. For obvious reasons, this WLM function does not micro-manage I/O priorities, only changing a service class period’s I/O priority infrequently. WLM is deployed by many System z users to assist in the automated management of system resources (E.g. CPU, Memory, I/O, et al), based upon Service Level goals.
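As a conceptual illustration only, the toy Python model below mimics the infrequent, receiver/donor style I/O priority adjustment described above; all class names, goal values, priority numbers and the adjustment rule are illustrative assumptions, since the real WLM algorithms are internal to z/OS.

```python
# Toy model of goal-based I/O priority adjustment, loosely inspired by the
# WLM behaviour described above.  All names, values and the adjustment rule
# are illustrative assumptions; real WLM logic is proprietary to z/OS.

from dataclasses import dataclass

@dataclass
class ServiceClassPeriod:
    name: str
    goal_ms: float        # response time goal
    actual_ms: float      # measured response time
    io_priority: int      # current I/O priority (higher = more favoured)

    @property
    def performance_index(self) -> float:
        # PI > 1.0 means the period is missing its goal
        return self.actual_ms / self.goal_ms

def adjust_io_priorities(periods: list) -> None:
    """Infrequent, coarse adjustment: donate priority from an over-achiever
    to a goal-missing period, rather than micro-managing every I/O."""
    receiver = max(periods, key=lambda p: p.performance_index)
    donor = min(periods, key=lambda p: p.performance_index)
    if receiver.performance_index > 1.0 and donor.performance_index < 1.0:
        receiver.io_priority += 1
        donor.io_priority -= 1

periods = [
    ServiceClassPeriod("ONLINE", goal_ms=15.0, actual_ms=22.0, io_priority=248),
    ServiceClassPeriod("BATCH", goal_ms=60.0, actual_ms=35.0, io_priority=246),
]
adjust_io_priorities(periods)
for p in periods:
    print(p.name, f"PI={p.performance_index:.2f}", "I/O priority:", p.io_priority)
```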

From a DASD subsystem technology viewpoint, there is no longer an obvious one-to-one direct connection between the Mainframe host and DASD device. An increasing number of technological advances, both microcode and hardware (E.g. Memory, Fibre Channel, Function Assist Processing, et al), have diminished the requirement for data access directly from the physical device. Put another way, in today’s world of System z servers with multiple cache level CPU chips (I.E. Relative Nest Intensity), massive and multiple processor memory resources (I.E. z13 @ 10 TB Memory), high bandwidth Fibre Channel (I.E. FICON, zHPF) subsystems and a hierarchy of DASD memory (I.E. SSD/Flash, Cache), it’s not uncommon to consider an I/O that requires physical device access as a problem! Finally and most importantly, from a DASD subsystem viewpoint, each of the recognized System z DASD providers, EMC (Symmetrix VMAX), HDS (VSP G1000) and IBM (DS8870), has a highly proprietary DASD subsystem that provides z/OS plug compatibility, but delivers overall I/O performance using its own unique architecture and internal algorithms.

Of course, an over-configured hardware environment will deliver a poor TCO, while an under-configured environment will manifest in SLA issues and bad user experiences; the middle ground always delivers the optimal environment. Resource optimization always demands proactive day-to-day management, from an internal and indeed external communication viewpoint. With the highly proprietary design features of the IHV DASD subsystems, whether EMC, HDS or IBM, having the right information and identifying the precise problem simplifies the communication process with the IHV. Such communication might highlight a resource under provision (E.g. Memory Capacity), a subsystem setting tweak requirement, either host or subsystem based, or indeed a hardware failure. In today’s world, these issues need to be fixed in minutes or hours, not days or weeks.

Therefore, where does today’s System z I/O Performance Analyst start to collect the required information to safeguard that their DASD subsystem is optimized, both from a capacity and performance viewpoint?

A simplistic viewpoint of an I/O health-check should consider the following:

  • Service Level Agreements (SLA): Are overall objectives being delivered or missed?
  • User Experience: Are users (customers) complaining of poor service or response times?
  • I/O Metric Performance: Are there obvious signs of abnormal performance statistics?

Several decades ago, an overall I/O health check might have been a periodic (E.g. Weekly or longer) activity, whereas today it’s undoubtedly a Business As Usual (BAU) and 24*7 activity. Therefore a fully automated solution is required, built upon the tried and tested System z performance fundamentals, namely RMF or CMF. The ideal solution will perform analytics based data reduction, presenting the right information, at the right time, allowing for intelligent business based communication, both internally, to customers and end users from an SLA viewpoint, and externally, with IHV DASD suppliers, safeguarding optimal performance and TCO.
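As a minimal sketch of such analytics based data reduction, the Python fragment below flags DASD volumes breaching an illustrative SLA response time threshold; it assumes RMF (or CMF) device activity data has already been exported to a CSV file, whose name and column headings are hypothetical and will vary by tooling.

```python
# Minimal sketch of analytics-based data reduction: flag DASD volumes whose
# average response time breaches an SLA threshold.  Assumes RMF (or CMF)
# device activity data has been exported to CSV; the file name and column
# names below are hypothetical assumptions.

import csv

SLA_RESPONSE_MS = 5.0  # illustrative SLA threshold, milliseconds

def sla_exceptions(path: str) -> list:
    exceptions = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            resp_ms = float(row["AVG_RESP_MS"])
            if resp_ms > SLA_RESPONSE_MS:
                exceptions.append({"volser": row["VOLSER"],
                                   "lcu": row["LCU"],
                                   "resp_ms": resp_ms})
    # Worst offenders first: the right information, at the right time
    return sorted(exceptions, key=lambda e: e["resp_ms"], reverse=True)

for e in sla_exceptions("rmf_device_activity.csv"):
    print(f"{e['volser']} (LCU {e['lcu']}): {e['resp_ms']:.1f} ms > SLA {SLA_RESPONSE_MS} ms")
```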

EADM (Easy Analyze DASD Mainframe) is a solution from Technical Storage that performs automated performance analysis of the z/OS I/O subsystem, delivering predictive analytics for better storage capacity planning and performance measurement. The Technical Storage EADM architects have in excess of 40 years of IBM Mainframe experience, specializing in the I/O subsystem, and so it’s no surprise that EADM delivers expert and timely knowledge via an easy-to-use solution.

EADM is an easy-to-install and easy-to-use plug-and-play solution that has no proprietary considerations, requiring no additional System z resources (E.g. CPU, Memory, DASD, et al). Installed on Microsoft server platforms, EADM is easily virtualized via VMware, Hyper-V, et al, requiring no target database for performance data storage. EADM performs a daily health check of the entire System z disk subsystem. EADM works around the clock, delivering customized and automatic user friendly GUI type reports. For today’s System z technician, the open and IP architecture base of EADM allows for secure remote access via Mobile, Tablet or Laptop devices, as and when required.

Operations and performance teams are alerted as soon as performance variances occur, typically in minutes, assisting in the identification of the underlying root problems causing changes in system behaviour. Incorporating intelligent and meaningful I/O performance indicators, with drill-down and zoom-in ability, storage technicians can determine whether the problem is temporary, permanent, local or global. By simplifying the data reduction process (E.g. RMF/CMF data from numerous LPAR/Sysplex environments), EADM safeguards that the internal technical team can efficiently manage their increasingly complex and large DASD environment, for intelligent and timely communications with internal business teams and external suppliers alike.

EADM simplifies the System z I/O subsystem capacity and performance management process, delivering expert reports and timely historical analysis, for example:

  • Automatic daily (24 Hour) analysis of Sysplex wide workload (On-Line TP & Batch) I/O response times
  • Systematic intelligent alerts of early performance variances with exact occurrence time indicators
  • Identification of I/O performance hot-spots with DASD volume and data set level granularity
  • Performance trending at DFSMS Storage Group, Subsystem LCU and DASD volume level
  • DR (E.g. PPRC) simulations to prevent data loss and forecast Data Centre failover scenarios
  • I/O subsystem WLM indicators to determine exactly what impacts performance objectives
  • Full FICON channel and zHPF analysis, incorporating typical I/O throughput indicators
  • HyperPAV and associated LCU indicators to easily balance volumes, optimizing PAV alias allocation
  • Performance monitoring and balancing via intelligent LCU, SSID and I/O analytics
  • DASD capacity usage via DCOLLECT data, comparing assigned vs. allocated vs. actual disk utilization
  • Support for entry-level (several LPAR) and complex multiple CPC/LPAR System z configurations

A well provisioned and performing System z I/O subsystem is of vital importance for safeguarding today’s ever increasing storage requirements of mission critical business applications. A poorly performing I/O subsystem will generate unnecessary and extra CPU overhead, with tangible TCO impact, in conjunction with potential business impact. Although the advances of the System z server and underlying DASD I/O subsystem can compensate for many application code or data placement issues, the fundamental concepts of analysing and tuning the I/O subsystem remain.

Therefore the savvy and proactive System z customer will safeguard that they find a solution to deliver optimal DASD I/O performance. Without doubt, such an analysis could be performed by a highly-skilled individual, but today’s 21st Century world demands a hybrid of technical and commercial skills. Therefore a solution that incorporates the diagnostic knowledge of the most highly trained technician, performs intelligent analytics on a plethora of Sysplex wide performance data sources and presents the information required, is one that will deliver benefit each and every day. EADM is an example of such a solution, delivering demonstrable System z TCO optimization benefits, while safeguarding a short-term ROI, with simple deployment and resource utilization attributes.

FICON (Fibre Connection Channel): 15 Years of Mainframe I/O Improvements

In 1998, IBM introduced FICON channels for enhanced I/O connectivity and performance on their 9672 G5 processors, delivering significant capability when compared to their predecessor, ESCON.  Let’s not forget that ESCON (Enterprise Systems Connection – S/390) was the first iteration of fibre optic channels for the IBM Mainframe, delivering significant capability when compared with the previous technology of heavy, large and costly copper based Bus & Tag parallel (S/370) channels.

ESCON channels were first introduced in the early 1990’s, but after less than a decade, the data and associated storage device explosion was exposing the technical limitations of ESCON, for example:

  • Mainframe Server Channel Support: One IBM Mainframe processor could only support 256 ESCON channels, whereas FICON offered a ~5-8:1 reduction in channel requirements.  Put another way, a customer could expect to consolidate from ~200 ESCON to ~30-40 FICON channels (see the worked arithmetic after this list).
  • Device Support: One ESCON channel could support up to 1024 devices (sub-channel/device numbers), whereas a 9672 FICON channel increased support 16 fold, to 16,384 (16 K) devices.
  • Distance: The performance of ESCON dropped off significantly when the distance between the channel and associated Control Unit was greater than ~9 KM.  FICON increased this distance separation to ~100 KM, paving the way for the Geographically Dispersed Parallel Sysplex (GDPS) topologies we take for granted today.
  • Performance: ESCON performance was limited to 17 MB/S, whereas the first evolution of FICON channels delivered 100 MB/S full-duplex performance.
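As a simple illustration of these figures, the Python sketch below reworks the approximate numbers quoted in the list above; the channel counts and ratios are this article’s rough estimates, not configuration rules, and the computed 25-40 channel range broadly brackets the ~30-40 consolidation quoted above.

```python
# Worked arithmetic on the approximate figures quoted above; purely
# illustrative, using this article's rough estimates rather than any
# formal configuration rule.

escon_channels = 200                 # typical ESCON channel count quoted above
for ratio in (5, 8):                 # ~5-8:1 FICON consolidation range
    print(f"{escon_channels} ESCON @ ~{ratio}:1 -> ~{escon_channels // ratio} FICON channels")

devices_escon = 1024                 # devices per ESCON channel
devices_ficon = 16 * devices_escon   # 16 fold increase = 16,384 devices
print(f"Devices per channel: {devices_escon} (ESCON) vs {devices_ficon} (FICON)")

print(f"Bandwidth: 17 MB/S (ESCON) vs 100 MB/S full-duplex (FICON) = ~{100 / 17:.1f}x")
```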

Clearly the first iteration of FICON technology delivered significant benefit to the IBM Mainframe User, and arguably is the primary Mainframe evolution that has sustained data growth and the adoption of Disaster Recovery and Business Continuity resiliency.  So, what does FICON offer today, some 15 years later?

Just as FICON superseded ESCON, FICON Express has now superseded FICON, offering a technology base that can continue to deliver benefit for many years to come.  FICON Express continues the tradition of offering more capabilities with each new generation of FICON channel.  The features were designed with the future in mind, while remembering the past, supporting the data serving leadership of System z and enabling improved data access using High Performance functions (I.E. zHPF), while providing backwards compatibility, auto-negotiating link data rates of 2, 4 or 8 Gbps, namely the various FICON Expressn iterations (2/4/8).

High Performance FICON for System z (zHPF) is a data transfer protocol that is optionally deployed for accessing data from IBM Mainframe compatible storage subsystems (E.g. IBM DS8000, EMC Symmetrix V-Max, HDS USP, et al).  Initially the data types supported were DB2, PDSE, VSAM, zFS and Extended Format SAM, and more latterly, legacy access methods including QSAM, BPAM and BSAM are now supported.  zHPF leverages the potential of FICON channels to deliver significant performance enhancements, and can help reduce the infrastructure costs for System z I/O by efficiently utilizing I/O resources, minimizing CHPID (Channels), Fiber (Cables), Switch Ports (E.g. Cisco, Brocade) and Control Unit (E.g. Disk Subsystem) resource requirements.  zHPF also complements the Extended Address Volumes (EAV) strategy for growth, increasing I/O rate capability as the associated disk volume size increases.

The latest generation FICON Express8S channel has two possible modes of operation designed for connectivity to servers, switches/directors, disks, tapes and printers:

  1. CHPID Type FC: FICON, zHPF, and channel-to-channel (CTC) traffic for the z/OS, z/VM, z/VSE, z/TPF, and Linux on System z environments
  2. CHPID Type FCP: Fibre Channel Protocol (FCP) for attachment to SCSI devices for the z/VM, z/VSE, and Linux on System z environments

With FCP channel full fabric support, multiple switches/directors can be placed between the System z server and SCSI device, allowing many “hops” through a storage area network (SAN) and providing improved utilization of intersite-connected resources and infrastructure.  This may help to provide more choices for storage solutions or the ability to use existing storage devices and can help facilitate the consolidation of Distributed Systems servers (E.g. UNIX, Wintel) onto System z servers, protecting investments in SCSI-based storage.

I/O performance improvement rates for the initial iterations of FICON when compared to ESCON, then FICON Express when compared to FICON, and more latterly zHPF, have been significant.  Using like-for-like benchmark performance studies, we can see these improvements:

I/O Driver @ 4K Block Size – ~ I/Os Per Second

Channel Type                #I/Os per Sec    n:1 Increase
ESCON                                1200    N/A
FICON Express2/4 Native             14000    11.7
FICON Express2/4 zHPF               31000    2.3
FICON Express8 Native               20000    1.5
FICON Express8 zHPF                 52000    2.6
FICON Express8S Native              23000    1.2
FICON Express8S zHPF                92000    1.8

NB. Maximum performance is server related (E.g. z10, z114, z196, zEC12).

The latest 8 Gbps FICON Express8S channel leveraging the zHPF function delivers ~76 times more I/O throughput than ESCON, while significantly increasing throughput, by at least 50% from generation to generation.

I/O Driver Mixed Read/Write – ~ MBs Per Second

Channel Type                #MBs per Sec    n:1 Increase
ESCON                                  12    N/A
FICON Express4 Native                 350    29.2
FICON Express4 zHPF                   620    1.8
FICON Express8 Native                 620    1.8
FICON Express8 zHPF                   770    1.3
FICON Express8S Native                620    1.0
FICON Express8S zHPF                 1600    2.1

NB. Maximum performance is server related (E.g. z10, z114, z196, zEC12).

The latest 8 Gbps FICON Express8S channel leveraging the zHPF function delivers ~133 times more I/O bandwidth than ESCON, while significantly increasing throughput, by at least 100% from generation to generation.
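For completeness, the short Python fragment below recomputes the two headline ratios directly from the benchmark tables above, using the article’s approximate figures.

```python
# Recompute the headline uplift ratios from the two benchmark tables above;
# the raw values are this article's approximate benchmark figures.

iops = {"ESCON": 1_200, "FICON Express8S zHPF": 92_000}   # I/Os per second
mbps = {"ESCON": 12, "FICON Express8S zHPF": 1_600}       # MBs per second

print(f"IOPS uplift vs ESCON: ~{iops['FICON Express8S zHPF'] / iops['ESCON']:.1f}x")  # ~76.7x
print(f"MB/S uplift vs ESCON: ~{mbps['FICON Express8S zHPF'] / mbps['ESCON']:.1f}x")  # ~133.3x
```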

Once again, the backwards compatibility of the IBM Mainframe server is highlighted by the evolution of the FICON channel, and in particular by the Disk Subsystem IHVs, obviously IBM themselves, but notably EMC, HDS and Oracle (StorageTek), evolving their offerings to support the latest FICON technologies.

We sometimes might take for granted how much data can be stored by a single footprint IBM Mainframe and how much performance and throughput capability is available to process this data.  However, we shouldn’t underestimate the role FICON has played in allowing this significant data (I/O) processing capability to grow, often rapidly, sometimes exponentially.

If there is a downside, such performance attributes might have eradicated the skills required to tune I/O subsystems, but that’s perhaps a subject matter for another day…