FICON (Fibre Connection): 15 Years of Mainframe I/O Improvements

In 1998, IBM introduced FICON channels to deliver enhanced I/O connectivity and performance for its 9672 G5 processors, a significant capability improvement when compared with its predecessor, ESCON.  Let’s not forget that ESCON (Enterprise Systems Connection) was the first fibre optic channel technology for the IBM Mainframe (S/390), itself delivering significant capability when compared with the previous technology of heavy, large and costly copper-based bus & tag parallel (S/370) channels.

ESCON channels were first introduced in the early 1990s, but after less than a decade, the explosion in data and associated storage devices was exposing the technical limitations of ESCON, for example:

  • Mainframe Server Channel Support: One IBM Mainframe processor could support only 256 ESCON channels, whereas FICON offered a ~5-8:1 reduction in channel requirements.  Put another way, a customer could expect to consolidate from ~200 ESCON channels to ~30-40 FICON channels.
  • Device Support: One ESCON channel could support up to 1024 devices (sub-channel/device numbers), whereas a 9672 FICON channel increased this 16-fold, supporting up to 16,384 (16 K) devices.
  • Distance: ESCON performance dropped off significantly when the distance between the channel and the associated Control Unit exceeded ~9 KM.  FICON increased this distance separation to ~100 KM, paving the way for the Geographically Dispersed Parallel Sysplex (GDPS) topologies we take for granted today.
  • Performance: ESCON was limited to 17 MB/S, whereas the first generation of FICON channels delivered 100 MB/S full-duplex performance.
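
These ratios are simple to sanity-check.  The following minimal Python sketch uses only the figures quoted in the list above; it is purely illustrative:

```python
# Back-of-the-envelope ESCON vs. first-generation FICON comparison,
# using only the figures quoted in the list above.
escon = {"devices_per_channel": 1024, "mb_per_sec": 17, "practical_km": 9}
ficon = {"devices_per_channel": 16384, "mb_per_sec": 100, "practical_km": 100}

for metric, old in escon.items():
    new = ficon[metric]
    print(f"{metric}: {old} -> {new} (~{new / old:.1f}x)")

# Channel consolidation: ~200 ESCON channels collapsing to ~30-40 FICON
# channels sits within the ~5-8:1 reduction quoted above.
print(f"Consolidation: ~{200 / 40:.0f}:1 to ~{200 / 30:.1f}:1")
```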

Clearly the first iteration of FICON technology delivered significant benefit to the IBM Mainframe user, and it is arguably the primary Mainframe evolution that has sustained data growth and the adoption of Disaster Recovery and Business Continuity resiliency.  So, what does FICON offer today, some 15 years later?

Just as FICON superseded ESCON, FICON Express has now superseded FICON, offering a technology base that can continue to deliver benefit for many years to come.  FICON Express continues the tradition of offering more capability with each new generation of FICON channel.  The features were designed with the future in mind, while remembering the past: they support the data serving leadership of System z and enable improved data access using High Performance functions (I.E. zHPF), while providing backwards compatibility, auto-negotiating link data rates of 2, 4 or 8 Gbps across the various FICON Express2, Express4 and Express8 iterations.
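
Conceptually, auto-negotiation amounts to both ends of a link settling on the highest data rate they have in common.  The toy sketch below (deliberately simplified, and in no way IBM’s actual link initialization logic; the partner rates are assumed for illustration) captures the idea:

```python
# Toy sketch of link data rate auto-negotiation: both ends settle on the
# highest rate they have in common.  Rates are in Gbps.
def negotiate_link_rate(port_rates: set, partner_rates: set) -> int:
    common = port_rates & partner_rates
    if not common:
        raise ValueError("No common link data rate; link cannot initialize")
    return max(common)

ficon_express8 = {2, 4, 8}     # FICON Express8 auto-negotiates 2, 4 or 8 Gbps
older_switch_port = {1, 2, 4}  # hypothetical 4 Gbps-capable switch port

print(negotiate_link_rate(ficon_express8, older_switch_port))  # -> 4
```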

High Performance FICON for System z (zHPF) is a data transfer protocol that is optionally deployed for accessing data from IBM Mainframe compatible storage subsystems (E.g. IBM DS8000, EMC Symmetrix V-Max, HDS USP, et al).  Initially the data types supported were DB2, PDSE, VSAM, zFS and Extended Format SAM; latterly, legacy access methods including QSAM, BPAM and BSAM are also supported.  zHPF leverages the potential of FICON channels to deliver significant performance enhancements, and can help reduce the infrastructure costs for System z I/O by efficiently utilizing I/O resources, minimizing CHPID (Channel), Fiber (Cable), Switch Port (E.g. Cisco, Brocade) and Control Unit (E.g. Disk Subsystem) resource requirements.  zHPF also complements the Extended Address Volumes (EAV) strategy for growth, increasing I/O rate capability as the associated disk volume size increases.
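
To illustrate the resource minimization argument, consider a hypothetical sizing exercise.  The per-channel I/O rates below are taken from the benchmark figures quoted later in this article; the 200,000 IOPS workload is purely an invented example:

```python
# Hypothetical sizing sketch: zHPF's higher per-channel I/O rate reduces the
# number of CHPIDs (and hence cables and switch ports) a workload requires.
import math

workload_iops = 200000  # invented example workload
iops_per_channel = {
    "FICON Express8S Native": 23000,  # 4K block benchmark figures quoted
    "FICON Express8S zHPF": 92000,    # later in this article
}

for channel, capacity in iops_per_channel.items():
    channels_needed = math.ceil(workload_iops / capacity)
    print(f"{channel}: {channels_needed} channels")
# -> Native needs 9 channels, zHPF needs 3 (ignoring headroom and redundancy)
```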

The latest generation FICON Express8S channel has two possible modes of operation designed for connectivity to servers, switches/directors, disks, tapes and printers:

  1. CHPID Type FC: FICON, zHPF, and channel-to-channel (CTC) traffic for the z/OS, z/VM, z/VSE, z/TPF, and Linux on System z environments
  2. CHPID Type FCP: Fibre Channel Protocol (FCP) for attachment to SCSI devices for the z/VM, z/VSE, and Linux on System z environments
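
For quick reference, the two modes can be summarized as a simple lookup, transcribed directly from the list above (a sketch, not an exhaustive support matrix):

```python
# CHPID mode quick reference, transcribed from the two modes listed above.
CHPID_MODES = {
    "FC": {
        "traffic": ["FICON", "zHPF", "channel-to-channel (CTC)"],
        "environments": ["z/OS", "z/VM", "z/VSE", "z/TPF", "Linux on System z"],
    },
    "FCP": {
        "traffic": ["Fibre Channel Protocol (SCSI device attachment)"],
        "environments": ["z/VM", "z/VSE", "Linux on System z"],
    },
}

print(CHPID_MODES["FCP"]["environments"])  # -> ['z/VM', 'z/VSE', 'Linux on System z']
```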

With FCP channel full fabric support, multiple switches/directors can be placed between the System z server and the SCSI device, allowing many “hops” through a storage area network (SAN) and providing improved utilization of intersite-connected resources and infrastructure.  This may provide more choices for storage solutions, or the ability to use existing storage devices, and can help facilitate the consolidation of Distributed Systems servers (E.g. UNIX, Wintel) onto System z servers, protecting investments in SCSI-based storage.

I/O performance improvements have been significant with each evolution: the initial iterations of FICON compared with ESCON, FICON Express compared with FICON, and latterly zHPF.  Using like-for-like benchmark performance studies, we can see these improvements clearly:

I/O Driver @ 4K Block Size – ~ I/Os Per Second

Channel Type              #I/Os per Sec  n:1 Increase
ESCON                              1200  N/A
FICON Express2/4 Native           14000  11.7
FICON Express2/4 zHPF             31000   2.3
FICON Express8 Native             20000   1.5
FICON Express8 zHPF               52000   2.6
FICON Express8S Native            23000   1.2
FICON Express8S zHPF              92000   1.8

NB. Maximum performance is server related (E.g. z10, z114, z196, zEC12).

The latest 8 Gbps FICON Express8S channel leveraging the zHPF function delivers ~76 times more I/O throughput than ESCON, while zHPF throughput has increased by at least 50% from generation to generation.

I/O Driver Mixed Read/Write – ~ MBs Per Second

Channel Type              #MBs per Sec  n:1 Increase
ESCON                               12  N/A
FICON Express4 Native              350  29.2
FICON Express4 zHPF                620   1.8
FICON Express8 Native              620   1.8
FICON Express8 zHPF                770   1.3
FICON Express8S Native             620   1.0
FICON Express8S zHPF              1600   2.1

NB. Maximum performance is server related (E.g. z10, z114, z196, zEC12).

The latest 8 Gbps FICON Express8S channel leveraging the zHPF function delivers ~133 times more I/O throughput than ESCON, with the latest generation more than doubling the throughput of its predecessor.
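
Both headline multipliers are simple divisions of the table figures above, as this two-line check shows:

```python
# Sanity-checking the headline multipliers from the two benchmark tables.
print(f"4K IOPS: ~{92000 // 1200}x ESCON")      # FICON Express8S zHPF -> ~76x
print(f"Mixed R/W MB/s: ~{1600 // 12}x ESCON")  # FICON Express8S zHPF -> ~133x
```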

Once again, the backwards compatibility of the IBM Mainframe server is highlighted by the evolution of the FICON channel, and in particular by the Disk Subsystem IHVs, obviously IBM themselves, but notably also EMC, HDS and Oracle (StorageTek), evolving their offerings to support the latest FICON technologies.

We sometimes take for granted how much data can be stored by a single footprint IBM Mainframe, and how much performance and throughput capability is available to process this data.  However, we shouldn’t underestimate the role FICON has played in allowing this significant data (I/O) processing capability to grow, often rapidly, sometimes exponentially.

If there is a downside, such performance attributes might have eradicated the skills required to tune I/O subsystems, but that’s perhaps a subject matter for another day…