IBM Z Mainframe VTL End Of Support (EOS): A Problem Or Opportunity?

For ~20 years, since 1996 when IBM announced their IBM TotalStorage Virtual Tape Server Model B16 (3494-B16), typically known as the VTS, followed by the StorageTek (Oracle) Virtual Storage Manager (VSM) in 1998, there has been continual IBM Mainframe VTL innovation and product line refreshes, offering a granular range of options for users of all sizes.  The consolidation of the IBM Mainframe VTL marketplace in the ~2017-2019 period is notable.  IBM have consolidated their options to the high-end TS7760, retiring their TS7720 and TS7740 models. Oracle have also delivered significant performance and functional enhancements to their VSM offering, where the latest VSM 7 delivers significantly more resource than the VSM 6 and older predecessors (NB. The VSM 6 platform replaced the proprietary VSM 5 platform with Sun servers & Sun JBOD disk storage).  Similarly, EMC have consolidated their DLm offerings to the DLm8500, retiring their DLm1000, DLm1020, DLm2000, DLm2100, DLm6000 and DLm8000 models.

A high-level review of the mainstream marketplace offerings, namely EMC DLm8500, IBM TS7760 and Oracle VSM 7, demonstrates Enterprise Class VTL solutions, delivering significant availability, capacity and performance capabilities, mandatory for the upper echelons of IBM Z Mainframe users.  Conversely, it follows that such attributes and associated cost become somewhat of a concern for the small to medium sized IBM Mainframe user.  When any product becomes End Of Support (EOS), End Of Life (EOL) or even End Of Marketing (EOM), its viability and associated TCO become a consideration.  Typically, there are several options to address such an issue:

  • Do nothing (because we’re decommissioning the IBM Mainframe sometime soon)
  • Secure a long-term support contract (E.g. 3-5 years) ASAP, to reduce increasing support costs
  • Perform a technology refresh to the latest supported supplier offering
  • Review the marketplace and migrate to a more suitable supported solution

Only the incumbent IBM Mainframe VTL user can decide the best course of action for their organization, but from a dispassionate viewpoint, reviewing these respective options generates the following observations:

  • Do nothing: The cost of doing nothing is always high. The perpetual “we’re moving away from the IBM Mainframe in the next 3-5 years” might have been on many “to-do” lists for decades!  The IBM Mainframe platform is strategic!
  • Long-term support contract: This delays the inevitable and potentially generates data availability challenges, as the equipment ages and becomes more unreliable, with limited or expensive OEM support.
  • Technology refresh: In theory, the best option, upgrading the incumbent technology to the latest offering. In this instance, the cost might be significant for the small to medium sized user, as EMC, IBM and Oracle no longer offer “entry to medium-sized” solutions.
  • Migrate: By definition, migration is perceived as introducing risk, moving from a tried and tested solution to a new one. However, the best products generally come from suppliers with a focus on their flagship solution, as opposed to a large company with many offerings…

The IBM Mainframe VTL marketplace does include other suppliers, including FUJITSU, LUMINEX and Visara, to name but a few, and one must draw one’s own conclusions as to their respective merits.  What is always good is a new marketplace entrant, with a credible offering, a different approach or demonstrable expertise.

Optica Technologies is a privately held technology company headquartered in Louisville, Colorado, USA. Optica has been providing high-quality data centre infrastructure solutions since 1967. Optica has been an IBM strategic partner since 2002 and has received the most extensive IBM qualification available for third party solutions. Optica products have been successfully deployed in many major enterprise data centres worldwide.

The Optica Prizm FICON to ESCON Protocol Converter is designed to enable IBM Mainframe customers to invest in the latest System Z platforms (I.E. zEC12/zBC12 upwards), while preserving the ability to connect to the critical ESCON and Bus/Tag device types that remain.

The next generation zVT Virtual Tape Node (VTN) exploits the latest Intel server technology, delivering outstanding performance, resiliency and scalability to serve a broad range of IBM Z customers. Each zVT VTN is modular and packaged efficiently with (2) FICON channels in an industry standard 2U rack format. The zVT VTN supports up to 512 3490/3590 Virtual Tape Drive (VTD) resources, delivering ~500 MB/S performance for the typical IBM Mainframe tape workload. Sharing some of the architectural design characteristics of the IBM Z Mainframe server (I.E. z13, z14), the zVT VTN server is enabled for operation in warmer environments than traditional data centres and engineered for extreme conditions such as high humidity, earthquakes and dust. To support the diversity of IBM Z Mainframe customer environments, from the smallest to largest, the flexible zVT solution is available in three different formats:

  • zVT 3000i: For IBM Mainframe users with more limited requirements, the fully integrated zVT 3000i model leverages the same Enterprise Class zVT VTN, incorporating 16 Virtual Tape Drive (VTD) resources and 8 TB of RAID-6 disk capacity, delivering 20 TB of effective capacity via the onboard hardware compression card (2.5:1 compression; a worked capacity example follows this list). The fundamental cost attributes of the zVT 3000i make a very compelling argument for those customers on a strict budget, requiring an Enterprise Class IBM Mainframe storage solution.
  • zVT 5000-iNAS: The flagship zVT 5000-iNAS solution is available in a fully redundant, high availability (HA) base configuration that combines (2) VTNs and (2) Intelligent Storage Nodes (ISNs). The entry-level zVT 5000-iNAS HA offering incorporates 512 (256 per VTN) Virtual Tape Drive (VTD) resources, delivering ~1 GB/Sec performance, 144 TB RAW and ~288 TB of effective capacity using a conservative 4:1 data reduction metric. zVT 5000-iNAS can scale to a performance rating of ~4 GB/Sec and capacity in excess of 11 PB RAW.
  • zVT 5000-FLEX: For IBM Mainframe users wishing to leverage their investments in IP (NFS) or FC (SAN) disk arrays, the zVT 5000-FLEX offering can be configured with (2) 10 GbE (1 GbE option) or (2) 8 Gbps Fibre Channel ports. Virtual Tape Drive (VTD) flexibility is provided with VTD options of 16, 64 or 256, while onboard hardware compression safeguards optimized data reduction.  Enterprise wide DR is simplified, as incumbent Time Zero (E.g. Flashcopy, Snapshot, et al) functions can be utilized for IBM Mainframe tape data.
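
For clarity, the effective capacity figures quoted above are simply usable disk capacity multiplied by the expected data reduction ratio.  The minimal Python sketch below reproduces them; the helper function is purely illustrative, and the ~72 TB usable figure assumed for the 5000-iNAS entry configuration is an assumption, not an Optica specification.

```python
def effective_capacity_tb(usable_tb: float, reduction_ratio: float) -> float:
    """Return the effective (post compression/dedupe) capacity in TB.

    usable_tb       -- usable disk capacity after RAID overhead, in TB
    reduction_ratio -- expected data reduction (e.g. 2.5 means 2.5:1)
    """
    return usable_tb * reduction_ratio

# zVT 3000i: 8 TB RAID-6 usable with 2.5:1 hardware compression -> ~20 TB effective
print(effective_capacity_tb(8, 2.5))    # 20.0

# zVT 5000-iNAS entry HA: the conservative 4:1 data reduction metric against the
# usable portion of the 144 TB RAW configuration yields the ~288 TB quoted above
print(effective_capacity_tb(72, 4.0))   # 288.0 (assuming ~72 TB usable from 144 TB RAW)
```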

In summary, Optica zVT reduces the IBM Mainframe VTL technology migration risk, when considering the following observations:

  • Technical Support: With 50+ years of IBM Mainframe I/O connectivity experience, Optica have refined their diagnostics collection and processing activities, safeguarding rapid problem escalation and rectification, with Level 1-3 experts based in the same location.
  • Total Cost of Acquisition (TCA): zVT is a granular, modular and scalable solution, with a predictable, optimized and granular cost metric, for the smallest to the largest of IBM Mainframe users, regardless of IBM Z Operating System.
  • Total Cost of Ownership (TCO): Leveraging from the latest software and hardware technologies and their own streamlined support processes, Optica deliver world class cradle-to-grave support for an optimized on-going cost.
  • Flexibility: Choose from an all-in-one solution for the smallest of users (I.E. zVT 3000i), a turnkey high-availability solution for simplified optimized usage (I.E. zVT 5000-iNAS) or the ability to leverage from in-house disk storage resources (I.E. zVT 5000-FLEX).
  • Simplified Migration: A structured approach to data migration, simplifying the transition from the incumbent VTL solution to zVT. zVT also utilizes the standard AWSTAPE file format, meaning data migration from zVT is simple, unlike the proprietary AWS file formats used by other VTL offerings.

In conclusion, sometimes End Of Support (EOS) presents an opportunity to review the incumbent solution and consider a viable alternative. In the case of an IBM Mainframe VTL, for the small to medium sized user especially, having a viable target option might just allow an organization to maintain, if not improve, their current IBM Mainframe VTL expenditure profile…

The Ever Changing IBM Z Mainframe Disaster Recovery Requirement

With a 50+ year longevity, of course the IBM Z Mainframe Disaster Recovery (DR) requirement and associated processes have changed and evolved accordingly.  Initially, the primary focus would have been HDA (Head Disk Assembly) related, recovering data due to hardware (E.g. 23nn, 33nn DASD) failures.  It seems incredible in the 21st Century to consider the downtime and data loss associated with such an event, but these failures were commonplace into the early 1980’s.  Disk drive (DASD) reliability increased with the 3380 device in the 1980’s and the introduction of the 3990-03 Dual Copy capability in the late 1980’s eradicated the potential consequences of a physical HDA failure.

The significant cost of storage and CPU resources dictated that many organizations had to rely upon 3rd party service providers for DR resource provision.  Often this dictated a classification of business applications, differentiating Mission Critical applications from the rest, where DR backup and recovery processes would be application based.  Even the largest of organizations that could afford to duplicate CPU resource would have to rely upon the Ford Transit Access Method (FTAM), shipping physical tape from one location to another and performing proactive or, more likely, reactive data restore activities.  A modicum of database log-shipping over SNA networks automated this process for Mission Critical data, but successful DR provision was still a major consideration.

Even with the Dual Copy function, DASD storage resources had to be doubled for contingency purposes.  Therefore only the upper echelons of the business world (I.E. Financial Organizations, Telecommunications Suppliers, Airlines, Etc.) could afford the duplication of investment required for self-sufficient DR capability.  Put simply, a duplication of IBM Mainframe CPU, Network and Storage resources was required…

The 1990’s heralded a significant evolution in generic IT technology, including IBM Mainframe.  The adoption of RAID technology for IBM Mainframe Count Key Data (CKD) provided an affordable solution for all IBM Mainframe users, where RAID-5(+) implementations became commonplace.  The emergence of ESCON/FICON channel connectivity provided the extended distance capability required to complement the emerging Parallel SYSPLEX technology, allowing IBM Mainframe servers and related storage to be geographically dispersed.  This allowed a greater number of IBM Mainframe customers to provision their own in-house DR capability, but many still relied upon physical tape shipment to a 3rd party DR services provider.

The final significant storage technology evolution was the Virtual Tape Library (VTL) structure, introduced in the mid-1990’s.  This technology simplified capacity optimization for physical tape media, while reducing the number of physical drives required to satisfy the tape workload.  These VTL structures would also benefit from SYSPLEX implementations, but for many IBM Mainframe users, physical tape shipment might still be required.  Even though the IBM Mainframe had supported IP connectivity since the early 1990’s, using this network capability to ship significant amounts of data was dependent upon public network infrastructures becoming faster and more affordable.  In the mid-2000’s, transporting IBM Mainframe backup data via extended network carriers, beyond the limit of FICON technologies, became more commonplace, once again changing the face of DR approaches.

More recently, the need for Grid configurations of 2, 3 or more locations has become the utopia for the Global 1000 type business organization.  Numerous copies of synchronized Mission Critical, if not all, IBM Z Mainframe data are now maintained, reducing the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) DR criteria to several Minutes or less.

As with anything in life, learning from the lessons of history is always a good thing and for each and every high profile IBM Z Mainframe user (E.g. 5000+ MSU), there are many more smaller users, who face the same DR challenges.  Just as various technology races (E.g. Space, Motor Sport, Energy, et al) eventually deliver affordable benefit to a wider population, the same applies for the IBM Z Mainframe community.  The commonality is the challenges faced, where over the years, DR focus has either been application or entire business based, influenced by the technologies available to the IBM Mainframe user, typically dictated by cost.  However, the recent digital data explosion generates a common challenge for all IT users alike, whether large or small.  Quite simply, to remain competitive and generate new business opportunities from that priceless and unique resource, namely business data, organizations must embrace the DevOps philosophy.

Let’s consider the frequency of performing DR tests.  If you’re a smaller IBM Z Mainframe user, relying upon a 3rd party DR service provider, your DR test frequency might be 1-2 tests per year.  Conversely, if you’re a large IBM Z Mainframe user, deploying a Grid configuration, you might consider that your business no longer has the requirement for periodic DR tests?  This would be a dangerous thought pattern, because it was forever thus; SYSPLEX and Grid configurations only safeguard against physical hardware failure scenarios, whereas a logical error will proliferate throughout all data copies, whether 2, 3 or more…

Similarly, when considering the frequency of Business Application changes, for the archetypal IBM Z Mainframe user, this might have been Monthly or Quarterly, perhaps with imposed change freezes due to significant seasonal or business peaks.  However, in an IT ecosystem where the IBM Z Mainframe is just another interconnected node on the network, the requirement for a significantly increased frequency of Business Application changes arguably becomes mandatory.  Therefore, once again, if we consider our frequency of DR tests, how many per year do we perform?  In all likelihood, this becomes the wrong question!  A better statement might be, “we perform an automated DR test as part of our Business Application changes”.  In theory, the adoption of DevOps either increases the frequency of scheduled Business Application changes, or the organization embraces an “on demand” type approach…

We must then consider which IT Group performs the DR test?  In theory, it’s many groups, dictated by their technical expertise, whether Server, Storage, Network, Database, Transaction or Operations based.  Once again, if embracing DevOps, the Application Development teams need to be able to write and test code, while the Operations teams need to implement and manage the associated business services.  In such a model, there has to be a fundamental mind change, where technical Subject Matter Experts (SME) design and implement technical processes, which simplify the activities associated with DevOps.  From a DR viewpoint, this dictates that the DevOps process should facilitate a robust DR test, for each and every Business Application change.  Whether an organization is the largest or smallest of IBM Z Mainframe users is somewhat arbitrary; performing an entire system-wide DR test for an isolated Business Application change is not required.  Conversely, performing a meaningful Business Application test during the DevOps code test and acceptance process makes perfect sense.

Performing a meaningful Business Application DR test as part of the DevOps process is a consistent requirement, whether an organization is the largest or smallest IBM Z Mainframe user.  Although their hardware resource might differ significantly, where the largest IBM Z Mainframe user would typically deploy a high-end VTL (I.E. IBM TS77n0, EMC DLm 8n00, Oracle VSM, et al), the requirement to perform a seamless, agile and timely Business Application DR test remains the same.

If we recognize that the IBM Z Mainframe is typically deployed as the System Of Record (SOR) data server, today’s 21st century Business Application incorporates interoperability with Distributed Systems (E.g. Wintel, UNIX, Linux, et al) platforms.  In theory, this is a consideration, as mostly, IBM Z Mainframe data resides in proprietary 3390 DASD subsystems, while Distributed Systems data typically resides in IP (NFS, NAS) and/or FC (SAN) filesystems.  However, the IBM Z Mainframe has leveraged from Distributed Systems technology advancements, where typical VTL Grid configurations utilize proprietary IP connected disk arrays for VTL data.  Ultimately a VTL structure will contain the “just in case” copy of Business Application backup data, the very data copy required for a meaningful DR test.  Wouldn’t it be advantageous if the IBM Z Mainframe backup resided on the same IP or FC Disk Array as Distributed Systems backups?

Ultimately the high-end VTL (I.E. IBM TS77n0, EMC DLm 8n00, Oracle VSM, et al) solutions are designed for the upper echelons of the business and IBM Z Mainframe world.  Their capacity, performance and resilience capabilities are significant, and by definition, so is the associated cost.  How easy or difficult might it be to perform a seamless, agile and timely Business Application DR test via such a high-end VTL?  Are there alternative options that any IBM Z Mainframe user can consider, regardless of their size, whether large or small?

The advances in FICON connectivity, x86/POWER servers and Distributed Systems disk arrays have allowed for such technologies to be packaged in a cost efficient and small footprint IBM Z VTL appliance.  Their ability to connect to the IBM Z server via FICON connectivity, provide full IBM Z tape emulation and connect to ubiquitous IP and FC Distributed Systems disk arrays, positions them for strategic use by any IBM Z Mainframe user for DevOps DR testing.  Primarily one consistent copy of enterprise wide Business Application data would reside on the same disk array, simplifying the process of recovering Point-In-Time backup data for DR testing.

On the one hand, for the smaller IBM Z user, such an IBM Z VTL appliance (E.g. Optica zVT) could, for the first time, allow them to simplify their DR processes with a 3rd party DR supplier.  They could electronically vault their IBM Z Mainframe backup data to their 3rd party DR supplier and activate a totally automated DR invocation, as and when required.  On the other hand, for DevOps processes, the provision of an isolated LPAR would allow the smaller IBM Z Mainframe user to perform a meaningful Business Application DR test, in-house, without impacting Production services.  Once again, simplifying the Business Application DR test process also applies to the largest of IBM Z Mainframe users; leveraging from such an IBM Z VTL appliance would simplify matters, without impacting the Grid configuration supporting their Mission Critical workloads.

In conclusion, there has always been commonality in DR processes for the smallest and largest of IBM Z Mainframe users, where the only tangible difference would have been budget related, where the largest IBM Z Mainframe user could and in fact needed to invest in the latest and greatest.  As always, sometimes there are requirements that apply to all, regardless of size and budget.  Seemingly DevOps is such a requirement, and the need to perform on-demand seamless, agile and timely Business Application DR tests is mandatory for all.  From an enterprise wide viewpoint, perhaps a modicum of investment in an affordable IBM Z VTL appliance might be the last time an IBM Z Mainframe user needs to revisit their DR testing processes!

System z: I/O Interoperability Evolution – From Bus & Tag to FICON

Since the introduction of the S/360 Mainframe in 1964 there has been a gradual evolution of I/O connectivity that has taken us from copper Bus & Tag to fibre ESCON and now FICON channels.  Obviously during this ~50 year period there have been many more releases of Mainframe server and indeed Operating System.  In this timeframe there have been 2 significant I/O technology milestones.  Firstly, in 1990, ESCON was part of the significant S/390 announcement (MVS/ESA), where migration to ESCON was a great benefit, if only for replacing the big, heavy copper Bus & Tag channels.  Secondly, even though FICON was released in the late 1990’s, in 2009 IBM announced that the z10 would be the last Mainframe server to support greater than 240 native ESCON channels.  Similarly IBM declared that the last zEnterprise servers to support ESCON channels would be the z196 and z114.  Each of these major I/O evolutions required a migration philosophy and not every I/O device would be upgraded to support either native ESCON or FICON channels.  How did customers achieve these mandatory I/O upgrades to safeguard IBM Mainframe Server and associated Operating System longevity?

In 2009 it was estimated ~20% of all Mainframe customers were using ESCON only I/O infrastructures, while only ~20% of all Mainframe customers were deploying a FICON only infrastructure.  Similarly ~33% of z9 and z10 systems were shipped with ESCON CVC (Block Multiplexor) and CBY (Byte Multiplexor) channels defined, while ~75% of all Mainframe Servers had native ESCON (CNC) capability.  From a dispassionate viewpoint, clearly the migration from ESCON to FICON was going to be a significant challenge, while even in this timeframe, there was still use of Bus & Tag channels…

One of the major strengths of the IBM Mainframe ecosystem is the partner network, primarily software (ISV) based, but with some significant hardware (IHV) providers.  From a channel switch viewpoint, we will all be familiar with Brocade, Cisco and McData, where Brocade acquired McData in 2006.  However, from a channel protocol conversion viewpoint, IBM worked with Optica Technologies, to deliver a solution that would allow the support for ESCON and Bus & Tag channels to the FICON only zBC12/zEC12 and future Mainframe servers (I.E. z13, z13s).  Somewhat analogous to the smartphone where the user doesn’t necessarily know that an ARM processor might be delivering CPU power to their phone, sometimes even seasoned Mainframe professionals might inadvertently overlook that the Optica Technologies Prizm solution has been or indeed is still deployed in their System z Data Centre…

When IBM work with a partner from an I/O connectivity viewpoint, clearly IBM have to safeguard that said connectivity has the highest interoperability capability with bulletproof data exchange attributes.  Sometimes we might take this for granted with the ubiquitous disk and tape subsystem suppliers (I.E. EMC, HDS, IBM, Oracle), but for FICON conversion support, Optica Technologies was a collaborative partner for IBM.  Ultimately the IBM Hardware Systems Assurance labs deploy their proprietary System Assurance Kernel (SAK) processes to safeguard I/O subsystem interoperability for their System z Mainframe servers.  Asking that rhetorical question; when was the last time you asked your IHV for sight of their System Assurance Kernel (SAK) exit report from their collaboration with IBM Hardware Systems Assurance labs for the I/O subsystem you’re considering or deploying?  In conclusion, the SAK compliant, elegant, simple and competitively priced Prizm solution allowed the migration of tens if not hundreds of thousands of ESCON connections in thousands of Mainframe data centres globally!

With such a rich heritage of providing a valuable solution to the global IBM Mainframe install base, whether the smallest or largest, what would be next for Optica Technologies?  Obviously leveraging from their expertise in FICON channel support would be a good way forward.  With the recent acquisition of Bus-Tech by EMC and the eradication of the flexible MDL tapeless virtual tape offering, Optica Technologies are ideally placed to be that small, passionate and eminently qualified IHV to deliver a turnkey virtual tape solution for the smaller and indeed larger System z user.  The Optica Technologies zVT family leverages from the robust and heritage class Prizm technology, delivering an innovative family of virtual tape solutions.  The entry “Virtual Tape In A Box” zVT 3000i provides 2 FICON channel interfaces and 4 TB uncompressed internal RAID-5 disk space, seamlessly interfacing with all System z supported tape devices (I.E. 3490, 3590) and processes.  A single enterprise class zVT 5000-iNAS node delivers 2 FICON channel interfaces, NFS storage capacity from 8 TB to 1 PB in a single frame with standard deduplication, compression, replication and encryption features.  The zVT 5000-iNAS is available with multi-node configuration support for additional scalability and resiliency.  For those customers wishing to deploy their own choice of NFS or FC storage subsystem, the zVT 5000-FLEX allows such connectivity accordingly.

In conclusion, sometimes it’s all too easy to take some solutions for granted, when they actually delivered a tangible and arguably priceless solution in the evolution of your organization’s System z Mainframe server journey from ESCON, if not Bus & Tag, to FICON.  Perhaps the Prizm solution is one of these unsung products?  Therefore, the next time you’re reviewing the virtual tape market place, why wouldn’t you seriously consider Optica Technologies, given their rich heritage in FICON channel interoperability?  Given that IBM chose Optica Technologies as their strategic partner for ESCON to FICON migration, seemingly even IBM might have thought “nobody gets fired for choosing…”!

zHyperLink: Just Another System z DASD I/O Function Enhancement?

Over the last several decades the IBM Mainframe platform has delivered several new technologies that have dramatically improved disk (DASD) I/O performance.  Specifically, the deployment of ESCON introduced fibre optic channels, followed by EMIF for channel sharing and reduced I/O protocol overhead, superseded by FICON and most recently zHPF.  All of these technologies have allowed ever larger amounts of data to be processed by the System z server and the adoption of Geographically Dispersed Parallel Sysplex (GDPS) implementations for business continuity reasons.  Ultimately, mission critical data and decisions are facilitated by applications, and sub-second response times for these transactions are expected.  Some might say that we’re always running to stand still from a performance perspective when implementing the latest System z technologies?

In reality, today’s 21st Century mission-critical application is not just capturing and storing customer data, it’s doing so much more, attempting to make informed business decisions for a richer customer experience!  Historically a customer transaction would be on a one-to-one basis (E.g. ask for a balance query), whereas today, said transaction might generate more data for the customer, potentially offering them a new or enhanced product.  In theory, this informed and intelligent transaction processing delivers a richer experience for the customer and potentially new revenue opportunities for the business.

For several years IBM have integrated the Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative into their product offerings, recognising that a business transaction can originate from the cloud or a mobile device, potentially via a Social Media platform, require rich processing via real-time analytics, while requiring the highest levels of security.  Of course, one must draw one’s own conclusions, but maintaining sub-second or ultra-fast transaction response times, with this level of CAMSS complexity requires significant performance enhancements.  To deliver such ultra-fast response times requires the DASD I/O subsystem to maintain the highest levels of performance, aligned with the latest System z server platform…

In January 2017 IBM issued a Statement of Direction (SoD) and associated FAQ for their zHyperLink technology.  zHyperLink is a new short distance mainframe link technology designed for up to 10 times lower latency than zHPF.  zHyperLink is intended to accelerate DB2 for z/OS transaction processing and improve active log throughput.  IBM intends to deliver field upgradable support for zHyperLink on the existing IBM DS8880 storage subsystem.  zHyperLink technology is a new mainframe attach link, the result of collaboration between DB2 for z/OS, the z/OS Operating System, IBM System z servers and the DS8880 storage subsystem to deliver extreme low latency I/O access for DB2 for z/OS applications.  zHyperLink technology is intended to complement FICON technology, accelerating those I/O requests that are typically used for transaction processing.  These links are point-to-point connections between the System z CEC and the storage system and are limited to 150 meter distances.  These links do not impact the z/Architecture 8 channel path limit.

From a DB2 I/O service performance viewpoint, at short distances, a native FICON or zHPF originated I/O typically requires 300 Microseconds (μs) for a simple I/O operation.  The coupling facility for z Systems can typically read or write 4K of data in under 8 Microseconds.  zHyperLink technology will provide a new short distance link from the mainframe to storage to read and write data up to 10 times faster than FICON or zHPF, reducing DB2 I/O service times to an anticipated 20-30 Microseconds.
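
As a quick sanity check of those latency figures, the illustrative snippet below simply compares the quoted FICON/zHPF service time with the anticipated zHyperLink range; the values are only those stated above.

```python
# A quick sanity check of the latency claims quoted above (all values in microseconds).
ficon_zhpf_io_us = 300          # typical simple I/O via native FICON or zHPF at short distance
zhyperlink_io_us = (20, 30)     # anticipated DB2 I/O service time range via zHyperLink

for target in zhyperlink_io_us:
    print(f"{target} us -> ~{ficon_zhpf_io_us / target:.0f}x faster than FICON/zHPF")
# 20 us -> ~15x faster; 30 us -> ~10x faster, consistent with the "up to 10 times" claim
```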

In conclusion, with a promise of 10 times faster processing, as per its fibre optic channel technology predecessors, particularly EMIF and zHPF, zHyperLink is a revolutionary DASD I/O function and not just another DASD I/O subsystem function enhancement.  At this stage, the deployment of zHyperLink functionality is restricted to DB2 and the IBM DS8880 storage subsystem, while we eagerly await compatibility support from EMC and HDS accordingly.  Moreover, as per the evolution of zHPF, we hope for the inclusion of other I/O workloads to benefit from this paradigm changing I/O response time technology.

Finally, as always, the realm of possibility exists for each and every System z DASD I/O subsystem to be monitored and tuned on a proactive and 24*7*365 basis.  Although all of this DASD I/O performance data has always been and still is captured by RMF (CMF) data, intelligent processing of this data requires an ever evolving Performance Management process and arguably an intelligent software solution (E.g. IntelliMagic Direction Disk Magic or Technical Storage Easy Analyze Disk Mainframe) to provide meaningful information and business decisions from ever increasing amounts of RMF (CMF) data.  In November 2016 I delivered the “DASD I/O Performance Management Is Easy?” session at the UK GSE Annual 2016 meeting…

All Flash & Substance – Is The System z Microsecond The New Millisecond?

Is 2016 the year of the All Flash disk array?  Seemingly from a System z perspective, 2016 has seen improvement in the All Flash disk array offerings from the major disk suppliers, namely EMC, HDS and IBM.  From a usability perspective, managing latency might be the overall challenge, where these ultra-fast SSD systems are delivering I/O performance response times measured in the ~250-500 Microsecond (μs) range, potentially consigning the traditional Millisecond (ms) measurement to history!

Whatever the speeds and feeds might be, as of 2016, the benchmark for a System z All Flash Disk Array is seemingly measured @ <500 Microsecond (μs) response time, supporting ~n PB of capacity and delivering ~nnn GB/S throughput for mixed read/write workloads.  Of course, strong encryption, typically full disk Data @ Rest Encryption (D@RE) based, and full seamless data replication interoperability are also mandatory.

Historically we evolved from Data Processing to Information Technology, not only automating the capture and processing of data, but gradually evolving our processes, using this data for business advantage.  In recent years, the information explosion has dictated that each and every business must be a cognitive business, using intelligent analytics to gain insight and faster decision-making from the business data collected.

Currently the Internet of Things (IoT) supplements the medium-term Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative, being the processes and associated solutions required by cognitive businesses to make timely and informed decisions, capturing deeper customer insight, ultimately delivering competitive advantage.  Therefore the 21st century business generates a significant requirement for storage capacity and performance to fully realize the benefits of this truly business aligned cognitive approach.

The largest global organizations from all verticals leverage from the power and true 24*7*365 availability and reliability of the System z Mainframe to power enormous relational databases, processing millions of customer transactions on an hourly basis.  These always-on, mission critical business environments demand the performance, reliability, TCO and System z platform integration delivered by the associated DASD (3390) subsystem.

Each and every System z user will have their IHV of choice for delivering disk storage, in alphabetical order, EMC (E.g. VMAX AFA/All Flash Array), HDS (HAF/Hitachi Accelerated Flash) or IBM (E.g. DS8888).  The choice of disk storage was forever thus, reviewing the marketplace and choosing the best option for your business.  What might require reflection is how the DASD I/O subsystem is managed and the associated interaction with said IHV supplier.  Systems Management solutions such as Easy Analyze Disk Mainframe (EADM) and IntelliMagic Vision (Disk Magic) will certainly simplify the analysis and presentation of DASD subsystem performance data.

However the emphasis of the actual System z DASD subsystem for an All Flash array might move from being an internal consideration, to a direct and timely communication with the IHV supplier.  Put very simply, in an environment where Mission Critical systems rely upon ultra fast processing of massive amounts of data, any flash memory issues, whether capacity or defect related, will need IHV interaction ASAP, arguably “Before The Event”.  As with the System z Server itself, where we’re used to On/Off Capacity on Demand (OOCoD) processes, maybe we need to consider a similar approach with our All Flash System z DASD arrays.  For the avoidance of doubt, as opposed to waiting for an issue to impact our business, maybe we need to work smarter with our IHV, to safeguard that sufficient flash memory is available, to proactively resolve capacity or defect issues…

Aligning this with our traditional 3390 DASD I/O subsystem analysis, which might have been on a daily basis, from the rich RMF/CMF data resource, we must fully automate this process to minimize or eliminate the Mean Time To Resolution (MTTR)!  The ultimate benefit will be the delivery of meaningful messages that incorporate our 3rd party IHV supplier, who potentially with Remote Support Facility (RSF) type processes, deploys the “Golden Screwdriver” to seamlessly safeguard the performance profiles of our Mission Critical business applications, leveraging from the latest All Flash disk array.

In conclusion, as always, technology can deliver business benefits, with substance, and this includes All Flash disk arrays.  As always, what might need to evolve are the associated Systems Management processes.  Therefore asking yet another potential rhetorical question, what is more important, the System z Server or timely data access?  The diplomatic answer is that they’re equally important and if so, let’s safeguard the availability of All Flash memory for our DASD subsystems, with the requisite levels of meaningful proactive reporting and IHV supplier interaction.

System z: Optimizing DASD I/O Subsystem Performance

Historically there was a very simple synergy between the IBM S/370 Mainframe and its supporting disk I/O (DASD) subsystem, allowing for Mainframe host to physical and logical disk device (I.E. 3390) connectivity. The analysis and tuning of this I/O subsystem has always been and continues to be supported by the SMF Type 7n records via IBM RMF and the BMC CMF alternative. However, over the years, major advances in DASD subsystems and the System z Mainframe server have delivered many layers of technology resources (E.g. Cache, Memory, FICON Channels, RAID Storage, Proprietary Microcode, et al) and this has introduced complexities into highlighting DASD I/O subsystem performance problems.

The focus on technology based metrics (E.g. I/O Rate Response Time, I/O MB/S Bandwidth, et al) has also been complemented with more meaningful business focussed Service Level Agreements (SLA). Therefore today’s System z I/O Performance Analyst must gather and act upon proactive meaningful information from the ever-increasing amounts of performance data available. Put another way, too much data can deliver not enough information! As previously stated, it was forever thus; RMF and CMF have always collected the requisite performance data and arguably no other data source is required (E.g. OMEGAMON/TMON/SYSVIEW Performance Monitor, SAS/MXG/MICS/WPS Performance Database). RMF/CMF is the ideal data source for thorough and timely System z I/O performance management, where intelligent analytics and expert knowledge are required to present this “Golden Record”.

However, today’s System z Support Teams need simple and timely presentation of the data, highlighting potential challenges, graphically presented for their Management, allowing for simple tracking of SLA agreements and technology changes (I.E. Software/Hardware Upgrades).

Additionally, Workload Manager (WLM) can control non-paging queued DASD I/O requests, based upon device busy conditional processing. Therefore the z/OS system can manage I/O priorities in a Sysplex, based on WLM service class goals. WLM dynamically adjusts the I/O priority based on service class goal performance and whether a DASD device can influence the overall performance objectives. For obvious reasons, this WLM function does not micro-manage I/O priorities, only changing a service class period’s I/O priority infrequently. WLM is deployed by many System z users to assist in the automated management of system resources (E.g. CPU, Memory, I/O, et al), based upon Service Level goals.

From a DASD subsystem technology viewpoint, there is no longer an obvious one-to-one direct connection between the Mainframe host and DASD device. An increasing number of technological advances, both microcode and hardware (E.g. Memory, Fibre Channel, Function Assist Processing, et al) have diminished the requirement for data access directly from the physical device. Put another way, in today’s world of System z servers with multiple cache level CPU chips (I.E. Relative Nest Intensity), massive and multiple processor memory resources (I.E. z13 @ 10 TB Memory), high bandwidth Fibre Channel (I.E. FICON, zHPF) subsystems and a hierarchy of DASD memory (I.E. SSD/Flash, Cache), it’s not uncommon to consider an I/O that requires physical device access as a problem! Finally and most importantly, from a DASD subsystem viewpoint, each of the recognized System z DASD providers, EMC (Symmetrix VMAX), HDS (VSP G1000) and IBM (DS8870), has a highly proprietary DASD subsystem that provides z/OS plug compatibility, but delivers overall I/O performance using its own unique architecture and internal algorithms.
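
To make that point more concrete, RMF/CMF report DASD device response time as the sum of the IOSQ, PEND, DISC and CONN components; the minimal sketch below, with purely illustrative sample values and threshold (not product defaults), flags a device where DISC, broadly indicative of back-end/physical access, dominates the total.

```python
# Minimal sketch: RMF/CMF report DASD response time as the sum of IOSQ, PEND,
# DISC and CONN components (milliseconds). Sample values and the 50% threshold
# below are purely illustrative.
def review_device(volser: str, iosq: float, pend: float, disc: float, conn: float) -> None:
    total = iosq + pend + disc + conn
    if total and disc / total > 0.5:   # DISC dominating suggests back-end (physical) access
        print(f"{volser}: {total:.1f} ms total, DISC {disc:.1f} ms -> investigate cache/back-end")
    else:
        print(f"{volser}: {total:.1f} ms total - within expectations")

review_device("PRD001", iosq=0.1, pend=0.2, disc=1.6, conn=0.4)
review_device("PRD002", iosq=0.0, pend=0.1, disc=0.1, conn=0.3)
```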

Of course, an over configured hardware environment will deliver a poor TCO, while an under configured environment will manifest in SLA issues and bad user experiences, where the middle-ground always delivers the optimal environment. Resource optimization always demands proactive day-to-day management, from an internal and indeed external communication viewpoint. With the highly proprietary design features of the IHV DASD subsystems, whether EMC, HDS or IBM, having the right information and identifying the precise problem, simplifies the communication process with the IHV. Such communication might highlight a resource under provision (E.g. Memory Capacity), a subsystem setting tweak requirement, either host or subsystem based, or indeed a hardware failure. In today’s world, these issues need to be fixed in minutes or hours, not days or weeks.

Therefore, where does today’s System z I/O Performance Analyst start to collect the required information to safeguard that their DASD subsystem is optimized, both from a capacity and performance viewpoint?

A simplistic viewpoint of an I/O health-check should consider the following:

  • Service Level Agreements (SLA): Are overall objectives being delivered or missed?
  • User Experience: Are users (customers) complaining of poor service or response times?
  • I/O Metric Performance: Are there obvious signs of abnormal performance statistics?

Several decades ago, an overall I/O health check might have been a periodic (E.g. Weekly or longer) activity, whereas today it’s undoubtedly a Business As Usual (BAU) and 24*7 activity. Therefore a fully automated solution is required, built upon the tried and tested System z performance fundamentals, namely RMF or CMF. The ideal solution will perform analytics based data reduction, presenting the right information, at the right time, allowing for intelligent business based communication, both internally, to customers and end users from an SLA viewpoint, and externally, with IHV DASD suppliers, safeguarding optimal performance and TCO.
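
As a simple illustration of such automated, analytics based variance detection (not a representation of any particular product), the sketch below compares the latest interval against a rolling baseline built from prior RMF/CMF intervals; the sample values and the two-sigma threshold are assumptions.

```python
from statistics import mean, pstdev

# Illustrative "performance variance" alert: compare the latest interval's I/O
# response time against a rolling baseline of prior intervals (milliseconds).
def variance_alert(history_ms, latest_ms, sigmas=2.0):
    baseline, spread = mean(history_ms), pstdev(history_ms)
    if latest_ms > baseline + sigmas * spread:
        return f"ALERT: {latest_ms} ms vs baseline {baseline:.1f} ms (+{sigmas} sigma)"
    return "OK"

print(variance_alert([1.1, 1.0, 1.2, 1.1, 1.0], latest_ms=2.4))   # triggers an alert
```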

EADM (Easy Analyze DASD Mainframe) is a solution from Technical Storage that performs automated performance analysis of the z/OS I/O subsystem, delivering predictive analytics for better storage capacity planning and performance measurement. The Technical Storage EADM architects have in excess of 40 years IBM Mainframe experience, specializing in the I/O subsystem, and so it’s no surprise that EADM delivers expert and timely knowledge via an easy-to-use solution.

EADM is an easy-to-install and easy-to-use plug-and-play solution that has no proprietary considerations, requiring no additional System z resources (E.g. CPU, Memory, DASD, et al). Installed on Microsoft server platforms, EADM is easily virtualized via VMware, Hyper-V, et al, requiring no target database for performance data storage. EADM performs a daily health check of the entire System z disk subsystem. EADM works around the clock, delivering customized and automatic user friendly GUI type reports. For today’s System z technician, the open and IP architecture base of EADM allows for secure remote access via Mobile, Tablet or Laptop devices, as and when required.

Operations and performance teams are alerted as soon as performance variances occur, typically in minutes, assisting in the identification of underlying root problems causing changes in system behaviour. Incorporating intelligent and meaningful I/O performance indicators, with drill-down and zoom-in ability, storage technicians can determine if the problem is temporary, permanent, local or global. By simplifying the data reduction process (E.g. RMF/CMF data from numerous LPAR/Sysplex environments), EADM safeguards that the internal technical team can efficiently manage their increasingly complex and large DASD environment, for intelligent and timely communications with internal business teams and external suppliers alike.

EADM simplifies the System z I/O subsystem capacity and performance management process, delivering expert reports and timely historical analysis, for example:

  • Automatic daily (24 Hour) analysis of Sysplex wide workload (On-Line TP & Batch) I/O response times
  • Systematic intelligent alerts of early performance variances with exact occurrence time indicators
  • Identification of I/O performance hot-spots with DASD volume and data set level granularity
  • Performance trending at DFSMS Storage Group, Subsystem LCU and DASD volume level
  • DR (E.g. PPRC) simulations to prevent data loss and forecast Data Centre failover scenarios
  • I/O subsystem WLM indicators to determine exactly what impacts performance objectives
  • Full FICON channels and zHPF analysis, incorporating typical I/O throughput indicators
  • HyperPAV and associated LCU indicators to easily balance volumes, optimizing PAV alias allocation
  • Performance monitoring and balancing via intelligent LCU, SSID and I/O analytics
  • DASD capacity usage via DCOLLECT data, comparing assigned vs. allocated vs. actual disk utilization
  • EADM supports both entry-level (several LPAR) and complex (multiple CPC/LPAR) System z configurations

A well provisioned and performing System z I/O subsystem is of vital importance for safeguarding today’s ever increasing storage requirements of mission critical business applications. A poorly performing I/O subsystem will generate unnecessary and extra CPU overhead, with potential and tangible TCO impact, in conjunction with potential business impact. Although the advances of the System z server and underlying DASD I/O subsystem can compensate for many application code or data placement issues, the fundamental concepts of analysing and tuning the I/O subsystem remain.

Therefore the savvy and proactive System z customer will safeguard that they find a solution to deliver optimal DASD I/O performance. Without doubt, such an analysis could be performed by a highly-skilled individual, but today’s 21st Century world demands a hybrid of technical and commercial skills. Therefore a solution that incorporates the diagnostic knowledge of the most highly trained technician, performs intelligent analytics on a plethora of Sysplex wide performance data sources and presents the information required, is one that will deliver benefit each and every day. EADM is an example of such a solution, delivering demonstrable System z TCO optimization benefits, while safeguarding a short-term ROI, with simple deployment and resource utilization attributes.

Mainframe Virtual Tape: Tape On Disk; But For How Long?

By definition, a Virtual Tape Library (VTL) solution uses a disk cache to store tape data files, but for how long is this data retained on disk? Is it minutes, hours, days, weeks or indefinitely? Only business requirements can dictate the time period tape data is stored on disk, which will influence the VTL solution chosen. We will return to this pivotal question later in the article…

Some might say (for some reason I’m thinking of an Oasis lyric) that Mainframe Virtual Tape choice is as simple as black and white; or blue (IBM) and red (Oracle AKA StorageTek). Hmmm, clearly this is not the case; there are grey areas, but moreover, there are many colours to choose from. For sure we must recognize the innovation in tape technologies by StorageTek, delivering the 1st Automated Tape Library (ATL, NearLine) and IBM with the first Virtual Tape Library (VTL, VTS), to name but a few. Of course, now I recall, IBM delivered VTS in the mid-1990’s, about the same time as that Oasis song!

There is also that age old debate as to whether tape is dead or not and the best compromise always seems to be, “we’ll have to agree to disagree”, depending upon your viewpoint. Does it matter?

I also recall the early 1990’s, when Mainframe disk was proprietary and based upon 1:1 mapping; a physical disk was the addressable DASD volume. The promise of Iceberg (AKA SVA) from StorageTek and the delivery of Symmetrix by EMC changed this status quo, and so the Mainframe world adopted logical to physical mapping for disk storage, via RAID technologies, with Just a Bunch Of Disks (JBOD). This was significant, as the acquisition cost per MB for Mainframe disk was ~£5 (yes that’s right, I’m a Brit, so GBP), and today, maybe ~£0.01 (1 Penny) per MB, or ~£10 per GB, and getting lower each year. So yes, tape is always less expensive when compared with disk, by significant magnitudes, but the affordability of disk indicates that it can now be seriously considered for backup and archive data.
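
Working through those cost figures, the short sketch below compares the quoted early 1990’s and current acquisition costs for 1 GB of Mainframe disk; the figures are only those stated above.

```python
# Working through the acquisition cost figures quoted above (GBP).
early_1990s_per_mb = 5.00      # ~GBP 5 per MB for Mainframe disk in the early 1990s
today_per_mb = 0.01            # ~1 penny per MB today, i.e. ~GBP 10 per GB

gb = 1024  # MB per GB
print(f"1 GB then : ~GBP {early_1990s_per_mb * gb:,.0f}")          # ~GBP 5,120
print(f"1 GB today: ~GBP {today_per_mb * gb:,.0f}")                # ~GBP 10
print(f"Reduction : ~{early_1990s_per_mb / today_per_mb:,.0f}x")   # ~500x
```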

As with any technology decision, it should be business requirements that drive the solution chosen, and not an allegiance to a storage media type, tape or disk, or a long time Mainframe tape vendor, IBM or Oracle. Ultimately there is only one thing that differentiates one business from another, and that is the data itself, stored in whatever format, databases, application code libraries, batch flat files, et al. Therefore the cost of storage is somewhat arbitrary; it’s the value of the business data that we should consider, while recognizing capital expenditure and TCO running costs.

The 21st century business seemingly requires near 24*7 service availability and if that business deploys a zSeries (~zero downtime) Mainframe server, I guess we can presume that said business requires near 24*7 data availability. We then must consider Business Continuity and associated Disaster Recovery metrics, which are measured by the Recovery Time Objective (RTO) and Recovery Point Objective (RPO). Ultimately these RTO and RPO values will dictate the required Backup & Recovery and Archive solutions required, where Recovery (time) is the most important factor!

When was the last time you performed a completely successful Disaster Recovery test from a secondary (physical tape, virtual tape disk) copy of data and was the Recovery Time Objective (RTO) satisfied? Was this a complete workload test, where you included on-line, batch and backup (VTL) testing?

From a data categorization viewpoint, industry analysts tell us, if we didn’t know this fact ourselves, that the majority of Mission Critical data is stored in database structures. If we associate other data types with said databases, application code to process the data, policies to manage and safeguard the data and processes to secure and preserve the data, then I guess we have many instances of Mission Critical data.

As the cost of disk has reduced, so has the cost of network bandwidth, so it’s not uncommon for Mainframe customers to mirror/replicate their data between Geographically Dispersed (E.g. GDPS, GDDR) data centres. They deploy this significant investment solution because they have a requirement for near 24*7 service and thus data availability. Therefore their RTO is likely measured in Minutes (E.g. ~5-15), not because the underlying technology can’t deliver a near instantaneous switch, but because the data needs a Point of Consistency (PoC), and this is the “latency time” for delivering a meaningful RPO (E.g. Pre Batch, Post Batch). Mission Critical databases need to establish a Quiesce PoC, to safeguard data consistency.

If the Mainframe user implements this high availability solution for their primary data copy, why wouldn’t they do this for their secondary (E.g. Backup, Archive) data copy? Ultimately there is generally a hierarchy of RTO and RPO objectives, associated with physical and logical failures. A mirrored disk environment only provides rapid recovery (RTO) for a physical component failure, while a logical data failure will manifest itself for all data copies in the mirror topology. Therefore we always have to consider what is our last line of defence for data recovery; typically a secondary backup data copy. Clearly recovering data from a backup, even a disk based backup, generates a significantly higher recovery (RTO) elapsed time. We might also consider data consistency for this backup data copy; namely, has the backup data been completely destaged/written to the target storage device, tape or disk? Of course, if we don’t have a good backup, we can’t recover the data!

OK, we have come full circle to that original question, by definition, a Virtual Tape Library (VTL) solution uses a disk cache to store tape data files, but how long is this data retained on disk? Is it minutes, hours, days, weeks or indefinitely? Only business requirements can dictate the time period tape data is stored on disk, which will influence the VTL solution chosen.

VTL solutions can be classified as either traditional or tapeless. Traditional is a combination of physical drives and cartridge media in an ATL with a Virtual Tape disk cache (usually proprietary) that is destaged periodically to physical cartridge media, where the primary suppliers are of course IBM with their TS7700 family and Oracle with their VSM offering, while Fujitsu have their CentricStor offering. Tapeless VTL solutions are typically FICON/ESCON channel attached appliances to a back-end disk cache (typically IP, FC or iSCSI), where the tape data is permanently stored on disk. Because the back-end disk cache can be any disk subsystem, within reason, the disk acquisition cost is optimized, because it’s classified as Enterprise/Distributed disk, as opposed to Mainframe disk.

There are many suppliers of tapeless VTL solutions, but the primary vendors are EMC with their Disk Library for Mainframe (DLm) offering and HDS with a layered approach including LUMINEX Gateways and HDS disk. EMC recently acquired Bus-Tech, where DLm is an OEM of the Bus-Tech MDL solution, still available via the EMC Select option. IBM, Oracle and Fujitsu also offer tapeless VTL solutions, as and if required, but generally they’re deployed in combination with their traditional physical tape based VTL/ATL offerings. There are also software options, IBM Virtual Tape Facility for Mainframe (VTFM) and CA Vtape, where these software solutions deploy higher cost Mainframe disk as the virtual tape cache.

The majority of VTL solutions benefit from data dedupe functionality, where IBM incorporates their ProtecTIER technology, EMC and HDS incorporate DataDomain technology, while Oracle does not currently support Mainframe dedupe, incorporating a Virtual Library Extension (VLE) as a second tier of VTL disk storage. Ultimately dedupe delivers significant (~10-20:1) data reduction benefits and arguably is mandatory for any large scale Mainframe VTL implementation.

Each and every business must draw their own conclusions for VTL implementations and whether they should be tapeless or not. Most Mainframe users have experienced the benefits of mirrored disk (I.E. IBM PPRC, EMC SRDF, HDS TrueCopy, XRC, et al) and have implemented high-availability solutions with a short-term RTO for physical failures. However, only that business can consider how robust their data recovery processes are for logical data failures, and in the worst case scenario, restoring an entire Mission Critical application from a backup copy. The driving factor for this type of recovery is RTO and where is that “last chance” backup data copy stored, tape or disk storage media, and local, remote or 3rd party data centre?

Just as the business must establish a 1st level RPO and associated RTO for their Mission Critical database structures, typically via a quiesce Point of Consistency (PoC), they must do the same for their 2nd level backup data. If a VTL destages data from disk cache to physical tape, then the time required to create the final physical tape copy will influence the associated RTO, and potentially how much data loss might occur. For the avoidance of doubt, if backup data cannot be destaged to physical tape, then the backup has not been completed, and is unusable. Ultimately data loss is not acceptable, whether for a database or a backup copy. So what steps can the Mainframe user take to minimize this risk?

Because tapeless VTL solutions can attach to any disk subsystem, within reason, IT departments generally have their preferred disk supplier and associated processes. Data dedupe significantly reduces disk acquisition cost and associated network transmission costs, while the functional abilities of disk subsystems are typically higher (I.E. Mirroring, Replication) and more robust when compared with tape subsystems.

If the typical Mainframe user has confidence in their disk mirroring solution for physical failure scenarios, generally associated with the primary copy of Mission Critical data, it seems a logical conclusion that they could extend this modus operandi to secondary (E.g. Backup) copies, reducing, if not eliminating, any data loss concerns.

If the Mainframe user deploys EMC Symmetrix (VMAX) for disk data, they could deploy the DLm 8000 VTL to benefit from SRDF/GDDR functionality; if they deploy HDS USP, they could deploy LUMINEX gateways to benefit from TrueCopy functionality, and so on. There are many options available, when the front-end host connectivity (E.g. FICON, virtual tape drives) is separated from the back-end data store (E.g. IP/FC/iSCSI disk).

Additionally, the smaller Mainframe user that cannot afford hot/warm site recovery facilities can also consider different options for Disaster Recovery solutions. For example, they could deploy a tapeless VTL in their only data centre, benefitting from data dedupe for data reduction, transmitting their backup/archive data via IP (or other network transmission) into a 3rd party supplier’s facilities, duplicating the VTL and disk subsystems to store the data. They can then modify their Disaster Recovery (DR) procedures to invoke DR as and when required, at that point connecting the 3rd party Mainframe resources to the VTL, so data recovery can start immediately. Therefore the traditional off-site DR test at 3rd party provider premises increases in efficiency, while backup data availability is not reliant on the Ford Transit Access Method (FTAM)!

So, how long should secondary copies of Mission Critical data be retained on Virtual Tape disk? Is it minutes, hours, days, weeks or indefinitely? The jury might still be out, but to deliver near 24*7 data availability, for both logical and physical failure scenarios, seemingly at least one secondary copy of Mission Critical data should be retained indefinitely on Virtual Tape disk…

Extended Address Volumes (EAV): Pros & Cons

It wasn’t too long ago that the maximum size of a 3390 DASD volume was ~54 GB (65,520 Cylinders) via the 3390-54.  Then with the release of z/OS 1.10, Extended Address Volumes (EAV) were introduced, delivering a ~4x increase in single device capacity @ 223 GB (262,668 Cylinders)!  Surely enough storage capacity for anybody?

Of course, we all know that 21st Century data requirements are significant, and so the release of z/OS 1.13 (z/OS 1.12 and PTFs) has delivered another ~4x increase, with a single device capacity of 1 TB (~1.182 Million Cylinders).  However, let’s not forget that data storage capacity can increase by ~20%+ per annum, so I guess it won’t be too long before we see another ~4x increase in size, to ~4 TB+…
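
For reference, these capacity figures follow directly from standard 3390 geometry (15 tracks per cylinder, 56,664 bytes per track); the sketch below reproduces them, assuming the commonly quoted 1,182,006 cylinder maximum for the ~1.182 Million Cylinder figure, with small differences down to rounding.

```python
# 3390 geometry: 15 tracks per cylinder, 56,664 bytes per track
BYTES_PER_CYL = 15 * 56_664   # 849,960 bytes per cylinder

for label, cylinders in (("3390-54", 65_520),
                         ("EAV (z/OS 1.10)", 262_668),
                         ("EAV (z/OS 1.13)", 1_182_006)):
    gb = cylinders * BYTES_PER_CYL / 1_000_000_000
    print(f"{label:16s}: {cylinders:>9,} cylinders ~= {gb:,.0f} GB")
# 3390-54 ~= 56 GB (commonly quoted as ~54 GB), EAV ~= 223 GB and ~= 1,005 GB (~1 TB)
```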

EAV implementation relieves disk capacity constraints and allows storage growth without adding more devices.  In today’s world of TCO optimization and a utopia of very short-term ROI, EAV usage will reduce TCO, primarily personnel and environmental (E.g. Power, Cooling, Floor Space) related.  Potentially the ability to manage more data with fewer DASD volumes simplifies the Storage Administration process, therefore increasing the number of TB managed by each technician.  Typically, additional capacity (EAV) can be added dynamically, increasing DASD volume capacity online via the Dynamic Volume Expansion (DVE) function.

Theoretically (as per current architectural constraints) a 3390 EAV can grow to 225 TB; the realm of possibility exists!

The pros of EAV implementation seem obvious, a significant capacity increase in a single footprint, easy implementation, with demonstrable TCO benefits; but is all that glisters always gold?

Learning from history is always a good thing and if we consider the challenges of adopting the 3390-9/27/54 devices, did we encounter any capacity optimization issues?  As a single device increases in size, device occupancy might become a challenge.  For example, 90% occupancy of a 3390-54 @ 54 GB is ~48.5 GB, or put another way, ~5.4 GB of installed capacity is never used.  So if we apply the same metric to a 1 TB device, you guessed it, ~100 GB is never used…
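
The occupancy arithmetic above generalizes easily; the illustrative sketch below reproduces the ~5.4 GB and ~100 GB unused capacity figures for any occupancy level.

```python
# Reproducing the occupancy arithmetic above: at a given occupancy level, the
# remainder of the volume is installed capacity that is never used.
def unused_gb(volume_gb: float, occupancy: float) -> float:
    return volume_gb * (1 - occupancy)

print(f"3390-54 @ 90% occupancy : ~{unused_gb(54, 0.90):.1f} GB unused")
print(f"1 TB EAV @ 90% occupancy: ~{unused_gb(1000, 0.90):.0f} GB unused")
```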

“So what”, you might say.  Indeed, the separation of the physical and logical device eliminates any physical space utilization considerations, but what about the number of data sets and more importantly extents on that EAV or even 3390-54 DASD volume?  An issue that has plagued many Mainframe installations is disk fragmentation, as no matter how big a DASD volume, sometimes successful data set allocation is dependent upon sufficient contiguous extents to satisfy primary allocation or secondary extension.

At first glance, the process of defragmentation is very simple, DFSMSdss DEFRAG, FDR/CPK COMPAKTOR, et al, but typically these processes require minimal data set allocation activity and are batch orientated. DASD enqueue time is a consideration, as these traditional Mainframe defrag solutions can generate significant enqueue activity for the VTOC and data sets alike. Can the 21st Century business that requires near 24*7 data availability allocate sufficient time (E.g. minimal processing window) to perform such manual defragmentation activities? If only defragmentation could be transparent, automated and dynamic…

RealTime Defrag (RTD) is one such option, deploying a multi-faceted approach to delivering “on-line defrag” (a simple conceptual sketch of the extent consolidation idea follows this list):

  1. Release – Release allocated but unused space for all data set types
  2. Combine – Combine extents, reducing the number of allocated extents for optimized performance and SE37 abend eradication
  3. Defrag – Reorganize data sets into contiguous groups, increasing size of free extents, optimizing performance and SB37 abend eradication
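
As a purely conceptual illustration of the “Combine” idea, and not a representation of how RTD is implemented, the sketch below merges adjacent extents belonging to the same data set, reducing the extent count without moving data.

```python
# Conceptual model of the "Combine" step: merge adjacent extents that belong to
# the same data set, reducing the extent count without moving data.
def combine_extents(extents):
    """extents: list of (dsname, start_cyl, length_cyl) tuples."""
    merged = []
    for ds, start, length in sorted(extents, key=lambda e: e[1]):
        if merged and merged[-1][0] == ds and merged[-1][1] + merged[-1][2] == start:
            ds_prev, start_prev, len_prev = merged[-1]
            merged[-1] = (ds_prev, start_prev, len_prev + length)   # contiguous: extend
        else:
            merged.append((ds, start, length))
    return merged

fragmented = [("PROD.DATA", 0, 10), ("PROD.DATA", 10, 5), ("TEST.DATA", 15, 3),
              ("PROD.DATA", 18, 4)]
print(combine_extents(fragmented))
# [('PROD.DATA', 0, 15), ('TEST.DATA', 15, 3), ('PROD.DATA', 18, 4)] - 4 extents -> 3
```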

In conclusion, EAV deployment can only be a good thing, delivering demonstrable TCO benefits in the form of dramatic single-footprint (I.E. Disk Subsystem) capacity increases.  RealTime Defrag can also increase service availability, eradicating the requirement for manual and batch orientated defrag activities, while safeguarding that installed disk capacity is optimized, EAV or not.