The Software Defined Mainframe (SDM): An Alternative Approach?

Some consider the IBM Mainframe to be the last bastion of proprietary computing platforms, for obvious reasons, namely the CPU server architecture and the single manufacturer, IBM.  The historical ability of the IBM Mainframe to transform Data Processing into Information Technology, while still participating in the Digital Era, is without doubt.  However, for many, the complicated and perceived ultra-expensive world of software pricing generates concern, largely based upon Fear, Uncertainty and Doubt (FUD), which might have caused years if not decades of under-investment for those organizations with an IBM Mainframe.

Having worked with the IBM Mainframe for 35+ years, I have gained knowledge that allows cost optimization alongside contemporaneous usability, and given the importance of the IBM Mainframe platform to IBM from a revenue viewpoint, the IBM Mainframe will have a long future.  However, the last decade or so has seen a rapid evolution in Open Source, DevOps, Enterprise Class Support for Distributed Platforms, Mobile and Cloud computing, et al, potentially generating an opportunity for the global IBM Mainframe user base to once again consider the platform’s value proposition…

Let’s consider this server platform choice from a business viewpoint.  On the one hand, there are the well-worn market statements: 80%+ of corporate data resides on or originates from IBM Mainframes, while IBM Mainframes enable 70%+ of global commercial transactions, et al.  On the other hand, in recent times global businesses have leveraged cloud or Linux Open Source technologies to run their operations.  For instance, Netflix reportedly runs its media on demand business via the Amazon Web Services (AWS) cloud, while the same platform is facilitating a Data Centre reduction from 34 to 4 for General Electric (GE).  There are many other such “early adopters” of this commodity infrastructure provision opportunity, including Capital One, Hertz and Juniper, naming but a few.

Quite simply, the power of Mobile processors, primarily ARM, and their supporting software ecosystem empower each and every potential consumer with a palm-sized smart computing platform, while the power and supporting software ecosystem of x86 processors allow each and every global business, mature or not yet launched, to deliver an eminently usable and scalable IT infrastructure for their business model.

Of course, the IBM Mainframe can do this; it has always been at the forefront of IT architectures and always will be, but for the “naysayers”, its perceived high acquisition and running costs are always an easy target.  As somebody much cleverer than I once said, timing is everything, and we’re now encountering a “golden sunset” for those Mainframe Baby Boomers, just like myself, who will retire in the next decade or so.  Recently I was talking with a large IBM Mainframe customer, who stated “we’re going to lose 1500 years of IBM Mainframe experience in the next 10 years; how can you replace that resource easily?”  Let’s just think about that metric: ~50 people with an average of ~30 years’ experience, all retiring in a short time frame!  You must draw your own conclusions as to that conundrum; how do you replace that level of experience?

In conclusion, no matter what IBM delivers from an IBM System z viewpoint, there is no substitute for experience and skill, and no company, not even IBM, has an answer to skills provision.  In the last 10-20 years, Outsourcing or Managed Services has provided an alternative approach for some companies, but even this option has finite resources.  If we consider the CFO viewpoint, where the bottom line is the only true financial metric, it’s easy to envisage a situation where many companies consider an alternative to the IBM Mainframe platform, both from a cost and viability viewpoint.  As a lifelong IBM Mainframe champion and as previously stated, I believe there will always be a solution for safeguarding the longevity and viability of the IBM Mainframe for any Medium to Large sized business.  However, now is the time to act: embrace the new Open Source, DevOps and Hybrid Cloud opportunities, to transition from a Baby Boomer to a Millennial Mainframe workforce!

Is there an alternative approach and what is the Software Defined Mainframe (SDM)?

Put simply, SDM is a technology from LzLabs enabling the migration of mission-critical workloads from legacy IBM Mainframe environments to x86 Linux platforms.  Put another way, LzLabs have developed a managed software container that provides enterprises with a viable way to lift and shift applications from IBM Mainframes into Red Hat Linux or Cloud environments.  At first glance, the primary keyword here is container; there was a time when the term container might have been foreign to the System z Mainframe, but with LinuxONE and z/VM, Docker and KVM are now commonplace and accepted functions (see the container sketch after the list below).  The primary considerations for any platform migration would include:

  • Seamless Migration: The LzLabs Software Defined Mainframe (SDM) ensures the key capabilities of screen handling, transaction management, recovery and concurrency are preserved without changes to the applications. LzOnline is capable of processing thousands of online customer transactions per second using commercial off-the-shelf hardware.
  • Major Subsystem Compatibility: The LzLabs Software Defined Mainframe (SDM) ensures 100% compatibility with existing job control syntax, and also enables job submission via network connected nodes that support conventional job entry protocols. LzBatch provides a full spool capability that enables output to be managed and routed in familiar ways. Use of conventional job submission models, with standard job control, also means existing batch scheduling can operate with minimal changes.  Other solutions include LzRelational for Relational Database Management System (RDBMS) support and LzSecure, an authentication and authorization subsystem using security rules migrated from the incumbent IBM Mainframe platform.
  • Application Code Stability: An innovative approach that avoids the requirement to recompile or rewrite legacy COBOL or PL/I application source code. Leveraging functionality delivered by Cobol-IT and Eranea, SDM also provides a simple and straightforward process to convert and potentially modernize existing application source code to Java.
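
To make the container concept more tangible, here is a simplified illustration using the Docker SDK for Python, running a hypothetical containerized legacy workload on a commodity Linux host.  To be clear, the image name, port mapping, volume and environment variable are illustrative assumptions, not the actual LzLabs SDM packaging.

```python
# A simplified illustration of the managed container concept, using the
# Docker SDK for Python; the image name, port, volume and environment
# variable are hypothetical, not the actual LzLabs SDM packaging.
import docker

client = docker.from_env()

# Run a hypothetical containerized legacy workload on a commodity Linux
# host, exposing a TN3270 port for screen handling and mounting a volume
# of migrated datasets.
container = client.containers.run(
    "example/legacy-workload:latest",  # hypothetical image
    detach=True,
    ports={"3270/tcp": 3270},
    volumes={"/data/migrated": {"bind": "/workload/data", "mode": "rw"}},
    environment={"REGION_SIZE": "512M"},  # illustrative tuning parameter
)
print(container.status)
```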

The realm of possibility exists and there are likely to be a number of existing IBM Mainframe users that find themselves with challenges, whether a retiring workforce or a back-level application code base.  The Software Defined Mainframe (SDM) solution provides them with a potential option for simplifying a transition process, with seemingly minimal risk, while eradicating any significant dependence on another Distributed Systems platform supplier during the arduous application source and data migration process.

From my viewpoint, I hope that this innovative LzLabs approach is a wake-up call for IBM themselves, who continue to deliver a strategic Enterprise Class System z platform, with all of its long-term challenges, primarily cost based, including the intricate and overcomplicated sub-capacity software pricing structure.  Without doubt, any new workload can easily be accommodated for low cost via the recent LinuxONE offering, but somewhere along the line, IBM perhaps overlooked a number of Small to Medium sized customers, who once might have used entry-level or plug-compatible platforms, including but not limited to S/390 Integrated Server, MP3000, FLEX-ES zFrame, T3 Liberty, et al.  Equally, from a dispassionate viewpoint, I welcome the competition of the LzLabs Software Defined Mainframe (SDM) offering and I would encourage all CIO and indeed other CxO personnel to consider the merits of this solution.

z/VM: The Most Flexible System z Operating System?

When considering IBM System z Operating Systems, typically z/OS is considered to be the flagship product, delivering best-of-breed features, including but not limited to performance, reliability, availability, security, capacity, et al.  It is therefore easy to overlook the flexible virtualization capabilities of z/VM, which delivers the architectural foundation for the increasingly attractive LinuxONE offering.  Quite simply, the fundamental strength of z/VM is its ability to let hundreds if not thousands of virtual machines share system resources with high levels of resource utilization.  The recent release of z/VM V6.4 provides even greater levels of scalability, security, resource optimization and efficiency to create opportunities for cost savings, while providing a robust foundation for cloud computing on z Systems servers.

Major technical highlights of z/VM 6.4 include:

  • Simultaneous MultiThreading (SMT) technology extends per-processor core capacity growth beyond single-thread performance for Linux on z Systems running on an IBM Integrated Facility for Linux (IFL) specialty engine on a z13, z13s or LinuxONE server.
  • Enhanced Real & Guest Virtual Memory Support. The maximum amount of real storage supported by z/VM increases from 1 to 2 TB, whereas maximum supported virtual memory for a single guest remains at 1 TB.  Maintaining the virtual to real memory allocation, doubling the real memory used, results in doubling the active virtual memory that can be used effectively.  This virtual memory can be sourced from an increased number of virtual machines and/or larger virtual machines, delivering greater leverage of white space.
  • Surplus CPU Power Distribution Improvement. Virtual machines not utilizing all of their entitled CPU power, determined by their share setting, generate “surplus CPU power”.  This surplus CPU resource can be distributed to other virtual machines in proportion to their share settings, managed independently across virtual machines for each processor type, namely General Purpose (GP), zIIP, IFL, et al (see the sketch after this list).
  • Guest Large Page Support. z/VM 6.4 now includes support for the Enhanced Dynamic Address Translation (DAT), allowing a guest machine to exploit large (1 MB) pages.  Larger page sizes decrease the amount of guest memory needed for DAT tables, therefore decreasing the overhead required to perform address translation.  In all cases, guest memory is mapped into 4 KB pages at the host level.
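
As a minimal sketch of the proportional distribution described in the Surplus CPU Power bullet above, consider the following Python fragment; the guest names and share values are hypothetical, and the real z/VM scheduler is of course considerably more sophisticated.

```python
# A minimal sketch of proportional surplus CPU distribution, as described
# in the bullet above.  Guest names and share values are hypothetical; the
# real z/VM scheduler is considerably more sophisticated.

def distribute_surplus(surplus: float, shares: dict[str, int]) -> dict[str, float]:
    """Distribute surplus CPU power to guests in proportion to their shares."""
    total = sum(shares.values())
    return {guest: surplus * share / total for guest, share in shares.items()}

# Guests under-consuming their entitlement free up 40 units of CPU power,
# redistributed here across three busy guests by relative share setting.
print(distribute_surplus(40.0, {"LINUX01": 100, "LINUX02": 200, "LINUX03": 100}))
# {'LINUX01': 10.0, 'LINUX02': 20.0, 'LINUX03': 10.0}
```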

From a Linux environment viewpoint, z/VM V6.4 is a supported environment for IBM Dynamic Partition Manager, for Linux-only systems with SCSI storage.  This simplifies system administration tasks, delivering a more positive experience for those with limited System z Mainframe administration skills.  IBM Wave Version 1 Release 2 is now included in z/VM V6.4 as a priced feature, simplifying the task of administering a z/VM environment.  Using Dynamic Partition Manager, an inexperienced z/VM technician can create a z/VM partition in ~10 minutes!

Supporting today’s agile application development and hybrid cloud implementations, z/VM and LinuxONE virtual servers can be natively managed using OpenStack open cloud architecture-based interfaces, namely IBM OpenStack for z Systems.  OpenStack is an Infrastructure as a Service (IaaS) cloud computing open source project, managed by the OpenStack Foundation.  With the adoption of OpenStack as part of the IBM cloud strategy, z/VM drivers provide OpenStack enablement for z/VM virtual machines running Linux on z Systems and LinuxONE.  Open standards such as OpenStack enable enterprises to be more agile, resolving potential issues such as vendor lock-in, technical expert recruitment, long application development cycles and security challenges.
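
As an illustration of this OpenStack enablement, the following sketch provisions a Linux virtual machine through the standard openstacksdk library; the cloud name, image, flavor and network are hypothetical placeholders for a site’s own IBM OpenStack for z Systems configuration.

```python
# A sketch of provisioning a z/VM Linux guest via standard OpenStack APIs,
# using the openstacksdk library.  The cloud name, image, flavor and
# network are hypothetical placeholders for a site's own configuration.
import openstack

conn = openstack.connect(cloud="zvm-cloud")  # named cloud from clouds.yaml

image = conn.compute.find_image("rhel-s390x")        # hypothetical image name
flavor = conn.compute.find_flavor("linuxone.small")  # hypothetical flavor
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="linux-guest-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # ACTIVE once the z/VM guest is up
```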

The next evolution of z/VM cloud enablement technology is the OpenStack Liberty based Cloud Management Appliance (CMA), available for z/VM 6.3 and 6.4.  z/VM installations wanting to deploy cloud based solutions beyond Cloud Manager with OpenStack for z Systems should utilize the cloud enablement support provided by the z/VM OpenStack Liberty based CMA.  This CMA replaces the IBM Cloud Manager with OpenStack for System z solution, withdrawn from marketing in June 2016.

The z/VM hypervisor extends the capabilities of z Systems and LinuxONE environments from the standpoint of sharing hardware assets, virtualization facilities and communication resources.  In conjunction with IBM Wave, z/VM makes it easier to derive maximum value from large-scale virtual server hosting on z Systems and LinuxONE.  These benefits include software and personnel savings, operational efficiency, power savings and optimal qualities of service.  The z/VM virtualization technology is designed to enable organizations to run hundreds to thousands of Linux servers on a single System z Mainframe footprint, alongside other System z Operating Systems, such as z/OS and z/VSE, or as a large-scale enterprise LinuxONE server solution.

Advanced virtualization features like multisystem virtualization and live guest relocation with z Systems, LinuxONE, z/VM, and Linux on z Systems or LinuxONE help to provide an efficient infrastructure for deploying private clouds to support workloads that scale both horizontally and vertically at a low total cost of ownership.

Although some might consider z/OS to be the flagship IBM System z Mainframe Operating System, arguably z/VM is the industry standard for optimal resource virtualization across numerous Operating System deployments.

zAPI: System z Deployment Into The API Economy

Having been in the IT industry for 35+ years, I have always fully embraced and learned new technologies, to find strategic solutions for business challenges.  Obviously, starting in 1980, my heritage is IBM Mainframe, supplemented by UNIX, Wintel and Linux along the way.  Each and every platform has its merits, and during this 35+ year period, I have attended many conferences, for all platforms.  What I have noticed during this period is the attendance of many IBM Mainframe CIO, CTO or Chief Architect individuals at non-IBM Mainframe conferences, but very few, if any, equivalent Distributed Systems personnel at IBM Mainframe conferences.

I’m always surprised and disappointed to hear about organizations talking about decommissioning the IBM Mainframe platform, with tenuous reasons, based on Distributed Systems FUD messaging, as opposed to their own business requirements.  Thankfully these scenarios are decreasing over the years.  Presumably if an organization decides to migrate from one Distributed Systems platform to another or perhaps the Cloud, they do at least attend the relevant platform conferences to make an informed decision.

Over the last 25 years or so, IBM itself has competed across differing divisions and options, whether UNIX (AIX), System z or, in recent years, Linux on z Systems, most notably with the LinuxONE launch at LinuxCon 2015.  One would hope that the world’s key IT decision makers might attend LinuxCon with an open mind and learn more about the System z Mainframe?

A ridiculous notion might be that one server platform technology can satisfy a 21st Century organization’s IT infrastructure for its mission critical services.  Clearly that has not been the case since the advent of Client Server, and today’s emerging Digital business requires an infrastructure of multiple layers, where the underlying server technology is somewhat arbitrary, and arguably a commodity resource.  Conversely, the underlying data and associated applications differentiate one business from another, delivering business value and competitive edge.

Let’s take some time to consider this IT architecture design, which very quickly dismisses any notion that one server technology delivers all business requirements:

Such an architecture diagram does not impose any technology decisions.  Conversely, it explores the “data journey” from access or creation, via Systems of Engagement (SoE), to eventual storage within Systems of Record (SoR) data repositories (I.E. Database).  Some might say it was forever thus, with the exception of the Multi-Channel SDKs & APIs layer, where the savvy organizations will embrace DevOps, Hybrid Cloud and connectivity (I.E. API, SDK) solutions, seamlessly integrating modern agile applications with that most valuable business asset, Systems of Record (SoR) data.

Today’s Application Developer doesn’t need to be concerned with the platform used for their DevOps application processes, the Transaction Server or indeed the Database Server.  Sure, several decades ago, maybe even a decade ago, application code was deeply associated with, if not confined to, a specific CPU server architecture.  Clearly that is no longer the case.  Any organization that still thinks in this legacy manner is behind the times, and this is unfortunate.  Associating such outdated thinking with the System z Mainframe is arguably careless, and not a reason for dismissing an incumbent System z platform, or not considering a System z platform in the future.

Arguably the greatest strengths of today’s System z IBM Mainframe, currently packaged as the z13 or LinuxONE, are as a Database Server (E.g. DB2), Transaction Server (E.g. CICS, WebSphere Application Server) and Security Server (E.g. ACF2, RACF, Top Secret).  From a LinuxONE viewpoint, it’s just another server, capable of processing all of the latest strategic Open Source and Commercial Off The Shelf (COTS) Cloud, Database and Application solutions, while benefitting from the unparalleled System z Quality of Service (QoS) attributes.

However, for those organizations already deploying a System z Mainframe, its greatest perceived issue is TCO.  Without doubt, the convoluted and intricate Workload Licence Charges (WLC) are unnecessarily complicated and perceived as being very expensive.  Optimizing these costs requires a modicum of expertise, ensuring that the best contractual conditions are negotiated.  However, I encounter the same complexities with Distributed Systems platforms, where software license costs can spiral out of control for significant CPU capacity deployments.  Whatever platform is deployed, System z Mainframe or Distributed System, unless the business has the requisite skills in place, technical and commercial, to safeguard the lowest cost possible, commercial ISV suppliers will take advantage of such an oversight.
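
To illustrate one element of that expertise: sub-capacity WLC charges are driven by the peak Rolling 4-Hour Average (R4HA) of MSU consumption, not by installed capacity.  The following minimal sketch, with hypothetical hourly MSU figures, shows why smoothing workload peaks can directly reduce the monthly software bill.

```python
# A minimal sketch of the Rolling 4-Hour Average (R4HA) metric that drives
# sub-capacity WLC: the monthly charge follows the peak 4-hour rolling
# average of MSU consumption, not installed capacity.  The hourly MSU
# figures below are hypothetical.

def peak_r4ha(hourly_msu: list[float]) -> float:
    """Return the peak rolling 4-hour average MSU across the period."""
    return max(sum(hourly_msu[i:i + 4]) / 4 for i in range(len(hourly_msu) - 3))

# A quiet machine with one batch spike: the peak R4HA, and hence the WLC
# charge, is set by the spike, which is why workload smoothing and soft
# capping techniques can directly reduce software costs.
msu = [100, 110, 105, 100, 400, 420, 380, 150, 120, 110, 100, 100]
print(f"Peak R4HA: {peak_r4ha(msu):.1f} MSU")  # Peak R4HA: 337.5 MSU, not 420
```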

I’m not advocating any server technology, System z Mainframe, Distributed System or Cloud, as each resource has its merits, depending on the business requirement.  However, today’s 21st Century organization must enable new business channels by leveraging, and arguably monetizing, their Systems of Record (SoR) enterprise data.

Today, organizations need to consider an API Economy, where they expose their internal digital business assets or services in the form of Web API services to external 3rd party partners and consumers, with an overall objective of unlocking increased business value via the creation of new assets.  Such an API Economy will require integration of Transaction and Data resources, specifically:

  • Centrally manage the consumption of enterprise wide business logic, for both Systems of Record (SoR) & Systems of Engagement (SoE)
  • Extend business (E.g. Product, Brand) reach from Systems of Record (SoR), incorporating Systems of Engagement (SoE)

Previously I wrote about How to Connect Mobile Workloads to System z, detailing the conceptual steps required to expose existing SoR data assets via SoE transaction services, using z/OS Connect.  For a fully integrated end-to-end solution, we must also consider the Application Programming Interfaces (API), being the digital glue that seamlessly links applications, services and systems together.
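
As a minimal sketch of this consumption pattern, the following fragment shows an application reading SoR data through a REST API exposed via z/OS Connect; the host name, URI path, credentials and JSON payload are hypothetical, as each z/OS Connect installation defines its own.

```python
# A sketch of the consumption pattern described above: a mobile or web
# application reading Systems of Record (SoR) data through a REST API
# exposed by z/OS Connect.  The host, path, credentials and JSON shape are
# hypothetical; each z/OS Connect installation defines its own.
import requests

ZOS_CONNECT = "https://zosconnect.example.com:9443"  # hypothetical endpoint

response = requests.get(
    f"{ZOS_CONNECT}/accounts/v1/balance",
    params={"accountNumber": "12345678"},  # illustrative query parameter
    auth=("apiuser", "secret"),            # placeholder credentials
    timeout=10,
)
response.raise_for_status()

# The caller neither knows nor cares that CICS, DB2 and RACF may sit behind
# this JSON response; the underlying server platform is effectively arbitrary.
print(response.json())
```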

IBM API Connect is a solution that manages the API lifecycle for both On-Premises and Cloud environments.  IBM API Connect delivers capabilities to Create, Run, Manage & Secure API resources and Microservices.  It also enables you to rapidly deploy and simplify API administration, across the organization.

API Connect can be deployed On-Premises via Linux on z Systems, in the cloud (E.g. Bluemix), as well as all other popular Distributed Systems.  Once again, the main message is that the chosen server is arbitrary, System z Mainframe, Distributed System or Cloud.  The server should be considered as a commodity resource, leveraging from existing business logic (I.E. SoE) and data (I.E. SoR), while evolving existing Application Lifecycle Management (E.g. Agile, API Economy, DevOps) is the key.

My final observation concerns the Mainframe Baby Boomer (E.g. born ~1960) versus the Millennial (E.g. born ~1995) technical personnel resource.  Without doubt, there are significant differences in their approach to application programming, but only one resource, namely the Baby Boomer, knows the business really well.  I think these folks have the ability to learn another 21st Century programming language, as well as COBOL, but perhaps their best attribute is an analytical role, especially for the integration of the SoE and SoR layers.  Working very closely with Millennial technical resources delivering the new Application (I.E. App, API) resources, the Mainframe Baby Boomer still has something valuable to offer in their final employment years: delivering value from an analytical viewpoint, while transferring their skills and knowledge to their successors, namely the Millennials.

In conclusion, dismissing any server technology for Fear, Uncertainty or Doubt (FUD) reasons is an unproductive and ridiculous notion.  More importantly, what might your business lose in opportunity, spending several years or more migrating from one platform to another, while your competitors are embracing the Digital Age with an API Economy approach, delivering more value from their existing business SoE (transactions) and SoR (data) assets?

System z Meets Open Source Linux

Recently IBM launched their LinuxONE offering, packaged in the most powerful and secure enterprise server, namely System z, designed for the new application economy and hybrid cloud era. Although IBM has provided Linux support for the Mainframe server since 2000, this LinuxONE packaging promises a unified portfolio of hardware, software and services solutions for mission-critical Linux applications.

To supplement the existing SUSE and Red Hat support, Ubuntu is included, along with Open Source enablement, including Apache Spark, Chef, Docker, MariaDB, MongoDB, Node.js and PostgreSQL, endeavouring to provide clients with choice and flexibility for hybrid cloud deployments.

From a big picture viewpoint, LinuxONE can be summarised as:

  • Linux Your Way: Choose the Linux environment and tools for your organization
  • Linux Without Limits: Benefit from Enterprise Class Linux support
  • Linux Without Risk: Safeguard business applications with the secure and resilient System z Server

The LinuxONE systems are named Emperor and Rockhopper, loosely classified as High-End and Entry-Level System z servers. LinuxONE Emperor delivers ultimate flexibility, scalability, performance and security for mission-critical applications. Scalability is as per the latest z13 server, allowing growth to handle the most demanding workloads. LinuxONE Rockhopper delivers the entry point into the LinuxONE family, offering all the same great capabilities and value, with the flexibility of a smaller package.

LinuxONE includes a choice of hypervisors and management tools, namely KVM for LinuxONE and/or IBM z/VM. This virtualization capability claims support for up to 8000 virtual servers (several thousand containers) in a single System z server footprint, allowing for parallel processing of Test, Development and Production environments. Additionally, new servers and containers can be initialized and running in minutes, with automated resource provisioning and reallocation in seconds.

From a performance viewpoint, System z metrics apply: fast CPU processors, significant I/O capability and up to 10 TB of Memory, all delivering consistent and predictable sub-second response times for thousands of users. There is a reported capability of 30 Billion RESTful web transactions per day, with ~500,000 database read/write operations per second.
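
A quick back-of-the-envelope conversion puts those headline figures into context:

```python
# A back-of-the-envelope check on the headline figures quoted above.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

rest_per_day = 30_000_000_000   # 30 Billion RESTful transactions per day
print(f"{rest_per_day / SECONDS_PER_DAY:,.0f} RESTful transactions per second")
# ~347,222 per second, consistent with ~500,000 database read/write
# operations per second at one to two database operations per transaction
```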

The LinuxONE offering is also a key component of the IBM Cloud, Analytics, Mobile & Security (CAMS) framework:

  • Cloud: An agile and trusted cloud infrastructure to meet new business demands with greater efficiency and lower costs for IT service delivery. Example cloud usage includes Database, Enterprise Systems of Record and Hybrid cloud platforms.
  • Analytics: Flexible, resilient, high performance business and operational analytics for Business Intelligence, Big Data insights and intelligent, continuous business availability.
  • Mobile: Build a premier mobile solution for your business to deliver the best possible experience for your clients, employees and partners alike. Facilitate agile development and deployment of mobile applications, with secure end-to-end mobile transactions, personalized via integrated data analytics.
  • Security: System z has been associated with the highest EAL5+ Common Criteria certification for many years, safeguarding mission-critical data from cradle-to-grave. Security functions such as full data encryption, cryptographic processors and end-to-end security, combined with the unmatched reliability and availability of the System z server, ensure mission-critical data and services are fully protected and available.

Finally, and a key point, LinuxONE promises TCO optimization with pricing your way. A straightforward menu of pricing options includes:

  • A fixed monthly cost usage model for hardware and software resources
  • A per core software pricing model, with 30 days’ notice for cancellation or resource change
  • A 36 month rental option, with buy/replace/return options at contract end

In theory, LinuxONE could be perceived as just a tweak of existing System z Linux options, including the most recent z13 server, Ubuntu and Open Source support. What has changed are user requirements, the requirement for flexible and agile computing, where Cloud, Analytics, Mobile and Security dominate many CIO agendas.

It is my hope that each and every CIO, System z literate or not, at least considers the LinuxONE platform for their mission-critical enterprise workload, as from a simplistic viewpoint, LinuxONE is just another ubiquitous black server box; or is it…

IFL – A Cost Efficient zSeries Platform?

In September 2000, IBM introduced the Integrated Facility for Linux (IFL) processor, a specialty engine for, some might say dedicated to, running the Linux Operating System.  At the time of this announcement, companion software named S/390 Virtual Image Facility for Linux was introduced to assist in the rapid deployment of IFL configurations, especially for non-Mainframe personnel.  However, this product was quickly discontinued, in favour of the standard z/VM Operating System, which is not difficult to learn and can accommodate hundreds if not thousands of zLinux images.

Today, the IFL is still a processor dedicated to Linux workloads on IBM System z servers.  The IFL is supported by z/VM virtualization and the Linux operating system.  The IFL cannot run other IBM operating systems.  The competitively priced IFL processor is a CPU capacity enabler, exclusively for Linux workloads.  Linux deployment (I.E. SUSE & Red Hat) on IFLs can reduce expenses in the areas of operational effort, energy, floor space and especially software.

The IFL provides the following functions and benefits:

  • The IBM Enterprise Linux Server is a dedicated System z Linux server, comprised of only IFL processors
  • No additional IBM software charges for the traditional (E.g. z/OS, CICS, DB2, WebSphere, et al) environment
  • Performance improvement for Linux workloads with each successive generation of IFL and System z technology
  • Linux workload on the IFL does not result in increased IBM software charges for traditional System z operating systems and middleware
  • Same functionality as a General Purpose processor on a System z server
  • HiperSockets can be used for communication between Linux images, or Linux and other operating system images on the same System z system
  • z/VM virtualization and most IBM Linux middleware products, plus most vendor software products are priced per processor (core) according to the System z IBM International Program License Agreement (IPLA).  IPLA products have a one-time-charge (OTC) and an annual (optional) maintenance charge, called Subscription & Support
  • Supported by the current z/VM virtualization and IBM Wave for z/VM software versions
  • Always a full capacity processor, independent of the capacity of the other processors in the server
  • Orderable as a System z hardware feature. The number of orderable IFL features varies by the server model and configuration
  • Designed to operate asynchronously with other General Purpose processors
  • Managed by PR/SM in a logical partition with dedicated or shared processors. The implementation of an IFL requires a Logical Partition (LPAR) definition, following the normal LPAR activation procedure; an LPAR defined with an IFL cannot be shared with a General Purpose processor.

There will always be the debate as to which processor and associated server type (E.g. x86, POWER, SPARC) is the most cost efficient, but there is no doubt that the ability to accommodate hundreds if not thousands of zLinux instances in one environmentally friendly (E.g. Power, Cooling, Floor Space, et al) zServer footprint, with software priced per core, is worthy of consideration.

Adoption of zLinux has been steady, especially in the emerging territories, where it’s not unusual for zSeries deployments to be totally zLinux (I.E. IBM Enterprise Linux Server) based.  Moreover, the majority of large and traditional IBM Mainframe users (I.E. z/OS) have installed at least one IFL, if only to evaluate the z/VM and zLinux offering.  Many have deployed the IFL and associated zLinux solution for business requirements.

Therefore, if one of the major cost benefit features of the IFL is optimized software costs, can the IFL processor be considered for other workloads, originating from the traditional zSeries (I.E. z/OS) environments?

Proximal Systems Corporation (PSC) is a company with a solution that transparently offloads data processing from IBM Mainframes to Distributed Systems, with an objective of reducing software cost, while maintaining or improving performance.  The company name is derived from the concept of bringing disparate computing systems into close proximity, functionally speaking, providing totally seamless and transparent interoperability.  The result is a unified computing complex within which various tasks can be easily migrated between systems to their most cost efficient operating environment, while still being able to interoperate as if they were all hosted together on the same system.

The PSC Proxy Coupling Technology allows a CPU orientated task to be offloaded from one system to another by means of an associated proxy task, which has an identical interface to the task being offloaded, but delegates the majority of the processing to an offloaded task on another system.  The primary objectives of this function are the cost savings and/or performance improvements that might be delivered by migrating tasks to systems that are able to execute those tasks more efficiently.

The fact that the proxy task maintains the same interface as the application being replaced is crucial, as many past Mainframe migration projects have failed due to insurmountable interoperability problems between the Mainframe and Distributed Systems servers (I.E. Windows, Linux, UNIX, et al).  Proxy Coupling Technology offers a solution to this long-standing challenge.  In theory, this allows for the transparent offload of a traditional z/OS workload (E.g. Sort) from General Purpose (GP) processors, to a less expensive (E.g. IFL) alternative…

In the first instance, the Proxy Coupling Technology offloads General Purpose CPU workload associated with the z/OS sort (I.E. CA Sort, DFSORT, Syncsort) function to another platform (E.g. IFL).  For IFL based implementations, HiperSockets are utilized to transfer data at memory speeds from the z/OS task to zLinux on the IFL, where the sort operation completes, while the resulting z/OS task and associated data are maintained as per normal.  From an IFL viewpoint, Ahlsort software performs the sort operation, being a sort solution that maintains compatibility with the majority of z/OS sort functions (I.E. Control Card Syntax).  Therefore, this is a transparent implementation, where the only consideration is how much CPU capacity is required for the offload function (E.g. IFL, x86).  The benefits are reduced z/OS MSU usage for the sort function, which can be quite significant, as most business data (E.g. Database Offloads, Customer Orientated, et al) is sorted on a daily if not more frequent basis.
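
PSC has not published its implementation, but the proxy pattern itself is straightforward to sketch: the proxy presents exactly the interface the batch job expects, while delegating the processing elsewhere.  The following conceptual Python fragment simulates the idea, with the “remote” IFL worker represented in-process for brevity.

```python
# A conceptual illustration of the proxy pattern described above; it is NOT
# PSC's actual implementation.  The proxy presents the same interface as
# the original sort step, while the CPU-intensive work is delegated to a
# worker on another system (E.g. zLinux on an IFL, reached via
# HiperSockets).  Here the "remote" worker is simulated in-process.
from typing import Iterable


class LocalSort:
    """The original interface the batch job expects."""

    def sort(self, records: Iterable[str], key_start: int, key_len: int) -> list[str]:
        return sorted(records, key=lambda r: r[key_start:key_start + key_len])


class ProxySort(LocalSort):
    """Identical interface; delegates the processing to an offload worker."""

    def __init__(self, worker: LocalSort):
        self.worker = worker  # stand-in for a HiperSockets-connected IFL node

    def sort(self, records, key_start, key_len):
        # Ship the records to the offload system, sort there, return the
        # results; the calling job cannot tell the difference.
        return self.worker.sort(records, key_start, key_len)


records = ["0002SMITH   ", "0001JONES   ", "0003BROWN   "]
print(ProxySort(LocalSort()).sort(records, key_start=0, key_len=4))
# ['0001JONES   ', '0002SMITH   ', '0003BROWN   ']
```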

Just as IBM introduced the zAAP on zIIP capability, which allowed some customers to more easily justify a specialty engine (I.E. zIIP) by combining workloads to exploit the full capability of that engine, in theory the same ethos applies with the Proxy Coupling Technology.  For the avoidance of doubt, workloads that can be processed on an IFL, such as z/OS sort tasks, can assist in delivering higher Return On Investment (ROI) levels for the IFL, for example (see the illustrative calculation after this list):

  • Reduced z/OS WLC MSU usage (I.E. Sort function offload) and associated software cost savings
  • IFL processors run at full speed and do not add to traditional workload (I.E. z/OS) software costs
  • Utilize any spare IFL CPU resource, releasing General Purpose CPU resource for other work
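
As a purely illustrative calculation, with entirely hypothetical cost figures, the ROI argument reduces to comparing the MLC saving from the offloaded MSUs against the amortized cost of the IFL capacity:

```python
# An entirely hypothetical cost comparison for sort offload from General
# Purpose engines to an IFL.  Real MSU rates, MLC charges and IFL costs
# vary by contract and must be modelled per site.
msu_offloaded = 50           # hypothetical peak R4HA MSU removed by offload
cost_per_msu_month = 1_000   # hypothetical blended MLC $/MSU/month
ifl_monthly_cost = 15_000    # hypothetical amortized IFL hardware + software

monthly_saving = msu_offloaded * cost_per_msu_month - ifl_monthly_cost
print(f"Net monthly saving: ${monthly_saving:,}")  # $35,000 on these figures
```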

In conclusion, the Proxy Coupling Technology offers a proposition that is similar to the IBM philosophy of reducing z/OS software costs via specialty engines.  Seemingly to date, only the zIIP and zAAP specialty engines were available to optimize CPU usage for z/OS workloads.  Offloading CPU cycles, and thus MSU workload, to the IFL makes sense, utilizing a cost efficient and indeed full power CPU engine, given that for cost reasons, the majority of z/OS customers perhaps don’t deploy the “highest” derivative of General Purpose CPU engine available to them.  On the face of it, the realm of possibility exists for other workloads to benefit from z/OS to IFL CPU offload, with sort seeming to make sense as the first workload to utilize this solution.