zAPI: System z Deployment Into The API Economy

Having been in the IT industry for 35+ years, I have always fully embraced and learned new technologies to find strategic solutions for business challenges.  Obviously, starting in 1980, my heritage is IBM Mainframe, supplemented by UNIX, Wintel and Linux along the way.  Each and every platform has its merits, and during this 35+ year period, I have attended many conferences, for all platforms.  What I have noticed during this period is the attendance of many IBM Mainframe CIOs, CTOs and Chief Architects at non-IBM Mainframe conferences, but very few, if any, of their Distributed Systems counterparts at IBM Mainframe conferences.

I’m always surprised and disappointed to hear about organizations talking about decommissioning the IBM Mainframe platform, for tenuous reasons, based on Distributed Systems FUD messaging, as opposed to their own business requirements.  Thankfully these scenarios have decreased over the years.  Presumably if an organization decides to migrate from one Distributed Systems platform to another or perhaps the Cloud, they do at least attend the relevant platform conferences to make an informed decision.

Over the last 25 years or so, IBM itself has competed across differing divisions and options, whether UNIX (AIX), System z and in recent years, Linux on z Systems, most notably with the LinuxONE launch at LinuxCon 2015.  One would hope that the world’s key IT decision makers might attend LinuxCon with an open mind and learn more about the System z Mainframe.

A ridiculous notion might be that one server platform technology can satisfy a 21st Century organization’s IT infrastructure for its mission critical services.  Clearly that has not been the case since the advent of Client Server, and today’s emerging Digital business requires an infrastructure of multiple layers, where the underlying server technology is somewhat arbitrary, and arguably a commodity resource.  Conversely the underlying data and associated applications differentiate one business from another, delivering business value and competitive edge.

Let’s take some time to consider a typical layered IT architecture design, one which very quickly dismisses any notion that one server technology delivers all business requirements.

Such an architecture does not impose any technology decisions.  Conversely it explores the “data journey” from access or creation, via Systems of Engagement (SoE), to eventual storage within Systems of Record (SoR) data repositories (I.E. Database).  Some might say it was forever thus, with the exception of the Multi-Channel SDKs & APIs layer, where the savvy organizations will embrace DevOps, Hybrid Cloud and connectivity (I.E. API, SDK) solutions, seamlessly integrating modern agile applications with that most valuable business asset, Systems of Record (SoR) data.

Today’s Application Developer doesn’t need to concern themselves with the platform used for their DevOps application processes, the Transaction Server or indeed the Database Server.  Sure, several decades ago, maybe even a decade ago, application code was deeply associated with, if not confined to, a specific CPU server architecture.  Clearly that is no longer the case.  Any organization that still thinks in this legacy manner is unfortunately behind the times.  Associating such outdated thinking with the System z Mainframe is arguably careless, and not a reason for dismissing an incumbent System z platform, or not considering a System z platform in the future.

Arguably the greatest strengths of today’s System z IBM Mainframe, currently packaged as the z13 or LinuxONE, are as a Database Server (E.g. DB2), Transaction Server (E.g. CICS, WebSphere Application Server) and Security Server (E.g. ACF2, RACF, Top Secret).  From a LinuxONE viewpoint, it’s just another server, capable of processing all of the latest strategic Open Source and Commercial Off The Shelf (COTS) Cloud, Database and Application solutions, while benefitting from the unparalleled System z Quality of Service (QoS) attributes.

However, for those organizations already deploying a System z Mainframe, its greatest perceived issue is TCO.  Without doubt, the intricate Workload License Charges (WLC) mechanisms are unnecessarily complicated and perceived as being very expensive.  Optimizing these costs requires a modicum of expertise, safeguarding that the best contractual conditions are negotiated.  However, I encounter the same complexities with Distributed Systems platforms, where software license costs can spiral out of control for significant CPU capacity deployments.  Whatever platform is deployed, System z Mainframe or Distributed System, unless the business has the requisite skills in place, technical and commercial, to safeguard the lowest cost possible, commercial ISV suppliers will take advantage of such an oversight.

I’m not advocating any server technology, System z Mainframe, Distributed System or Cloud, as each resource has its merits, depending on the business requirement.  However, today’s 21st Century organization must enable new business channels by leveraging from, and arguably monetizing, their Systems of Record (SoR) enterprise data.

Today, organizations need to consider an API Economy, where they expose their internal digital business assets or services in the form of Web API services to external 3rd party partners and consumers, with an overall objective of unlocking increased business value via the creation of new assets.  Such an API Economy will require integration of Transaction and Data resources (illustrated by the simple sketch after the following list), specifically:

  • Centrally manage the consumption of enterprise wide business logic, for both Systems of Record (SoR) & Systems of Engagement (SoE)
  • Extend business (E.g. Product, Brand) reach from Systems of Record (SoR), incorporating Systems of Engagement (SoE)
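
By way of a minimal illustration only (Python, using Flask; the customer lookup service and its in-memory data store are hypothetical assumptions), the sketch below shows the basic API Economy pattern of exposing an internal SoR asset as a Web API service for 3rd party consumption:

```python
# Minimal sketch: exposing an internal SoR asset as a Web API service.
# Flask and the in-memory "SoR" store are illustrative assumptions; a
# real deployment would front an enterprise data repository (E.g. DB2).
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Hypothetical Systems of Record (SoR) data, keyed by customer identity
SOR_CUSTOMERS = {
    "C0001": {"name": "Example Ltd", "segment": "Retail", "status": "Active"},
}

@app.route("/api/v1/customers/<customer_id>", methods=["GET"])
def get_customer(customer_id):
    """Expose a single SoR record to authorised 3rd party consumers."""
    record = SOR_CUSTOMERS.get(customer_id)
    if record is None:
        abort(404)
    return jsonify({"customerId": customer_id, **record})

if __name__ == "__main__":
    app.run(port=8080)
```

In practice such an API would sit behind the Multi-Channel SDKs & APIs layer, with authentication, rate limiting and lifecycle management provided by an API management solution.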

Previously I wrote about How to Connect Mobile Workloads to System z, detailing the conceptual steps required to expose existing SoR data assets with SoE transaction services, via z/OS Connect.  For a fully integrated end-to-end solution, we must also consider the Application Programming Interfaces (API), being the digital glue that seamlessly links applications, services and systems together.

IBM API Connect is a solution that manages the API lifecycle for both On-Premises and Cloud environments.  IBM API Connect delivers capabilities to Create, Run, Manage & Secure API resources and Microservices.  It also enables you to rapidly deploy and simplify API administration across the organization.

API Connect can be deployed On-Premises via Linux on z Systems, in the cloud (E.g. Bluemix), or on all other popular Distributed Systems platforms.  Once again, the main message is that the chosen server is arbitrary, System z Mainframe, Distributed System or Cloud.  The server should be considered as a commodity resource; leveraging from existing business logic (I.E. SoE) and data (I.E. SoR), while evolving existing Application Lifecycle Management processes (E.g. Agile, API Economy, DevOps), is the key.

My final observation concerns the Mainframe Baby Boomer (E.g. Born ~1960) versus the Millennial (E.g. Born ~1995) technical personnel resource.  Without doubt, there are significant differences in their approach to application programming, but only one of these resources, namely the Baby Boomer, knows the business really well.  I think these folks have the ability to learn another 21st Century programming language, as well as COBOL, but perhaps their best attribute is an analytical role, especially for the integration of SoE and SoR layers.  Working very closely with Millennial technical resources, delivering the new Application (I.E. App, API) resources, the Mainframe Baby Boomer still has something valuable to offer in their final employment years.  For the avoidance of doubt, they can still deliver value from an analytical viewpoint, while transferring their skills and knowledge to their successors, namely the Millennials.

In conclusion, dismissing any server technology for Fear, Uncertainty or Doubt (FUD) reasons is an unproductive and ridiculous notion.  More importantly, what might your business lose in opportunity, spending several years or more migrating from one platform to another, while your competitors are embracing the Digital Age with an API Economy approach, delivering more value from their existing business SoE (transactions) and SoR (data) assets?

Are You Ready For z Systems Workload Pricing for Cloud (zWPC) for z/OS?

Recently IBM announced the z Systems Workload Pricing for Cloud (zWPC) for z/OS pricing mechanism, which can minimize the impact of new Public Cloud workload transactions on Sub-Capacity license charges.  Such benefits will be delivered where higher Public Cloud workload transaction volumes may cause a spike in machine utilization.  Of course, if this looks familiar and you have that feeling of déjà vu, this is a very similar mechanism to Mobile Workload Pricing (MWP)…

Put simply, zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms, for the usual MLC software suspects, namely z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public Cloud workloads are defined as transactions processed by named Public Cloud applications, identified as originating from a recognized Public Cloud offering, including but not limited to Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, et al.

As per MWP, SCRT calculates the R4HA for Public Cloud transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The software program values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  SCRT therefore determines the billable MSU peak for a given MLC software program on a CPC using the adjusted MSU values.  As per MWP, this will only be of benefit if the Public Cloud originated transactions generate a spike in the current R4HA.
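
To make the arithmetic concrete, here is a minimal sketch (Python; all MSU figures are invented sample data, not SCRT output) of the adjustment just described, subtracting 60% of the Public Cloud MSU values with LPAR granularity, aggregating per reporting hour and deriving the billable peak from the adjusted values:

```python
# Minimal sketch of the zWPC adjustment described above; all figures
# are invented sample data, not SCRT output.
ZWPC_DISCOUNT = 0.60  # 60% of Public Cloud R4HA MSU is subtracted

# Hourly R4HA MSU per LPAR: (eligible MSU, Public Cloud originated MSU)
hourly_lpar_msu = {
    "hour01": {"LPAR1": (500, 100), "LPAR2": (300, 0)},
    "hour02": {"LPAR1": (650, 300), "LPAR2": (310, 20)},
}

adjusted_hourly = {}
for hour, lpars in hourly_lpar_msu.items():
    # Subtract 60% of Public Cloud MSU with LPAR granularity, then
    # aggregate across all Sub-Capacity eligible LPARs for the hour.
    adjusted_hourly[hour] = sum(
        eligible - ZWPC_DISCOUNT * cloud for eligible, cloud in lpars.values()
    )

# The billable MSU peak is determined from the adjusted hourly values.
print(adjusted_hourly)                 # {'hour01': 740.0, 'hour02': 768.0}
print(max(adjusted_hourly.values()))   # 768.0
```

As the sketch implies, the benefit only materializes when Public Cloud originated transactions contribute materially to the R4HA peak.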

One of the major challenges for implementing MWP was identifying those transactions eligible for consideration.  Very quickly IBM identified this challenge and offered a Workload Manager (WLM) based solution, to simplify reporting for all concerned.  This WLM SPE (OA47042) introduced a new transaction level attribute in WLM classification, allowing for identification of mobile transactions and associated processor consumption.  These Reporting Attributes were classified as NONE, MOBILE, CATEGORYA and CATEGORYB.  Obviously IBM made allowances for future workload classifications, hence it would seem Public Cloud will supplement Mobile transactions.
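
As a minimal sketch only (Python; the transaction records are invented, and the mapping of Public Cloud to a CATEGORYA attribute is speculation, per the paragraph above), aggregating processor consumption by WLM Reporting Attribute might look like this:

```python
# Minimal sketch: aggregating GP CPU consumption by the WLM Reporting
# Attribute introduced with OA47042.  The transaction records are
# invented; whether Public Cloud maps to CATEGORYA is speculative.
from collections import defaultdict

transactions = [
    {"id": "T0001", "attribute": "MOBILE", "msu": 0.4},
    {"id": "T0002", "attribute": "NONE", "msu": 1.1},
    {"id": "T0003", "attribute": "CATEGORYA", "msu": 0.7},
    {"id": "T0004", "attribute": "MOBILE", "msu": 0.2},
]

usage_by_attribute = defaultdict(float)
for txn in transactions:
    usage_by_attribute[txn["attribute"]] += txn["msu"]

# E.g. {'MOBILE': 0.6..., 'NONE': 1.1, 'CATEGORYA': 0.7}
print(dict(usage_by_attribute))
```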

In a previous z/OS Workload Manager (WLM): Balancing Cost & Performance blog post, we considered the merits of WLM for optimizing z/OS software costs, while maintaining optimal performance.  One must draw one’s own conclusions, but there seemed to be a strong case for WLM reporting to be included in the z/OS MLC Cost Manager toolkit.  The introduction of zWPC, being analogous to MWP, where reporting can be simplified with supplied and supported WLM function, indicates that intelligent and proactive WLM reporting makes sense.  Certainly for 3rd party Soft-Capping solutions, the ability to identify MWP and zWPC eligible transactions in real time, proactively implementing MSU optimization activities, seems mandatory.

The Workload X-Ray (WLXR) solution from zIT Consulting delivers this WLM reporting function, seamlessly integrating with their zDynaCap and zPrice Manager MSU optimization solutions.  Of course, there is always the possibility to create your own bespoke reports to extract the relevant information from SMF records and subsystem diagnostic data, for input to the SCRT process.  However, such a home-grown process will only work on a monthly reporting basis and not integrate with any Soft-Capping MSU management, which will ultimately control z/OS MLC costs.

In conclusion, from a big picture viewpoint, in the last 2 years or so, IBM have introduced several new Sub-Capacity pricing mechanisms to help System z Mainframe users optimize z/OS MLC costs, namely Mobile Workload Pricing (MWP), Country Multiplex Pricing (CMP) and now z Systems Workload Pricing for Cloud (zWPC).  In theory, at least one of these new pricing mechanisms should deliver benefit to the committed System z user, deploying this server for strategic and Mission Critical workloads.  With the undoubted strategic importance associated with Analytics, Blockchain, Cloud, DevOps, Mobile, Social, et al, the landscape for System z workloads is rapidly evolving and potentially impacting those sacrosanct legacy Mission Critical workloads.  Seemingly the realm of possibility exists that Cloud and Mobile originated transactions will dominate access to System z Mainframe System Of Record (SOR) data repositories, which generates a requirement to optimize associated MLC costs accordingly.  Of course, for some System z users, such Cloud and Mobile access might not be on today’s to-do list, but inevitably it’s on the horizon, and so why not implement the instrumentation ability ASAP!

All Flash & Substance – Is The System z Microsecond The New Millisecond?

Is 2016 the year of the All Flash disk array?  Seemingly from a System z perspective, 2016 has seen improvement in the All Flash disk array offerings from the major disk suppliers, namely EMC, HDS and IBM.  From a usability perspective, managing latency might be the overall challenge, where these ultra-fast SSD systems are delivering I/O performance response times measured in the ~250-500 Microsecond (μs) range, potentially consigning the traditional Millisecond (ms) measurement to history!

Whatever the speeds and feeds might be, as of 2016, the benchmark for a System z All Flash disk array is seemingly measured at a <500 Microsecond (μs) response time, supporting ~n PB of capacity and delivering ~nnn GB/s throughput for mixed read/write workloads.  Of course, strong encryption, typically full disk Data @ Rest Encryption (D@RE) based, and full seamless data replication interoperability are also mandatory.

Historically we evolved from Data Processing to Information Technology, not only automating the capture and processing of data, but gradually evolving our processes, using this data for business advantage.  In recent years, the information explosion has dictated that each and every business must be a cognitive business, using intelligent analytics to gain insight and faster decision-making from the business data collected.

Currently the Internet of Things (IoT) supplements the medium-term Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative, being the processes and associated solutions required by cognitive businesses to make timely and informed decisions, capturing deeper customer insight, ultimately delivering competitive advantage.  Therefore the 21st century business generates a significant requirement for storage capacity and performance to fully realize the benefits of this truly business aligned cognitive approach.

The largest global organizations from all verticals leverage from the power and true 24*7*365 availability and reliability of the System z Mainframe to power enormous relational databases, processing millions of customer transactions on an hourly basis.  These always-on, mission critical business environments demand the performance, reliability, TCO and System z platform integration delivered by the associated DASD (3390) subsystem.

Each and every System z user will have their IHV of choice for delivering disk storage, in alphabetical order, EMC (E.g. VMAX AFA/All Flash Array), HDS (HAF/Hitachi Accelerated Flash) or IBM (E.g. DS8888).  The choice of disk storage was forever thus, reviewing the market place and choosing the best option for your business.  What might require reflection is how the DASD I/O subsystem is managed and the associated interaction with said IHV supplier.  Systems Management solutions such as Easy Disk Analyze Mainframe (EADM) and IntelliMagic Vision (Disk Magic) will certainly simplify the analysis and presentation of DASD subsystem performance data.

However the emphasis of the actual System z DASD subsystem for an All Flash array might move from being an internal consideration, to a direct and timely communication with the IHV supplier.  Put very simply, in an environment where Mission Critical systems rely upon ultra-fast processing of massive amounts of data, any flash memory issues, whether capacity or defect related, will need IHV interaction ASAP, arguably “Before The Event”.  As with the System z Server itself, where we’re used to On/Off Capacity on Demand (OOCoD) processes, maybe we need to consider a similar approach with our All Flash System z DASD arrays.  For the avoidance of doubt, as opposed to waiting for an issue to impact our business, maybe we need to work smarter with our IHV, to safeguard that sufficient flash memory is available, to proactively resolve capacity or defect issues…

Aligning this with our traditional 3390 DASD I/O subsystem analysis, which might have been on a daily basis, from the rich RMF/CMF data resource, we must fully automate this process to minimize or eliminate the Mean Time To Resolution (MTTR)!  The ultimate benefit will be the delivery of meaningful messages that incorporate our 3rd party IHV supplier, who, potentially with Remote Support Facility (RSF) type processes, deploys the “Golden Screwdriver” to seamlessly safeguard the performance profiles of our Mission Critical business applications, leveraging from the latest All Flash disk array.

In conclusion, as always, technology can deliver business benefits, with substance, and this includes All Flash disk arrays.  As always, what might need to evolve are the associated Systems Management processes.  Therefore asking yet another potential rhetorical question, what is more important, the System z Server or timely data access?  The diplomatic answer is that they’re equally important and if so, let’s safeguard the availability of All Flash memory for our DASD subsystems, with the requisite levels of meaningful proactive reporting and IHV supplier interaction.

Blockchain: A New Application Development Paradigm – What About System z?

Since the inception of Data Processing and the advent of the IBM Mainframe there has been a progressive movement to deliver the de facto “System Of Record (SOR)”, typically classified as a centralised database and related applications.  The key or common denominator for this “Golden Record” is somewhat arbitrary, but more often than not, for most businesses, it will be customer or product identity related.  The benefit of identifying and establishing an SOR is the reuse of this data, for a multitude of different business usage scenarios.

From an application programming viewpoint, historically there was a structured approach when delivering new business function, whether with bespoke programs or Commercial Off the Shelf (COTS) software packages.  More recently data analytics has accelerated this approach, where new business opportunities can be identified from data trends, with near real-time processing, while DevOps frameworks allow for rapid application delivery and implementation.  However, what if there was a new approach with a different type of database and as a consequence, a new approach to application programming?

From a simplistic viewpoint, Blockchain architecture is analogous to traditional database processing, whereas the interaction with said Blockchain database is vastly different, changing from a centralised to decentralised focus.  Therefore for application developers, Blockchain is a paradigm shifting architecture, in how software applications will be architected and coded.  Recognition of this new and rapidly emerging computing paradigm is of vital importance, because it’s the cornerstone for the creation of decentralised applications, a logical and natural evolution from distributed computing architectural constructs.

If we take some time to step back from the Information Technology world and consider the possibilities when comparing a centralised versus decentralised approach, the realm of possibility exists for a truly global interconnectivity approach, which isn’t limited to a specific discrete focus (E.g. Governance, Market, Business Sector, et al).  In theory, decentralised applications might deliver a dynamic and highly collaborative business approach…

A Blockchain is a pseudo linear container space (block) to store data for “controlled public usage”.  In theory, with the right credentials, this data can be accessed by any user!  The Blockchain container is secured with the originator’s key, so only the key holder or an authorised program can unlock the container data.  This is the fundamental difference between a database and a Blockchain.  For a Blockchain, the header record can be considered “eligible for Public usage”.

The data stored within a Blockchain might be considered as a “token”, the most obvious implementation being Bitcoin.  Generically, Blockchain might be considered as an alternative and flexible data transfer system that no private or public authority and especially a malicious third party can tamper with, because of the encryption process.  Put really simply, the data header has “Public” visibility, but data access requires “Private” authenticated access.

From a high-level viewpoint, Blockchain can be considered as an architectural approach, connecting an infinite number of peer computers, collaborating with a generic process for releasing or recording data, based upon cryptographic transactions.

One must draw one’s own conclusions as to whether this Centralised to Distributed to Decentralised data and application programming approach is the way forward for their business.

Decentralised Consensus is the inverse of a centralised approach, where one central database was accessed to validate transaction processing.  A decentralised scheme transfers authority and trust to a decentralised virtual network, enabling processing nodes to continuously access or record transactions within a public block, creating a unique chain for modification operations, hence the Blockchain terminology.  Each successive data block contains a unique fingerprint (hash) of the previous block.  The basic premise of cryptographic processing applies, where hash codes are used to secure transaction origination authentication, eliminating the requirement for centralised processing.  Duplicate transaction processing is eliminated because of Blockchain and associated cryptographic processing.
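
This chaining principle can be shown with a minimal sketch (Python; purely illustrative, not any specific Blockchain implementation), where each block embeds a fingerprint of its predecessor, so tampering with earlier data invalidates every subsequent link:

```python
# Minimal sketch of Blockchain chaining: each block embeds a hash
# (fingerprint) of the previous block, so modifying earlier data
# breaks every subsequent link.  Purely illustrative.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 fingerprint of a block's contents."""
    canonical = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def add_block(chain, data):
    previous = chain[-1] if chain else None
    chain.append({
        "index": len(chain),
        "data": data,
        "previous_hash": block_hash(previous) if previous else "0" * 64,
    })

def chain_is_valid(chain):
    """Re-derive each fingerprint; any tampering surfaces as a mismatch."""
    return all(
        chain[i]["previous_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, {"token": 10, "from": "A", "to": "B"})
add_block(chain, {"token": 4, "from": "B", "to": "C"})
print(chain_is_valid(chain))        # True
chain[0]["data"]["token"] = 999     # tamper with an earlier block
print(chain_is_valid(chain))        # False
```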

This separation of consensus (data access) from the actual application itself is the fundamental building block for a decentralised application programming approach.

Smart Contracts are the building blocks for decentralised applications.  A smart contract is a small self-contained program that you entrust with a value unit (token) and associated rules.  The simple philosophy of a smart contract is to programmatically facilitate transactional contractual governance between two or more parties via the Blockchain.  This eliminates the requirement for an arbitrary 3rd party authority for governance, when two or more parties can agree an exchange between themselves.  Even today, this type of approach is not unusual between organizations, typically based upon a data (file) interchange standard (E.g. Banking).

Put simply, smart contracts eliminate the requirement for 3rd party intermediaries in transaction processing.  Ideally, the collaborating parties define and agree the required policy, embedded inside the business transaction, enabling a self-managed process between nodes (computers) that represent the reciprocal interests of the associated users and owners.
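
The essence of a smart contract can be sketched as follows (Python; the simple two-party approval rule is a hypothetical example of an agreed policy, not a real smart contract language): a small self-contained program entrusted with a value unit and rules, settling without a 3rd party intermediary:

```python
# Minimal sketch of a smart contract: a self-contained program holding
# a value unit (token) plus rules agreed by the parties.  The two-party
# approval rule shown is a hypothetical policy.
class SmartContract:
    def __init__(self, token_value, required_parties):
        self.token_value = token_value
        self.required_parties = set(required_parties)
        self.approvals = set()
        self.settled = False

    def approve(self, party):
        """Each collaborating party signals agreement to the exchange."""
        if party in self.required_parties:
            self.approvals.add(party)

    def settle(self):
        """Release the token only when the embedded policy is satisfied;
        no 3rd party intermediary is required."""
        if self.approvals == self.required_parties:
            self.settled = True
        return self.settled

contract = SmartContract(token_value=100, required_parties={"buyer", "seller"})
contract.approve("buyer")
print(contract.settle())   # False: policy not yet satisfied
contract.approve("seller")
print(contract.settle())   # True: token released between the parties
```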

Trusted Computing combines the architectural foundations of Blockchain, decentralised consensus and smart contracts, enabling the spread of resources and transactions with a trusted “peer-to-peer” relationship, in theory enabling trust between numerous nodes (computers).

Previously, institutions and central organizations were necessary as trusted authorities.  Deploying a Blockchain approach, these historically centralised functions can be simplified via smart contracts, governed by decentralised consensus within a Blockchain.

Proof of Work is an important concept for unequivocally authenticating transactions, allowing authorised access to participate in the Blockchain system.  Proof of work is a fundamental building block because once created, it cannot be modified, being secured by cryptographic hashes that ensure its authenticity.  Usability challenges ensue, as users are prevented from changing Blockchain records without reprocessing the “proof of work”.
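
The classic proof of work mechanism can be sketched as a brute-force nonce search (Python; the SHA-256 leading-zeros difficulty target is an arbitrary illustrative choice), which also demonstrates why changing a record means repeating all of the work:

```python
# Minimal proof of work sketch: find a nonce such that the block hash
# meets an arbitrary difficulty target.  Verification is cheap, but the
# search is deliberately expensive, which is what secures the record.
import hashlib

def proof_of_work(block_data, difficulty=4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            # Any later change to block_data invalidates this result,
            # forcing the whole search to be reprocessed.
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("token transfer: A->B 10")
print(nonce, digest)
```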

It therefore follows that proof of work will be expensive to maintain, with likely future scalability and security issues, depending on the data user (miner) requirements and incentives, which in all likelihood will reduce over time.  As we all know, most data access is high when data has been recently created, rapidly decreasing to low or even null after a limited period of time.

Proof of Stake is a more elegant alternative, determining which user can update the consensus while preventing unwanted forking of the underlying Blockchain; it is a more cost efficient approach, while being more difficult and expensive to compromise.

Once again, if we consider the benefits of Blockchain from a business processing viewpoint, there is a clear and present opportunity to eliminate manual or semi-automated processes, both internal and external to the business.  This could expedite the completion of processes that previously required days or even weeks, while reducing the potential for human error.  A simple example might be a car purchase, based upon 3rd party finance.  Such a process typically includes 3rd party data requirements, for vehicle provenance, credit scoring, identity proof, et al.  If the business world looks at the big picture, they can simplify and automate their processes, by collaborating with existing and more likely, yet to be identified partners.  The benefits are patently obvious…

From a System z viewpoint, recent technological developments leverage from existing IBM resources, including the LinuxONE, Bluemix and Watson offerings:

  • LinuxONE: The System z and LinuxONE platforms are best placed to drive Blockchain innovation, arguably via the Open Mainframe Project and Hyperledger Project.  IBM supports testing and development of the open Blockchain fabric code for developers on their LinuxONE Community Cloud.
  • Bluemix: Via the IBM Blockchain services available on Bluemix, developers can access fully integrated DevOps tools for creating, deploying, running and monitoring Blockchain applications on the IBM Cloud.
  • Watson: Leveraging from the Watson IoT Platform, IBM will enable information from devices such as RFID-based locations, barcode-scan events or device-reported data, to be used within the IBM Blockchain. Devices will be able to communicate to Blockchain based ledgers to update or validate smart contracts.

From a business benefits viewpoint, the IBM System z platform is ideally placed for Blockchain deployment, being a highly secure EAL5+ certified platform.  Hardware accelerators deliver high speed secure encryption and hashing, supplemented by tamper-proof security Crypto Express modules for key management.  Numerous memory resident partitions can also be created rapidly to keep ledgers separate and secure.  As per usual, the System z platform has the fastest commercial processor, a highly scalable I/O system to handle massive numbers of transactions, ample memory for Blockchain operations and an optimised secure network for Blockchain peer communications.

Returning full circle to where this article started, the System z Mainframe is arguably the de facto System Of Record platform for the world’s traditional Fortune 500 or Global 2000 businesses.  These well established businesses have in all likelihood spent several decades or more establishing this centralised application programming and database usage model.  The realm of opportunity exists to make this priceless data asset available to numerous businesses, both large and small, via Blockchain architectures.  If we consider just one simple example, a highly globalised and significant Banking institution could facilitate the creation of a new specialised and optimised “challenger banking” operation, for a particular location or business sector, leveraging from their own internal System Of Record data and perhaps, vital data from another source.  One could have the hypothetical debate as to whether a well-established bank is best placed for such a new offering, but with intelligent collaboration, delivering a valuable service to a new market, where such a service has not been previously possible, doesn’t everybody win?

Perhaps with Blockchain, truly open and collaborative cooperation is possible, both from a business and technology viewpoint.  For example, why wouldn’t one of the new Fortune 500 companies, such as a Social Media company with billions of users, look to a traditional Fortune 500 company deploying an IBM System z Mainframe, to expand their revenue portfolio from being advertising driven, to include service provision, whatever that might be?  Rightly or wrongly, if such a Social Media company is a user’s preferred portal for accessing a plethora of other company resources (E.g. Facebook Login), why wouldn’t this user want to fully process some other business transaction (E.g. Financial) via said platform?  However unlikely, maybe Blockchain can truly simplify and expedite Globalisation, for the benefit of users and businesses alike…

The IBM Mainframe: A Several Year Hardware Refresh Cycle?

Typically a new generation of IBM Mainframe server is released every three years or so, along with a number of function and performance upgrades.  In 2003, IBM released their Mainframe Charter that included a statement:

IBM lowered MSU values incorporated in the z990 microcode by approximately 10 percent, resulting in IBM software savings for IBM zSeries software products with MSU-based pricing.  These reduced MSUs do not indicate a change in machine performance. Superior performance and technology within the z990 has allowed IBM to provide improved software prices for key IBM zSeries operating system and middleware software products.

This terminology was named by some as the “Technology Dividend”, where put simply, when upgrading IBM Mainframe servers, users would benefit from a ~10%+ software price versus performance benefit.  However, the z10 server model was the last IBM Mainframe series that benefitted from this hardware CPU chip related performance gain.  Subsequent IBM Mainframe models have compensated for this slowing of hardware performance increases via the AWLC and AEWLC pricing models.  Therefore, unless your business has an absolute need for the “latest and greatest” IBM Mainframe server hardware, the realm of possibility exists that your business can extend the useful and cost efficient lifetime of your IBM Mainframe asset beyond the typical three year period…

As we all know, with every IT platform, there is a strong correlation between server hardware and associated Operating System.  Arguably the IBM Mainframe server has the best compatibility attribute, where there are many server hardware and Operating System interoperability scenarios.  A recent Statement Of Direction (SOD) for z/OS states:

Going forward, IBM intends to make new z/OS and z/OSMF releases available approximately every two years. Such a schedule would be intended to provide you with sufficient time to plan for new releases and to leverage them for the most business value. In addition, beginning with z/OS Version 2, IBM plans to provide five years of z/OS support, with three years of optional, fee-based extended service (5+3) as part of the new release cadence. Beginning with z/OSMF Version 2, IBM also plans to provide five years of z/OSMF support. However, similar to z/OSMF Version 1, optional extended service is not planned to be available for z/OSMF Version 2.

In addition, in z/OS V2.1, IBM plans to further leverage enhancements in the current IBM mainframe servers and storage control units. z/OS V2.1 is planned to IPL only on System z9 and later servers. Also, z/OS Version 2 is planned to require 3990 Model 3 (3990-3), 3990 Model 6 (3990-6), and later storage control units.

In an attempt to simplify this scenario, in theory an IBM Mainframe customer could benefit from 5 years of z/OS Version 2 support, with an IBM z9 or newer server.  In addition, this support could be extended for a further 3 years, for an extended service fee.  Therefore, from a software support perspective, there are no tangible cost considerations for extending the asset life of an IBM Mainframe from a 3 to 5 year cycle.

We must then consider the End of Marketing (EOM), also known as Withdrawal From Marketing (WDFM), and End Of Service (EOS) life cycles for the IBM Mainframe Server (Hardware).  Once again, when compared to other non-Mainframe platforms, the IBM Mainframe Server demonstrates an arguably unparalleled support cycle, where over the last 20 years or more, an average of 4.2 years of sales and service has applied, supplemented by an average of 7.1 years of additional service.  Once again, as per z/OS Operating System support, the realm of possibility exists for extending the typical 3 year hardware refresh cycle to 5 years or longer.

When considering IBM Mainframe server hardware provision and support, there is one subtle difference that is not necessarily obvious, especially for those organizations that refresh their IBM Mainframe server every 3 years or so.  Clearly, and stating the obvious, only IBM or a highly certified IBM System z Business Partner can supply a latest generation IBM System z server or field upgrade option.  Conversely, there are a higher number of certified organizations that can provide IBM Mainframe hardware support services, allowing for a competitive and healthy 3rd party market for these services.  Additionally these companies also maintain inventories of equipment and have access to Microcode and Firmware upgrades, offering the possibility of performing field upgrades of EOM/WDFM servers.  One such company with longevity and a good track record of providing these value-added IBM Mainframe services from The United Kingdom is Blue Chip Customer Engineering.  As per any other competitive market place, arguably each and every IBM Mainframe user might consider obtaining a comparative hardware support services quotation for their business, whether they’re using the current latest and greatest IBM System z server model, or a slightly older (E.g. 4-8+ Years) model.

In conclusion, there are always options for the cost savvy business to reduce costs.  In the IBM Mainframe environment, soft capping via standard IBM Defined Capacity (DC) or Group Capacity Limit (GCL) function is an option, intelligent soft capping via a 3rd party product such as zDynaCap might be an option, or leveraging from the latest Absolute Capping IBM feature also applies.  Moreover, exploring the 3rd party hardware support services market might prove to be a very simple and commercial exercise that could decrease IBM Mainframe TCO, while extending asset life accordingly.

The IBM Mainframe: A Closed & Difficult To Commission Platform?

All too often in life we are led to believe that some things are just too difficult to achieve.  Sometimes we believe them, but hopefully, more often than not, the human spirit wins and we try to achieve the allegedly impossible.  Some might call it reverse psychology; I learned this myself as a 17 year old, suffering a serious leg injury after being knocked off my motorcycle, where the surgeon said “you will never walk without a limp or play sport again”.  Luckily, I thought differently and worked very hard to prove the surgeon wrong.  The surgeon wasn’t wrong; he was just far more experienced than myself and this was his way of telling me to do the hard work…

In the last several decades or so, there have been many instances of people stating that the IBM Mainframe is just too difficult to operate, too expensive to even consider and in general, just the preserve of an aging workforce, which will inevitably become extinct, just like the dinosaurs.  Of course, such a viewpoint isn’t necessarily balanced and pragmatic, and the IBM Mainframe community, suppliers and customers alike, have safeguarded the longevity and strategic importance of this platform.

Having worked with the IBM Mainframe platform for ~35 years, one of the most inspiring and can-do scenarios I have encountered was articulated at the recent SHARE Winter 2016 conference.  In a session named I Just Bought an IBM z890 – Now What?, Connor Krukosky, a student from Cecil Community College, articulated how he commissioned a used z890 in his home environment for $340.60!  The several hundred dollars cost is impressive, but the most impressive aspect of this story is the can-do attitude of Connor and the community spirit of those who assisted him.

In a timeframe where very young students can learn programming with low cost platforms such as the Raspberry Pi and The BBC micro:bit, isn’t it great that we can see a young adult student find a seemingly obsolete Mainframe platform via an on-line auction site and then find a way of commissioning that platform once again?

As always, where there is a will there is a way, and if you look closely enough at this scenario, even if you don’t know anything about the IBM Mainframe platform, you might just learn that even an IBM Mainframe first released in 2004 can be considered as an “open system”, and with a “can-do attitude”, can be implemented with little or no experience.

As for Connor Krukosky, good luck young man and great job!  I hope you find a great job in your chosen field and if it’s working with the System z platform, we welcome you to our open and proud community.

z/OS Workload Manager (WLM): Balancing Cost & Performance

A sophisticated mechanism is required to orchestrate the allocation of System z resources (E.g. CPU, Memory, I/O) to multiple z/OS workloads, requiring differing business processing priorities. Put very simply, a mechanism is required to translate business processing requirements (I.E. SLA) into an automated and equitable z/OS performance manager. Such a mechanism will safeguard the highest possible throughput, while delivering the best possible system responsiveness. Ideally, such a mechanism will assist in delivering this optimal performance, for the lowest cost; for z/OS, primarily Workload License Charges (WLC) related. Of course, the Workload Management (WLM) z/OS Operating System component delivers this functionality.

A rhetorical question for all z/OS Performance Managers and z/OS MLC Cost Managers would be “how much importance does your organization place on WLM and how proactively do you manage this seemingly pivotal z/OS component”? In essence, this seems like a ridiculous question, yet there is evidence that suggests many organizations, both customer and ISV alike, don’t necessarily consider WLM to be a fundamental or high priority performance management discipline. Let’s consider several reasons why WLM is a fundamental component in balancing cost and performance for each and every z/OS environment:

  • CPU (MSU) Resource Capping: Whatever the capping method (I.E. Absolute, Hard, Soft), WLM is a controlling mechanism, typically in conjunction with PR/SM, determining when capping is initiated, how it is managed and when it is terminated. Therefore from a dispassionate viewpoint, any 3rd party ISV product that performs MSU optimization via soft capping mechanisms should ideally consider the same CPU (E.g. SMF Type 70, 72, 99) instrumentation data as WLM. Some solutions don’t offer this granularity (E.g. AutoSoftCapping, iCap).
  • MLC R4HA Cost Management: WLM is the fundamental mechanism for controlling this #1 System z software TCO component; namely, WLM collects CPU MSU resource usage every 5 minutes, with the latest 48 consecutive metrics constituting the Rolling 4 Hour Average (R4HA); see the sketch after this list. In an ideal world, an optimally managed workload that generates a “valid monthly peak”, will fully utilize this “already paid for” available CPU MSU resource for the remainder of the MLC eligible month (I.E. Start of the 2nd day in a calendar month, to the end of the 1st day in the next calendar month). More recently, Country Multiplex Pricing (CMP) allows an organization to move workloads between System z server (I.E. CPC) structures, without cost consideration for cumulative R4HA peaks. Similarly, Mobile Workload Pricing (MWP) reporting will be simplified with WLM service definitions in z/OS 2.2. Therefore it seems prudent that real-time WLM management, both in terms of real-time reporting and proactive decision making, makes sense.
  • System z Server CPU Management: As System z server CPU chips evolve (E.g. CPU Chip Cache Hierarchy and Relative Nest Intensity), there are complementary changes to the z/OS Operating System management components. For example, HiperDispatch Mode delivers CPU resource usage benefit, considering CPU chip cache resources, intelligently allocating workload to as few logical processors as possible. It therefore follows that prioritization of workloads via WLM policy definitions becomes increasingly important. In this instance one might consider that CPU MF (SMF Type 113) and WLM Topology (SMF Type 99) are complementary reporting techniques for System z server design and management.
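
As referenced in the list above, a minimal sketch of the R4HA arithmetic follows (Python; the 5 minute MSU samples are invented): a rolling average over the latest 48 five-minute CPU MSU measurements, i.e. 4 hours.

```python
# Minimal sketch of the Rolling 4 Hour Average (R4HA): a rolling mean
# over the latest 48 five-minute CPU MSU measurements (4 hours).  The
# MSU samples are invented for illustration.
from collections import deque

R4HA_SAMPLES = 48  # 48 x 5 minutes = 4 hours

def r4ha_series(msu_samples):
    window = deque(maxlen=R4HA_SAMPLES)
    series = []
    for msu in msu_samples:
        window.append(msu)
        series.append(sum(window) / len(window))
    return series

# Invented 5 minute MSU values: steady state, a workload spike, decay
samples = [400] * 48 + [900] * 12 + [450] * 24
print(max(r4ha_series(samples)))  # the peak from which MLC billing derives
```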

Since its announcement in September 1994 (I.E. MVS/ESA Version 5), WLM has evolved to become a fully-rounded and highly capable z/OS System Resources Manager (SRM), simply translating business prioritization policies into dynamic function, optimizing System z CPU, Memory and I/O resources. More recently, WLM continues to simplify the management of CPU chip cache hierarchy resources, while reporting abilities gain in strength, with topology reporting and the promise of simplified MWP reporting. Moreover, WLM resource management becomes more granular and seemingly the realm of possibility exists to “micro manage” System z performance, as and if required. Conversely, WLM provides the opportunity to simplify System z performance management, with intelligent workload differentiation (I.E. Subsystem Enclave, Batch, JES, USS, et al).

Quite simply, IBM are providing the instrumentation and tools for the 21st Century System z Performance and Software Cost Subject Matter Expert (SME) to deliver optimal performance for minimal cost. However, it is incumbent upon each and every System z user to optimize software TCO, proactively implementing new processes and leveraging from System z functions accordingly.

Returning to that earlier rhetorical question about the importance of WLM; seemingly its importance is without doubt, primarily because of its instrumentation and management abilities of increasingly cache rich System z CPU chips and its fundamental role in controlling CPU MSU resource, vis-à-vis the R4HA.

Although IBM will provide the System z user with function to optimize system performance and cost, for obvious commercial reasons IBM will not reduce the base cost of System z MLC software. However, recent MLC pricing announcements, namely Country Multiplex Pricing (CMP), Mobile Workload Pricing (MWP) and Collocated Application Pricing (zCAP) provide tangible options to reduce System z MLC TCO. Therefore the System z user might need to consider how they can access real-time WLM performance metrics, intelligently combining this instrumentation data with function to intelligently optimize CPU MSU resource, managing the R4HA accordingly.

Workload X-Ray (WLXR) from zIT Consulting simplifies WLM performance reporting, enabling users to drill down into the root cause of performance variances in a very fast and easy way. WLXR assists in root cause problem determination by zooming in, starting from a high level overview, going right down to detailed Service Class performance information, such as the Performance Index (PI), showing potential bottleneck situations during peak time. Any system overhead considerations are limited, as WLXR delivers meaningful real time information on a “need to know” basis.

A fundamental design objective for WLXR is data reduction, only delivering the important information required for timely and professional workload management. Straight to the point information instead of data overload, sometimes from a plethora of data sources (E.g. SMF, System Monitors, et al). WLXR incorporates the following easy-to-use functions:

  • Simplified Data Collection & Storage: Minimal system overhead TCP/IP based agents periodically (E.g. 5, 15, 60 Minutes) collect CPU (Type 70) and WLM (Type 72) data. Performance data is stored centrally in near real-time, building a historical repository with intelligent analytics for meaningful information presentation.
  • Intelligent GUI Based Information Presentation: Meaningful decision based reports and graphs detailing CPU (E.g. MSU, R4HA, Weight) and WLM (E.g. Service Class, Performance Index, Response Time, Transaction Workflow) resource usage. A drill-down design provides a granularity of data presentation, for Management Summary to 3rd Level Technical Diagnostics use.
  • Corporate Identity Branding: A modular template design, allowing for easy corporate identity branding, with flexibility to easily add additional reports, as and if required.

Without doubt, WLM is a significant z/OS System Resources Management function, simplifying the translation of business workload requirements (I.E. Service Level Agreement) into timely and proactive allocation of major System z hardware resources (I.E. CPU, Memory, I/O). This management of System z resources has been forever thus for 20+ years, while WLM has always offered “software cost control” functionality, working with the various and evolving CPU capping techniques. What might not be so obvious, is that there is a WLM orientated price versus performance correlation, which has become more evident in the last 5 years or so. Whether Absolute Capping, HiperDispatch, Mobile Workload Pricing, Country Multiplex Pricing or evolving Soft Capping techniques, the need for System z users to integrate z/OS MLC pricing considerations alongside WLM performance based management is evident.

Historically there was not a clear and identified need for a z/OS Performance/Capacity Manager to consider MLC costs in their System z server designs. However, there is a clear and present danger that this historic modus operandi continues and there will only be one financial winner, namely IBM, with unnecessarily high MLC charges. Each and every System z user, whether large or small, can safeguard the longevity of their IBM Mainframe platform by recognizing and deploying proactive and current System z MLC cost management processes.

All too often it seems that capping can be envisaged as punitive, degrading system performance to reduce System z MLC costs. Such a notion needs to be consigned to history, with a focussed perspective on MSU optimization, where the valuable and granular MSU resource is allocated to the workload that requires such CPU resource, with near real-time performance profiling. If we perceive MSU optimization to be R4HA based and that IBM are increasing WLM function to assist this objective, CPU capping can be a benefit that does not adversely impact performance. As previously stated, once a valid R4HA peak has occurred, that high MSU watermark is available for the remainder of the MLC billing period. Similarly at a more granular level, once a workload has peaked and its MSU usage declines, the available MSU can be redirected to other workloads. With the introduction of Country Multiplex Pricing, System z users no longer need to concern themselves about creating a higher R4HA peak, when moving workloads between System z servers.

Quite simply, from the two most important perspectives, performance and cost optimization, WLM provides the majority of functionality to assist System z users get the best performance for the lowest cost. Analytics based products like Workload X-Ray (WLXR) assist this endeavour, analysing WLM data in near real-time from a performance and MLC cost perspective. It therefore follows that if this important information is also available for sophisticated MSU optimization solutions, which consider WLM performance (E.g. zDynaCap, zPrice Manager), then proactive performance and cost management follows. It’s hard to envisage how a fully-rounded MSU optimization decision can be implemented in near real-time, from an MSU optimization solution that does not consider WLM performance metrics…

IMS: The First Commercial Database Management Subsystem

If we could put a man on the Moon, could we also create a computer program to track the millions of rocket parts it takes? In 1966, the National Aeronautics and Space Administration (NASA) contractor North American Aviation (AKA Rockwell International) asked IBM that question. In response, IBM launched the world’s first commercial database management system in 1968, called the Information Control System and Data Language/Interface (ICS/DL/I). In 1969, it was renamed Information Management System (IMS).

The IMS architecture has always comprised two functions. Firstly, a database system supporting a hierarchical, tree-like data model (AKA IMS/DB). Secondly, transaction processing software for handling complex, high-volume transactions, such as order entry, inventory management, payroll and claims processing, airline or hotel reservations, financial applications, and other transaction-oriented applications (AKA IMS/DC or IMS TM).
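
As a minimal illustration of this hierarchical data model (Python; the CUSTOMER/ORDER/ORDERITEM segment names are hypothetical, not a real IMS database definition), each child segment has exactly one parent and the database is navigated top-down from the root segment:

```python
# Minimal sketch of a hierarchical (tree-like) data model, as used by
# IMS/DB: each child segment has exactly one parent segment.  The
# segment names and keys are hypothetical.
customer_root = {
    "segment": "CUSTOMER", "key": "C0001",
    "children": [
        {"segment": "ORDER", "key": "O1001",
         "children": [
             {"segment": "ORDERITEM", "key": "I01", "children": []},
             {"segment": "ORDERITEM", "key": "I02", "children": []},
         ]},
    ],
}

def traverse(segment, depth=0):
    """Top-down, left-to-right traversal, analogous to navigating a
    hierarchical database from its root segment."""
    print("  " * depth + f"{segment['segment']}({segment['key']})")
    for child in segment["children"]:
        traverse(child, depth + 1)

traverse(customer_root)
```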

A unique feature of IMS is its queued system architecture, being a process that receives all transactions as they arrive and holds them until they can be processed. This allows for intelligent and commercial application processing; for example, when an airline agent enters a transaction, the automated transaction manager takes care of updating IMS, so another ticket agent doesn’t sell the same seat.
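
A minimal sketch of this queued principle (Python; the airline seat inventory is invented) shows how serial processing of queued transactions prevents the double-sell scenario described above:

```python
# Minimal sketch of a queued transaction architecture: all arriving
# transactions are held on a queue and applied serially, so two agents
# cannot both sell the same seat.  The seat inventory is invented.
from queue import Queue

seats = {"12A": None, "12B": None}   # seat -> ticket holder
inbound = Queue()

# Two agents submit competing transactions for the same seat
inbound.put(("agent1", "12A", "Passenger X"))
inbound.put(("agent2", "12A", "Passenger Y"))

while not inbound.empty():
    agent, seat, passenger = inbound.get()
    if seats[seat] is None:          # serial processing: no double-sell
        seats[seat] = passenger
        print(f"{agent}: sold seat {seat} to {passenger}")
    else:
        print(f"{agent}: seat {seat} already sold")
```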

Some might say that the “business world relies on IMS” as 75+% of top Fortune 1000 companies use IMS to process more than 50 billion transactions a day, managing 15+ Million Gigabytes of mission critical business data.

From my own viewpoint, I have always enjoyed working with IMS and its arguably trail blazing functions, including but not limited to; Checkpoint Restart, Fast Path, Write Ahead Data Set (WADS), Batch Message Processing (BMP), Database Recovery Control (DBRC), et al. Whether System z or Distributed Platform product solutions or not, IMS has introduced many functions that have enhanced and optimized application processing throughout the decades. Is IMS still relevant today?

Industry analysts claim that IMS is the lowest cost transaction and hierarchical database management system for mission critical OLTP. With a TPS (Transactions Per Second) benchmark topping 117,000, IMS delivers industrial strength capabilities for managing and distributing data. IMS delivers mission critical levels of availability, performance, security and scalability. Expansive integration capabilities enable mobile and cloud applications based on IMS assets, enhanced analytics, new application development, SOA exploitation, and more.

In 2013 Gartner stated “by 2016, 40 percent of mobile application development projects will leverage cloud back-end services, causing development leaders to lose control of the pace and path of cloud adoption within their enterprises”. In this timeframe Gartner also stated “hybrid apps, which offer a balance between HTML5-based web apps and native apps, will be used in more than 50 percent of mobile apps by 2016”. Additionally, “While mobile becomes a requirement for everything, there is no single device that will meet all needs. By the end of 2013, mobile phones will overtake PCs as the most common web access device worldwide and by 2016, PC shipments will be less than 50 percent of combined PC and tablet shipments”.

As the original and ground breaking “System Of Record”, combined with industry leading OLTP performance, why wouldn’t a CIO in 2016 consider IMS as the foundation for big data and even cloud based mission critical business applications? With easy and rapid application development via solutions such as RDz and mobile application integration via z/OS Connect, accessing IMS assets has never been easier. Whatever the industry vertical, IMS has facilitated “rocket science and the man on the moon race” since day #1 in the late 1960s, while leveraging from the unparalleled System z platform for the best scalability and performance attributes in a single footprint. A modicum of lateral thinking should consider IMS as a Service, as well as IaaS and XaaS, for resolving today’s challenges of mobile applications generating an unparalleled number of transactions, and associated big data requiring analytics, to process rapidly evolving business requirements…

How to Connect Mobile Workloads to System z

Despite potential security concerns, primarily data encryption and multiple-factor authentication related, mobile transactions continue to increase their share of the market, accounting for up to half of online transactions. Mobile payments now account for 30%+ of all global online transactions as of Q3 2015, continuing the upward trend experienced for the last several years. Although there are global differences in mobile transaction adoption, all global locations are experiencing rapid growth in mobile transaction adoption. Furthermore, as a general rule of thumb, seemingly ~66% of mobile transactions originate from a smartphone, a ~2:1 ratio when compared with tablet devices. Therefore it seems highly probable that smartphone originated mobile transactions will become the de facto standard for online transactions…

For System z users, the majority of their TCO continues to be IBM MLC software related and seemingly the realm of possibility exists for retail operations to reduce IBM MLC TCO as a result of modernizing their business for this mobile transaction phenomenon. Recognizing the security, scalability and transaction ability of the System z platform, why wouldn’t it be the ideal platform for mobile transactions? Furthermore, deploying mobile workloads that can take advantage of modern low cost System z pricing metrics, namely System z Collocated Application Pricing (zCAP) and Mobile Workload Pricing (MWP) for z/OS, could substantially reduce IBM MLC TCO. In theory, existing legacy applications might become somewhat static in nature, as mobile transactions replace existing traditional transaction mechanisms. Therefore the cost per business transaction reduces, potentially significantly.

So, just how easy is it to connect mobile transactions to the System z platform?

z/OS Connect is a software function engineered to leverage from the Liberty Profile for z/OS, acting as an enabler of connectivity between the mobile environment (client) and the System z platform (host). Put another way, z/OS Connect exposes System z assets for mobile and cloud workloads. Quite simply z/OS Connect delivers JSON (JavaScript Object Notation) and REST (REpresentational State Transfer) functionality to leverage from existing z/OS subsystems (E.g. CICS, IMS, Batch, et al). These traditional System z transaction systems (E.g. CICS, IMS) often integrated with DB2, are repositories for vast amounts of business transactions and data. There is no incremental cost for z/OS Connect usage, being packaged with WebSphere Application Server (WAS), CICS and IMS software products.

z/OS Connect provides a discovery function allowing developers to query services that have been configured for a z/OS Connect instance. A single z/OS Connect REST call returns a list of all configured services and another REST call will return the details of a given service. Importantly, developers only need to know the REST API service and associated JSON requirements to achieve this mobile device to System z interoperability; they do not need to know the underlying CICS or IMS subsystem. z/OS Connect incorporates a data conversion function that maps JSON to the host (I.E. CICS or IMS) data format requirement. Put really simply, when a request is received, z/OS Connect converts the data for CICS or IMS subsystem processing and when a response is produced, z/OS Connect converts the data back to JSON.
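
By way of a minimal illustration (Python; the host name, getAccount service path and JSON fields are hypothetical assumptions, not a documented configuration), the developer-side view is simply a REST/JSON exchange, with no knowledge of the underlying CICS or IMS subsystem required:

```python
# Minimal sketch of the developer-side view of z/OS Connect: a plain
# REST/JSON exchange.  The host, service path and payload fields are
# hypothetical; real names come from the z/OS Connect configuration.
import json
import urllib.request

request_body = json.dumps({"accountId": "12345678"}).encode("utf-8")
req = urllib.request.Request(
    "https://zosconnect.example.com/zosConnect/services/getAccount?action=invoke",
    data=request_body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# z/OS Connect converts the JSON for CICS/IMS processing, drives the
# transaction, then converts the response back to JSON.
with urllib.request.urlopen(req) as response:
    print(json.load(response))
```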

From a security viewpoint, standard or bespoke code can be used for control before and after a request is processed, identified as an interceptor. For Security, the calling user identity can be checked against defined roles, determining if they have authority to use z/OS Connect or the configured service. On z/OS the security interface is SAF, supplemented by an External Security Manager (ESM), namely ACF2, RACF or TopSecret. For Audit, request information can be logged via SMF for later analysis. Information about each request is logged, including timestamp, bytes processed, response time and USERID.

To summarize, z/OS Connect is designed to simplify the integration of mobile systems and z/OS assets. Delivering a consistent front-end interface for mobile systems via REST and JSON, z/OS Connect seamlessly integrates with WAS, CICS and IMS subsystems for data processing. In theory, a developer could code a mobile workload application, with no knowledge of the System z platform.

In conclusion, it seems we have to accept the adoption of the smartphone device for processing an ever increasing amount of online transactions. The realm of possibility exists that online transactions (click) will continue to displace traditional and legacy (brick) transactions. Therefore as businesses evolve to accommodate mobile transactions, they should strive to reduce their IBM MLC TCO accordingly, delivering JSON and REST applications that can leverage from optimal cost z/OS MLC software, primarily via the zCAP and MWP pricing mechanisms. z/OS Connect is one such option that simplifies the timely delivery of mobile workload applications.