Optimizing Mission Critical Data Value – IBM Machine Learning for z/OS

Typically the IBM Z Mainframe is recognized as the de facto System Of Record (SOR) for storing Mission Critical data.  It therefore follows that for generic business applications, DB2, IMS (DB) and even VSAM could be considered database servers, while CICS and IMS (DC) are transaction servers.  Extracting value from this Mission Critical data source has always been desirable, initially by transferring the valuable Mainframe data to a Distributed Platform via ETL (Extract, Transform, Load) processes.  A whole new software and hardware ecosystem was born for these processes, typically classified as data warehousing.  This process has proved valuable for the last 20 years or so, but more recently the IT industry has evolved, embracing Artificial Intelligence (AI) technologies, ultimately generating Machine Learning capabilities.

For some, it’s important to differentiate between Artificial Intelligence and Machine Learning, so here goes!  Artificial Intelligence is an explicit Computer Science activity, endeavouring to build machines capable of intelligent behaviour.  Machine Learning is a process of evolving computing platforms to act from data patterns, without being explicitly programmed.  In the “which came first, the chicken or the egg?” conundrum, you need AI scientists and engineers to build the smart computing platforms, but you need data scientists or pseudo machine learning experts to make these new computing platforms intelligent.

Conceptually, Machine Learning could be classified as:

  • An automated and seamless learning ability, without being explicitly programmed
  • The ability to grow, change, evolve and adapt when encountering new data
  • An ability to deliver personalized and optimized outcomes from data analysed

When combining this Machine Learning ability with the traditional ETL model, eliminating the need to move data sources from one platform to another eradicates the “point in time” data snapshot limitation of such a model, along with any associated security exposure of the data transfer process.  Therefore, returning to the IBM Z Mainframe being the de facto System Of Record (SOR) for storing Mission Critical data, it’s imperative that the IBM Z Mainframe server delivers its own Machine Learning ability…

IBM Machine Learning for z/OS is an enterprise class machine learning platform solution, assisting the user to create, train and deploy machine learning models, extracting value from your mission critical data on IBM Z platforms, retaining the data in situ, within the IBM Z complex.

Machine Learning for z/OS integrates several IBM machine learning capabilities, including IBM z/OS Platform for Apache Spark.  It simplifies and automates the machine learning workflow, enabling collaboration on machine learning projects across personnel and disciplines (E.g. Data Scientists, Business Analysts, Application Developers, et al).  Retaining your Mission Critical data in situ, on your IBM Z platforms, Machine Learning for z/OS significantly reduces the cost, complexity, security risk and time for Machine Learning model creation, training and deployment.

Simplistically there are two categories of Machine Learning:

  • Supervised: A model is trained from a known set of data sources, with a target output in mind.  In mathematical terms, a formulaic approach.
  • Unsupervised: There is no predefined input or output structure, and the machine learning is required to formulate results from evolving data patterns.

In theory, we have been executing supervised machine learning for some time, but unsupervised is the utopia.
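
For illustration only, a minimal sketch of both categories, using the Spark ML libraries that Machine Learning for z/OS builds upon; the data values and column names are hypothetical:

  # Supervised vs unsupervised learning, sketched with Spark ML (PySpark)
  from pyspark.sql import SparkSession
  from pyspark.ml.feature import VectorAssembler
  from pyspark.ml.classification import LogisticRegression  # supervised
  from pyspark.ml.clustering import KMeans                  # unsupervised

  spark = SparkSession.builder.appName("ml-categories").getOrCreate()

  # Hypothetical data: a known "label" target output plus two input columns
  df = spark.createDataFrame(
      [(0.0, 1.2, 0.7), (1.0, 3.4, 2.1), (0.0, 0.9, 0.3), (1.0, 2.8, 1.9)],
      ["label", "x1", "x2"])
  data = VectorAssembler(inputCols=["x1", "x2"], outputCol="features").transform(df)

  # Supervised: train towards the known target output (the "label" column)
  supervised_model = LogisticRegression(labelCol="label").fit(data)

  # Unsupervised: no target output; derive structure (clusters) from the data alone
  unsupervised_model = KMeans(k=2).fit(data)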

Essentially Machine Learning for z/OS comprises the following functions:

  • Data ingestion (From SOR data sources, DB2, IMS, VSAM)
  • Data preparation
  • Model training and validation
  • Model evaluation
  • Data analysis deployment (predict, score, act)
  • Ongoing learning (monitor, ingestion, feedback)
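
As a hedged sketch of how these functions chain together, assuming a Spark ML environment with hypothetical DB2 table, column and connection names (the product itself drives these steps through its GUI and APIs):

  # End-to-end workflow sketch: ingest, prepare, train, evaluate (PySpark)
  from pyspark.sql import SparkSession
  from pyspark.ml import Pipeline
  from pyspark.ml.feature import VectorAssembler
  from pyspark.ml.classification import LogisticRegression
  from pyspark.ml.evaluation import BinaryClassificationEvaluator

  spark = SparkSession.builder.appName("mlz-workflow").getOrCreate()

  # Data ingestion: read in situ from a z/OS DB2 table over JDBC (hypothetical URL)
  df = (spark.read.format("jdbc")
        .option("url", "jdbc:db2://zhost:446/DSN1")
        .option("dbtable", "PAYMENTS.TRANSACTIONS")
        .load())

  # Data preparation: cleanse rows and assemble the feature vector
  df = df.dropna()
  prep = VectorAssembler(inputCols=["AMOUNT", "CHANNEL_ID"], outputCol="features")

  # Training and validation: fit a model against a training split
  train, test = df.randomSplit([0.8, 0.2])
  model = Pipeline(stages=[prep, LogisticRegression(labelCol="FRAUD_FLAG")]).fit(train)

  # Evaluation: score held-out data and measure model quality before deployment
  auc = BinaryClassificationEvaluator(labelCol="FRAUD_FLAG").evaluate(model.transform(test))
  print(f"Area under ROC: {auc:.3f}")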

For these various Machine Learning functions, several technology components are required:

  • Components on z/OS (the MLz scoring service, various Spark ML libraries and the CADS/HPO library)
  • Linux/x86 components (Docker images for the Repository, Deployment, Training, Ingestion, Authentication and Metadata services)

The Machine Learning for z/OS solution incorporates the following added features:

  • CADS: Cognitive Assistant for Data Scientists (helps select the best fit algorithm for training)
  • HPO: Hyper Parameter Optimization (provides the Data Scientist with optimal parameters)
  • Brunel Visualization Tool (assists the Data Scientist in understanding data distribution)

Machine Learning for z/OS provides a simple framework to manage the entire machine learning workflow.  Key functions are delivered through an intuitive web-based GUI, a RESTful API and other programming APIs:

  • Ingest data from various sources including DB2, IMS, VSAM or Distributed Systems data sources.
  • Transform and cleanse data for algorithm input.
  • Train a model for the selected algorithm with the prepared data.
  • Evaluate the results of the trained model.
  • Intelligent and automated algorithm/model selection and model parameter optimization, based on IBM Watson Cognitive Assistant for Data Scientists (CADS) and Hyper Parameter Optimization (HPO) technology.
  • Model management.
  • Optimized model development and Production deployment.
  • RESTful API provision, allowing Application Developers to embed predictions from the model (see the sketch after this list).
  • Model status, accuracy and resource consumption monitoring.
  • An intuitive GUI wizard allowing users to easily train, evaluate and deploy a model.
  • z Systems authorization and authentication security.
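
As a hedged illustration of embedding a prediction via the RESTful API; the host, port, endpoint path, token and payload fields below are all hypothetical, as the exact API shape depends on the installed Machine Learning for z/OS release:

  # Calling a deployed model's online scoring endpoint (hypothetical URL/fields)
  import requests

  SCORING_URL = "https://zmlhost:9443/iml/v2/scoring/online"  # hypothetical
  headers = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}
  payload = {"fields": ["AMOUNT", "CHANNEL_ID"], "values": [[250.00, 3]]}

  response = requests.post(SCORING_URL, json=payload, headers=headers)
  print(response.json())  # e.g. a prediction and score per input row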

In conclusion, the Machine Learning for z/OS solution delivers the requisite framework for the emerging Data Scientists to collaborate with their Business Analyst and Application Developer colleagues, delivering new business opportunities, with smarter outcomes, while lowering risk and associated costs.

Optimize Your System z ROI with z Operational Insights (zOI)

Hopefully all System z users are aware of the Monthly Licence Charge (MLC) pricing mechanisms, where a recurring charge applies each month.  This charge includes product usage rights and IBM product support.  If only it were that simple!  We then encounter the “Alphabet Soup” of acronyms, related to the various and arguably too numerous MLC pricing mechanism options.  Some might say that 13 is an unlucky number and in this case, a System z pricing specialist would need to know and understand each of the 13 pricing mechanisms in depth, safeguarding the lowest software pricing for their organization!  Perhaps we could apply the unlucky word to such a resource.  In alphabetical order, the 13 MLC pricing options are AWLC, AEWLC, CMLC, EWLC, MWLC, MzNALC, PSLC, SALC, S/390 Usage Pricing, ULC, WLC, zELC and zNALC!  These mechanisms are commercial considerations, but what about the technical perspective?

Of course, System z Mainframe CPU resource usage is measured in MSU metrics, where Sub-Capacity usage allows System z Mainframe users to submit SCRT reports, incorporating Monthly License Charges (MLC) and IPLA software maintenance, namely Subscription and Support (S&S).  We must then consider the Rolling 4-Hour Average (R4HA) and how best to optimize MSU usage accordingly.  At this juncture, we also need to consider how we measure the R4HA itself, in performance tuning terms, so we can minimize R4HA MSU usage, optimizing cost without impacting Production, if not overall system, performance.
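
For clarity, a simple worked example of the R4HA itself: the average MSU consumption over the preceding 4 hours, recalculated at each measurement interval.  The 5-minute sample values below are invented for illustration:

  # Rolling 4-Hour Average (R4HA) from 5-minute MSU samples (invented values)
  msu_samples = [320, 340, 800, 790, 310, 305, 300, 295]  # one value per 5 minutes
  WINDOW = 48  # 48 x 5-minute samples = 4 hours

  def rolling_4hr_average(samples, window=WINDOW):
      """Return the R4HA at each point, averaging up to the last `window` samples."""
      return [sum(samples[max(0, i + 1 - window):i + 1]) / min(i + 1, window)
              for i in range(len(samples))]

  print(rolling_4hr_average(msu_samples))
  # Monthly charges are driven by the peak of these averages, not the
  # instantaneous peak, hence smoothing short CPU spikes lowers the bill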

Finally, we then have to consider that WLC has a ~17-year longevity, having been announced in October 2000, and in that time IBM have also introduced hardware features to assist in MSU optimization.  These hardware features include zIIP, zAAP and IFL, while there are other influencing factors, such as HiperDispatch, WLM and Relative Nest Intensity (RNI), to name but a few!  The Alphabet Soup continues…

In summary, since the introduction of WLC in Q4 2000, the challenge for the System z user is significant.  They must collect the requisite instrumentation data, perform predictive modelling and fully comprehend the impact of the current 13 MLC pricing mechanisms and their interaction with the ever-evolving System z CPU chip!  In the absence of such a simple to use reporting capability from IBM, there are a plethora of 3rd party ISV solutions, which are generally overly complex and require numerous products, more often than not from several ISVs.  These software solutions process the instrumentation data, generating the requisite metrics that allow an informed decision-making process.

Bottom Line: This is way too complex!  Are there any Green Shoots of an alternative option?  Are there any easy-to-use, data analytics based options for reducing MSU usage and optimizing CPU resources, which can then be incorporated into any WLC/MLC pricing considerations?

In February 2016 IBM launched their z Operational Insights (zOI) offering, a new open beta cloud-based service that analyses your System z monitoring data.  The zOI objective is to simplify the identification of System z inefficiencies, while identifying savings options with associated implementation recommendations.  At this juncture, zOI still has a free edition available, but as of September 2016 it also has a fully paid version with additional functionality.

Currently zOI is limited to the CICS subsystem, incorporating the following functions:

  • CICS Abend Analysis Report: Highlights the top 10 abend types and the top 10 most frequently abending transactions for your CICS workload, from a frequency viewpoint. The resulting output classifies which CICS transactions might abend and, as a consequence, waste processor time.  Of course, the System z Mainframe user will still have to fix the underlying reason for the CICS abend!
  • CICS Java Offload Report: Highlights any transaction processing workload eligible for IBM z Systems Integrated Information Processor (zIIP) offload. The resulting output delivers three categories for consideration: #1, the % of existing workload that is eligible for offload, but ran on a General Purpose CP; #2, the % of workload already being offloaded to zIIP; #3, the % of workload that cannot be transferred to a zIIP.
  • CICS Threadsafe Report: Highlights threadsafe eligible CICS transactions, calculating the switch count from the CICS Quasi Reentrant Task Control Block (QR TCB) per transaction and the associated CPU cost. The resulting output identifies potential CPU savings from making programs threadsafe, with the associated CICS subsystem changes (see the sketch after this list).
  • CICS Region CPU Constraint: Highlights CPU constrained regions. CPU constrained CICS regions have reduced performance, lower throughput and slower transaction response, impacting business performance (I.E. SLA, KPI).  From a high-level viewpoint, the resulting output classifies CICS Region performance to identify whether they’re LPAR or QR constrained, while suggesting possible remedial actions.
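
As a hedged, back-of-an-envelope sketch of the Threadsafe report arithmetic, estimating the CPU consumed purely by QR TCB switching; the per-switch cost and transaction volumes below are purely illustrative assumptions:

  # Estimated daily CPU cost of CICS QR TCB switching (illustrative figures)
  CPU_PER_SWITCH_PAIR_USEC = 2.0  # assumed cost of one QR<->open TCB round trip

  def tcb_switch_cost_seconds(transactions_per_day, switches_per_txn):
      """Estimated daily CPU seconds consumed purely by TCB switching."""
      switches = transactions_per_day * switches_per_txn
      return switches * CPU_PER_SWITCH_PAIR_USEC / 1_000_000

  # 1 million daily executions of a transaction switching 40 times per execution:
  print(f"{tcb_switch_cost_seconds(1_000_000, 40):.1f} CPU seconds/day")  # 80.0
  # Making the programs threadsafe removes most of these switches, which is
  # the potential saving the report quantifies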

Clearly the potential of zOI is encouraging, being an easy-to-use solution that analyses instrumentation data, classifies the best options on a quick win basis, while providing recommendations for implementation.  Having been a recent user of this new technology myself, I would encourage each and every System z Mainframe user to try this no-risk IBM z Operational Insights (zOI) software offering.

The evolution for all System z performance analysis software solutions is to build on the comprehensive analysis solutions that have evolved in the last ~20+ years, while incorporating intelligent analytics to classify data in terms of “Biggest Impact”, identifying “Potential Savings”, evolving from MIPS measurement to BIPS (Biggest Impact Potential Savings) improvements!

IBM have also introduced a framework of IT Operations Analytics Solutions for z Systems.  This suite of interconnected products includes zOI, IBM Operations Analytics for z Systems, IBM Common Data Provider for z/OS and IBM Advanced Workload Analysis Reporter (IBM zAware).  Of course, if we lived in a perfect world, without a ~20 year MLC and WLC longevity, this might be the foundation for all of our System z CPU resource usage analysis.  Clearly this is not the case for the majority of System z Mainframe customers, but zOI does offer something different, with zero impact, from both a system overhead and an existing software interoperability viewpoint.

Bottom Line: Optimize Your System z ROI via zOI, Evolving From MIPS Measurement to BIPS Improvements!

Are You Ready For z Systems Workload Pricing for Cloud (zWPC) for z/OS?

Recently IBM announced the z Systems Workload Pricing for Cloud (zWPC) for z/OS pricing mechanism, which can minimize the impact of new Public Cloud workload transactions on Sub-Capacity license charges.  Such benefits will be delivered where higher Public Cloud workload transaction volumes may cause a spike in machine utilization.  Of course, if this looks familiar and you have that feeling of déjà vu, this is a very similar mechanism to Mobile Workload Pricing (MWP)…

Put simply, zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms, for the usual MLC software suspects, namely z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public Cloud workloads are defined as transactions processed by named Public Cloud applications, identified as originating from a recognized Public Cloud offering, including but not limited to, Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, et al.

As per MWP, SCRT calculates the R4HA for Public Cloud transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The software program values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  Therefore SCRT determines the billable MSU peak for a given MLC software program on a CPC using the adjusted MSU values.  As per MWP, this will only be of benefit, if the Public Cloud originated transactions generate a spike in the current R4HA.
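
As a hedged sketch of the adjustment just described: for each reporting hour, 60% of the Public Cloud R4HA MSU is subtracted from the LPAR’s Sub-Capacity eligible MSU before the billable peak is determined.  All MSU values below are invented for illustration:

  # zWPC-style MSU adjustment per reporting hour (invented values)
  hourly = [
      # (total R4HA MSU, Public Cloud R4HA MSU) for one LPAR, per reporting hour
      (900, 0),
      (1200, 500),  # a Public Cloud driven spike
      (950, 100),
  ]

  adjusted = [total - 0.6 * cloud for total, cloud in hourly]
  print(adjusted)                                   # [900.0, 900.0, 890.0]
  print(f"Billable peak: {max(adjusted):.0f} MSU")  # 900, not the raw 1200 peak
  # As the text notes, the benefit only materializes when Public Cloud
  # transactions are the cause of the R4HA spike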

One of the major challenges for implementing MWP was identifying those transactions eligible for consideration.  Very quickly IBM identified this challenge and offered a WorkLoad Manager (WLM) based solution, to simplify reporting for all concerned.  This WLM SPE (OA47042) introduced a new transaction-level attribute in WLM classification, allowing for the identification of mobile transactions and their associated processor consumption.  These Reporting Attributes were classified as NONE, MOBILE, CATEGORYA and CATEGORYB.  Obviously IBM made allowances for future workload classifications, hence it would seem Public Cloud transactions will supplement Mobile transactions.

In a previous z/OS Workload Manager (WLM): Balancing Cost & Performance blog post, we considered the merits of WLM for optimizing z/OS software costs, while maintaining optimal performance.  One must draw one’s own conclusions, but there seemed to be a strong case for WLM reporting to be included in the z/OS MLC Cost Manager toolkit.  The introduction of zWPC, being analogous to MWP, where reporting can be simplified with supplied and supported WLM function, indicates that intelligent and proactive WLM reporting makes sense.  Certainly for 3rd party Soft-Capping solutions, the ability to identify MWP and zWPC eligible transactions in real time, proactively implementing MSU optimization activities, seems mandatory.

The Workload X-Ray (WLXR) solution from zIT Consulting delivers this WLM reporting function, seamlessly integrating with their zDynaCap and zPrice Manager MSU optimization solutions.  Of course, there is always the possibility to create your own bespoke reports to extract the relevant information from SMF records and subsystem diagnostic data, for input to the SCRT process.  However, such a home-grown process will only work on a monthly reporting basis and not integrate with any Soft-Capping MSU management, which will ultimately control z/OS MLC costs.

In conclusion, from a big picture viewpoint, in the last 2 years or so, IBM have introduced several new Sub-Capacity pricing mechanisms to help System z Mainframe users optimize z/OS MLC costs, namely Mobile Workload Pricing (MWP), Country Multiplex Pricing (CMP) and now z Systems Workload Pricing for Cloud (zWPC).  In theory, at least one of these new pricing mechanisms should deliver benefit to the committed System z user, deploying this server for strategic and Mission Critical workloads.  With the undoubted strategic importance associated with Analytics, Blockchain, Cloud, DevOps, Mobile, Social, et al, the landscape for System z workloads is rapidly evolving and potentially impacting those sacrosanct legacy Mission Critical workloads.  Seemingly the realm of possibility exists that Cloud and Mobile originated transactions will dominate access to System z Mainframe System Of Record (SOR) data repositories, which generates a requirement to optimize associated MLC costs accordingly.  Of course, for some System z users, such Cloud and Mobile access might not be on today’s to-do list, but inevitably it’s on the horizon, and so why not implement the instrumentation ability ASAP!

How to Connect Mobile Workloads to System z

Despite potential security concerns, primarily data encryption and multiple-factor authentication related, mobile transactions continue to increase their share of the market, accounting for up to half of online transactions. Mobile payments now account for 30%+ of all global online transactions as of Q3 2015, continuing the upward trend experienced for the last several years. Although there are global differences, all locations are experiencing rapid growth in mobile transaction adoption. Furthermore, as a general rule of thumb, seemingly ~66% of mobile transactions originate from a smartphone, a ~2:1 ratio when compared with tablet devices. Therefore it seems highly probable that smartphone originated mobile transactions will become the de facto standard for online transactions…

For System z users, the majority of their TCO continues to be IBM MLC software related and seemingly the realm of possibility exists for retail operations to reduce IBM MLC TCO as a result of modernizing their business for this mobile transaction phenomenon. Recognizing the security, scalability and transaction ability of the System z platform, why wouldn’t it be the ideal platform for mobile transactions? Furthermore, deploying mobile workloads that can take advantage of modern low cost System z pricing metrics, namely System z Collocated Application Pricing (zCAP) and Mobile Workload Pricing (MWP) for z/OS, could substantially reduce IBM MLC TCO. In theory, existing legacy applications might become somewhat static in nature, as mobile transactions replace existing traditional transaction mechanisms. Therefore the cost per business transaction reduces, potentially significantly.

So, just how easy is it to connect mobile transactions to the System z platform?

z/OS Connect is a software function engineered to leverage from the Liberty Profile for z/OS, acting as an enabler of connectivity between the mobile environment (client) and the System z platform (host). Put another way, z/OS Connect exposes System z assets for mobile and cloud workloads. Quite simply z/OS Connect delivers JSON (JavaScript Object Notation) and REST (REpresentational State Transfer) functionality to leverage from existing z/OS subsystems (E.g. CICS, IMS, Batch, et al). These traditional System z transaction systems (E.g. CICS, IMS) often integrated with DB2, are repositories for vast amounts of business transactions and data. There is no incremental cost for z/OS Connect usage, being packaged with WebSphere Application Server (WAS), CICS and IMS software products.

z/OS Connect provides a discovery function allowing developers to query services that have been configured for a z/OS Connect instance. A single z/OS Connect REST call returns a list of all configured services and another REST call will return the details of a given service. Importantly, developers only need to know the REST API service and associated JSON requirements to achieve this mobile device to System z interoperability; they do not need to know the underlying CICS or IMS subsystem. z/OS Connect incorporates a data conversion function that maps JSON to the host (I.E. CICS or IMS) data format requirement. Put really simply, when a request is received, z/OS Connect converts the data for CICS or IMS subsystem processing and when a response is produced, z/OS Connect converts the data back to JSON.
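
To make this request/response flow concrete, a hedged sketch follows; the host, port and service name are hypothetical, and the exact paths and verbs vary by z/OS Connect release. z/OS Connect accepts and returns JSON, converting to and from the CICS or IMS data formats internally:

  # Discovering and invoking a z/OS Connect service (hypothetical host/service)
  import requests

  BASE = "https://zoshost:9443/zosConnect/services"

  # One REST call returns the list of all configured services...
  services = requests.get(BASE).json()

  # ...and another REST call returns the details of a given service
  detail = requests.get(f"{BASE}/getAccountBalance").json()

  # Invoking the service needs only JSON and the REST API; no knowledge of
  # the underlying CICS or IMS subsystem is required
  reply = requests.post(f"{BASE}/getAccountBalance?action=invoke",
                        json={"accountNumber": "12345678"}).json()
  print(reply)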

From a security viewpoint, standard or bespoke code can be used for control before and after a request is processed, identified as an interceptor. For Security, the calling user identity can be checked against defined roles, determining if they have authority to use z/OS Connect or the configured service. On z/OS the security interface is SAF, supplemented by an External Security Manager (ESM), namely ACF2, RACF or TopSecret. For Audit, request information can be logged via SMF for later analysis. Information about each request is logged, including timestamp, bytes processed, response time and USERID.

To summarize, z/OS Connect is designed to simplify the integration of mobile systems and z/OS assets. Delivering a consistent front-end interface for mobile systems via REST and JSON, z/OS Connect seamlessly integrates with WAS, CICS and IMS subsystems for data processing. In theory, a developer could code a mobile workload application, with no knowledge of the System z platform.

In conclusion, it seems we have to accept the adoption of the smartphone device for processing an ever increasing amount of online transactions. The realm of possibility exists that online transactions (click) will continue to displace traditional and legacy (brick) transactions. Therefore as businesses evolve to accommodate mobile transactions, they should strive to reduce their IBM MLC TCO accordingly, delivering JSON and REST applications that can leverage from optimal cost z/OS MLC software, primarily via the zCAP and MWP pricing mechanisms. z/OS Connect is one such option that simplifies the timely delivery of mobile workload applications.

System z MLC Pricing Increases: Look After The Pennies…

Recently IBM announced ~4% price increases in z/OS Monthly License Charges (MLC) for selected Operating System and Middleware software programs and associated features. Specifically, price increases will apply to the VWLC, AWLC, EWLC, AEWLC, PSLC, FWLC and TWLC pricing metrics. Notably, SDSF price increases will be ~20%, with Advanced Function Printing (AFP) product price increases of ~13-24%. In a global economy where inflation rates for The USA and Western Europe are close to 0%, one must draw one’s own conclusions accordingly. Let’s not forget that product version changes typically have an associated price increase. From a contractual viewpoint, IBM only have to provide 90 days advance notice for such price changes; in this instance, IBM provided 150+ days advance notice.

Price increases are inevitable and as always, it’s better to be proactive as opposed to reactive to such changes. The old proverbs make good sense and in this instance, “look after the Pennies and the Pounds will look after themselves”! This periodic IBM price increase is inevitable, but is not the underlying issue for controlling System z software costs. For many years, since 1994 to be precise, when IBM introduced Parallel Sysplex License Charges (PSLC), the need for IBM Mainframe users to minimize MSU usage has been of high if not critical importance. Nothing has changed in this 20+ year period and even though IBM might have introduced Sub-Capacity and specialty engines to minimize chargeable MSU usage, has each and every System z user optimized their MSU usage? Ideally this would not be a rhetorical question, rather a “Golden Rule”, where despite organic CPU capacity increases of ~10% per annum, a System z environment could maintain near static IBM MLC software costs.

I have written several blog entries and presented on this subject matter over the years.

The simple bottom line is that System z MLC software accounts for ~20-35% of the overall System z TCO, typically being the #1 expenditure item. For that reason alone, it’s incumbent upon each and every System z user to safeguard that they have the technical and commercial skills in place to manage this cost item, not as an afterthought, but inbuilt into each and every System z process, from application design, through to that often neglected afterthought, application tuning.

Many System z organizations might try to differentiate between a nuance of System and Application tuning, but such a “not my problem” type attitude is not acceptable and will be imposing a significant financial burden on each and every organization.

A dispassionate and pragmatic approach is required for optimizing System z CPU usage. In this timeframe, let’s examine the ~20% SDSF price increase. IBM will quite rightly state that in conjunction with their z/OS 2.2 release, there are significant SDSF product function advancements, including zIIP offload, REXX interoperability and increased information delivery. However, are such function improvements over and above the norm, rather than expected Business As Usual (BAU) product improvements, which should be included in the Subscription & Support (S&S) or Monthly License Charges (MLC) already paid for the software?

In October 2013 I wrote a blog entry; Mainframe ISV Software: Is Continuous Product Improvement Always Evident? The underlying message was that an ISV should deliver the best product they can, for each and every release, without necessarily increasing software costs. In this particular instance, the product was an SDSF equivalent, namely (E)JES, which many years ago delivered all of the function incorporated in SDSF for z/OS 2.2, but for a fraction of the cost…

As of 1 November 2015, IBM will start billing cycles for Country Multiplex Pricing (CMP), which requires the October 2015 version of SCRT, namely V23R10. A Multiplex is defined as a collection of all System z servers in one country, measured as one System z server for software sub-capacity reporting. Sub-Capacity program utilization peaks across the Multiplex will be measured, as opposed to separate peaks by System z servers. CMP also provides the flexibility to move and run workloads anywhere with the elimination of Sysplex aggregation pricing rules.

Migrating to CMP is focussed on CPU capacity growth and flexibility going forward. Therefore System z users should not expect price reductions for their existing workloads upon CMP deployment. Indeed there are CMP deployment considerations. A CMP MSU baseline (base) needs to be established, where an MSU Base and associated MLC Base Factor are determined for each sub-capacity MLC product and each applicable feature code. These MSU and MLC bases represent the previous 3-month averages reported by SCRT before commencing CMP. Quite simply, to gain the most from CMP, the System z user must safeguard that their R4HA for each and every MLC product is optimized before setting the CMP baseline, otherwise CMP related cost savings going forward are likely to be nil.
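
A hedged arithmetic sketch of setting that CMP MSU Base, namely the average of the previous 3 SCRT-reported monthly peaks per MLC product; all values are invented for illustration:

  # CMP MSU Base per product: average of the last 3 SCRT monthly peaks (invented)
  scrt_peaks = {
      "z/OS": [1450, 1500, 1480],
      "CICS": [1100, 1180, 1150],
      "DB2":  [1200, 1260, 1210],
  }

  msu_base = {prod: sum(peaks) / len(peaks) for prod, peaks in scrt_peaks.items()}
  for prod, base in msu_base.items():
      print(f"{prod}: CMP MSU Base = {base:.0f} MSU")
  # Optimizing the R4HA *before* this 3-month window lowers the base, and
  # hence the recurring charge derived from it; optimizing afterwards only
  # benefits growth above the base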

From a very high-level management viewpoint, we must observe that IBM are a commercial organization, and although IBM provide mechanisms for controlling cost going forward, only the System z user can optimize System z MLC cost for their organization. Arguably with CMP, Soft-Capping isn’t a consideration, it’s mandatory.

Put very simply, each and every System z user can safeguard that they look after the Pennies (Cents), and the Pounds (Euros, Dollars) will look after themselves, by paying careful attention to System z MLC software costs. Setting a baseline of System z MLC costs is mandatory, whether for the first time, or to set a new baseline for CMP deployment. Maintaining or lowering this System z MLC cost baseline should, or arguably must, be the objective going forward, even when considering 10% organic CPU growth, each and every year. System z decision-makers and managers must commit to such an objective and safeguard the provision of adequately skilled personnel to optimize such a considerable TCO cost line item (I.E. MLC @ ~20-35% of System z TCO). In an ecosystem with technical resources including DBA, Systems Programmer, Capacity Planner, Application Personnel, Performance Tuning, et al, why wouldn’t there be a specialist Software Cost Manager?

Let’s consider how even an inexperienced System z user can maintain a baseline of System z MLC costs, even with organic CPU capacity growth of 10% per annum:

  • System z Server Upgrade: Higher specification CPU chips or Technology Transition Offering (TTO) pricing metrics deliver 10%+ cost per MSU benefits.
  • System z Specialty Engines: Over time, more and more application workload can be offloaded to zIIP processors, with no sub-capacity MLC software charges.
  • System z Software Version Upgrades: Major subsystems such as CICS, DB2, IMS, MQSeries and WebSphere deliver opportunity to lower cost per MSU; safeguard such function exploitation.
  • Application Tuning: Whether SQL, COBOL, Java, et al, or the overall I/O subsystem, safeguard that latest programming techniques and I/O subsystem functions are exploited.
  • New Application Deployment: As and when possible, deploy new or convert existing workloads to benefit from the optimal MLC pricing metric; previously zNALC, nowadays zCAP.
  • Technical & Commercial Skills Currency: Safeguard personnel have the latest System z software pricing knowledge, ideally from an independent 3rd party such as Watson & Walker.

In conclusion, as householders we have the opportunity to optimize our cost expenditure, choosing and switching between various major cost items such as financial, utility and vehicle products. As System z users, we don’t have that option; only IBM provide System z servers and the associated base architecture, namely the most expensive MLC software products, z/OS, CICS, DB2, IMS and WebSphere/MQ. However, just as we manage our domestic budgets, reducing power usage, optimizing vehicle TCO and getting more bang for our buck from various financial products, we can and must deliver this same due diligence for our System z MLC TCO. With industry averages of ~$500-$1000 per MSU for z/OS MLC software and associated annual expenditure measured in many millions, why wouldn’t any System z user look to deliver 10%+ cost per MSU optimization, year-on-year, for their organization?
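
The “look after the Pennies” arithmetic is easily sketched, assuming the quoted industry-average figure is a monthly rate; the MSU peak and optimization percentage are illustrative:

  # Annual MLC saving from a 10% reduction in the billable MSU peak
  def annual_saving(msu_peak, cost_per_msu_month, optimization=0.10):
      """Yearly saving from optimizing the billable MSU peak by `optimization`."""
      return msu_peak * optimization * cost_per_msu_month * 12

  for cost in (500, 1000):
      print(f"1,000 MSU @ ${cost}/MSU/month: ${annual_saving(1000, cost):,.0f}/year")
  # 1,000 MSU @ $500/MSU/month:  $600,000/year
  # 1,000 MSU @ $1000/MSU/month: $1,200,000/year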

Clearly the cost of doing nothing in this instance, is significant, measured in magnitudes of millions, each and every year. Hence for System z MLC TCO optimization, looking after the Pennies is more than worthwhile, while the associated benefit of the Pounds, Euros or Dollars looking after themselves is arguably priceless.

CICS: The Best Enterprise Transaction Server & So Much More…

In one form or another, CICS has been available since the late 1960’s, first released in 1969, almost as long as the IBM Mainframe server that recently celebrated its 50th anniversary, having been released in 1964.  From a deployment viewpoint, 90%+ of Fortune 500 companies deploy CICS, primarily for its robust and often unbeatable ability to deliver sub-second response times for numerous application transactions.  Whether a large or small IBM Mainframe user, CICS delivers an enterprise class solution for a myriad of business types and arguably, at one time or another, nearly every committed IBM Mainframe installation has implemented CICS.

In the past few decades I have encountered many failed IBM Mainframe migration projects and more often than not, the primary reason for platform migration failure was the inability of the target platform to deliver consistent sub-second transaction response times for a plethora of mission critical business applications.  Similarly, if a non-Mainframe environment has been heavily configured to handle the CICS transaction workload, it often fails with the subsequent batch processing, suffering significantly elongated elapsed times.

CICS has its foundations as an enterprise class transaction server, capturing data input for subsequent storage and retrieval in Database Management Subsystems, but let’s not forget, CICS can do so much more…

Let’s not forget, in 2001 CICS Transaction Server (TS) 2.1 for z/OS introduced the foundation for web services support, and a capability for CICS transactions to be invoked via HTTP requests.  There have been numerous enhancements since, too many to mention, which have evolved CICS into a fully-rounded family of solutions, allowing for cradle to grave application design and delivery.  However, let’s just take some time to review what CICS TS Version 5 has delivered, and how this might benefit the 21st Century business.

Recognizing the IBM defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative, CICS is integral to such a business facing initiative, primarily from a Cloud interoperability viewpoint.

The CICS V5.2 Application Server has a capability to host multiple applications, and multiple versions of the same application, simultaneously, primarily due to the substantial increase in platform scalability.  Similarly, a heterogeneous code environment offers application development personnel a single environment to work seamlessly with Java alongside legacy languages, such as COBOL, C/C++, PL/I, et al.

Combining this heterogeneous code environment with Cloud enablement allows for new application version deployment, without a requirement to disable or remove the previous version from Production processing.  Regardless of the underlying application source code base, end users can access an application without service interruption, as they transition to the new application version.  Similarly, user workloads can be seamlessly redirected to a previous working version of application code, should the new version exhibit errors.

The CICS V5.2 application server delivers a powerful hosting environment for all business applications, new or old.  Application Development teams can provision applications for design and testing within a “real life working” infrastructure, removing the application when testing is complete, or promoting said application ASAP, for mission critical business usage.  This delivers a significant business improvement in application availability, minimizing service downtime.  Therefore, applications stay as current and relevant as possible, reducing the risk of business service outages, delivering better and consistent end user experiences.

As per the IBM zSeries Mainframe heritage, a standard resilience feature of the CICS V5.2 Application Server is an inherent capability to perform, even in the event of a problem scenario.  Cloud enablement delivers a clustering capability, which handles both system and application level failure scenarios.  Seamless and timely problem resolution dictates less down time, delivering more business availability, instilling a high sense of confidence in end users and consumers alike.

Noting the Security aspect of the IBM CAMSS initiative and the ever present cybersecurity risk to us all, CICS Application Server also delivers on this front.  Safeguarding that application enhancements can be brought online ASAP and securely, CICS V5.2 Application Server seamlessly integrates with various security software and languages, including the latest WebSphere Application Server (WAS) Liberty Profile, allowing for the portability of Java Enterprise Edition Web applications.  Enhanced security capabilities also include Java Naming and Directory Interface (JNDI), Bean Validation, JDBC Type 2 Data Sources and the Java Transaction API (JTA).  SSL support incorporated within the Liberty JVM server HTTP listener is extended to support key certificates stored in System Authorization Facility (SAF) key rings, delivering SSL server authentication.

Like it or not, Cloud computing is a rapidly evolving technology, where the Cloud is integrating increasingly more applications and associated services, delivering cost savings and scalability accordingly.  Of course, for true enterprise class scalability and cost efficiency, arguably the IBM zSeries Mainframe is an ideal platform for Cloud technologies.  Therefore with some slightly modified thinking, organizations can deploy Cloud based solutions and benefit from application promotion benefits, especially with a technology such as CICS.

There is a great presentation named Five Compelling Reasons for Creating a CICS Cloud that provides robust working examples of how to increase application availability, with real-life application development scenarios.

In conclusion, CICS continues to evolve and not only is it the best Enterprise Class Transaction Server, the family of CICS products including its Application Server deliver a 21st Century Cloud computing compatible platform, for the most demanding of business requirements.  Whether considering the IBM defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative or the more traditional Reliability, Availability and Serviceability (RAS) attributes of the zSeries Mainframe server, the latest version of CICS facilitates:

  • Rapid Application Development: Agile methodologies for rapid development, irrespective of programming language (E.g. Java, COBOL, C/C++, PL/I, et al).
  • Seamless & Timely Application Deployment: More frequent application updates, minimizing downtime and associated cost, while leveraging from Cloud functionality, to deploy new applications, application enhancements or bug fixes, side-by-side with existing real-life Production workloads.

Are You Ready For z/OS Mobile Workload Pricing (MWP)?

Recently IBM announced Mobile Workload Pricing (MWP) for z/OS which can minimize the impact of mobile workloads on Sub-Capacity license charges, delivering optimized pricing for System z environments extending their workloads to incorporate mobile devices.

MWP only applies to Mainframe customers deploying a zEC12 or zBC12 in their enterprise, as per the AWLC or AEWLC (AKA Advanced/Entry Workload License Charges) metric; MWP is also extended if a zEC12 or zBC12 enterprise is deploying a z196 or z114 via the AWLC or AEWLC metric.

The primary consideration for MWP is determining how a Mainframe customer can comply with the tracking requirements for mobile workloads.  On the plus side, MWP does not require the isolation of mobile workload transactions in separate LPARs, instead using enhanced reporting for software pricing.  This is a major step forward when compared with Integrated Workload Pricing (IWP), which ideally requires large LPAR container structures, minimizing costs for WebSphere workloads, applying to the CICS, IMS and WebSphere MLC software products.  Conversely, MWP includes DB2 in the list of eligible software products for cost reduction.

If a Mainframe customer is eligible for MWP pricing they will then need to utilize the Mobile Workload Reporting Tool (MWRT), which is analogous to the original Sub-Capacity Reporting Tool (SCRT).  This is an either/or situation; the Mainframe customer only submits MWRT reports to IBM if they’re MWP eligible, otherwise the status quo remains, where non-MWP Mainframe customers continue to submit SCRT reports.

The Mainframe customer must track and report General Purpose (GP) CPU time for mobile transactions, reporting those values in a pre-defined format to IBM each month to benefit from MWP.  MWRT utilizes reported mobile transaction data to adjust the Rolling 4 Hour Average (R4HA) Sub-Capacity software eligible MSUs, with LPAR granularity.  Optimizing the impact of mobile transactions on peak LPAR MSU values delivers benefit when higher mobile transaction volumes generate MSU resource usage peaks (workload spikes).

MWRT calculates the R4HA for mobile transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The software program values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  Therefore MWRT determines the billable MSU peak for a given MLC software program on a CPC using the adjusted MSU values.

Most committed zSeries Mainframe customers will be deploying CICS, DB2 and WebSphere software, while IT trends dictate that mobile device usage (I.E. Smartphone, Tablet, et al) is increasing.  Most z/OS applications that require such mobile access have therefore evolved accordingly over time.  Hence it seems to be one of those “No Brainer” type scenarios, where the Mainframe user should plan to benefit from MWP, either as they upgrade to the latest zSeries technology, namely zEC12 or zBC12, or immediately if already deploying a zEC12 or zBC12 server.

The only minor consideration is a requirement for the zEC12 or zBC12 customer to engage their local IBM account team, to determine what data they need to report on mobile transactions for MWP consideration.  This one-off task will deliver optimized WLC pricing forevermore.

Of course IBM are encouraging customers to consider the Mainframe for new applications, driven by mobile transaction requirements.  Equally, there is no reason why longer term Mainframe customers can’t benefit from MWP, benefitting from reduced MLC costs, a major consideration of Mainframe TCO.

COBOL – A Viable Programming Language?

For the last twenty years or so I have encountered many scenarios where Mainframe users consider migration to a Distributed Systems (E.g. Wintel, UNIX/Linux, et al) platform, where more often than not the primary reasons seem to be “green screen” and/or “COBOL is a declining legacy language” based.

Going back to basics, COBOL is a Common Business Oriented Language, although the naysayers might say COBOL is a Completely Obsolete Business Oriented Language; we will perhaps try to be more dispassionate in this discussion…

Industry Analysts have stated that there are ~220 Billion lines of COBOL code and ~100,000 COBOL programmers, that COBOL applications process ~80% of business transactions daily, and that ~200 times more COBOL transactions are processed daily than Google searches!  A lot of numbers and statistics, but seemingly COBOL is still widely used and accepted.  Even from a new development viewpoint, ~5 Billion lines of new COBOL code per annum (~15% of Annual Global Development) are cited, suggesting that COBOL is not in any way obsolete or legacy; so why is COBOL perceived by some in a dubious manner?

Maybe because COBOL was introduced in 1959 and primarily it is deployed on the Mainframe, and so anything that is 50+ years old and has an association with the Mainframe just has to be dubious, doesn’t it?  Of course not, as this arguably “pioneering”, or at least one of the first “widely deployed” programming languages, allowed many global and significant businesses to grow, in tandem with the IBM Mainframe platform, automating and streamlining business processes, increasing productivity and so on.  So depending on your viewpoint, COBOL was either in the right place at the right time, stimulating the Data Processing (DP) and Information Technology (IT) revolution, or COBOL just got lucky; it was “Hobson’s Choice”…

Although there have been several iterations of COBOL standards (I.E. COBOL-68, COBOL-74, COBOL-85), primarily associated with the American National Standards Institute (ANSI), and more latterly COBOL 2002 (ISO), a COBOL program that was written and compiled on an IBM Mainframe several decades ago will most likely still run on the latest generation IBM Mainframe.  Put another way, its backwards compatibility has been significant, and although there were some migration considerations associated with the Language Environment (LE), the original COBOL Application Development investment has generated a readily usable Return On Investment (ROI) over and over again.  How true is this for other programming languages and computing platforms?  For the avoidance of doubt, a COBOL program that was written for a 24-bit environment can still run today on a 64-bit platform, and with a modicum of evolution, fully exploit the latest functionality and 64-bit performance, with minimal fuss.  Meanwhile, how many revolutionary or significant upgrades have been required for Commercial Off The Shelf (COTS) software and associated bespoke application development code, to upgrade non-Mainframe platforms from 16 to 32 to 64-bit?

So, is COBOL a viable programming language of the future?  One must draw one’s own conclusions, but we can look to recent functional enhancements and statements of direction from an IBM Mainframe viewpoint.

In recent years IBM have actually increased the number of COBOL R&D personnel by ~100%, while increasing allocated investment, commitment and interest accordingly.  This observation, more than any other, suggests that at least from an IBM Mainframe viewpoint, COBOL remains important.

From a technical function viewpoint, the realm of possibility exists with COBOL, interacting with all 21st century programming and function techniques, dismissing the notion that COBOL can only be considered as a traditional/legacy option for CICS-Batch applications and associated “green screen” environments, for example:

  • Support for CICS integrated translator
  • Support for latest SQL data types in syntax via DB2
  • Support for Java interoperability via object-oriented COBOL syntax
  • Support for accessing WebSphere enterprise beans
  • Support for Java SDK
  • Support for XML high speed parsing and validation (UTF-8, UTF-16 & various EBCDIC codepages)

From a strategic statement of direction viewpoint, IBM have declared the following major notable activities:

  • Performance and resource utilization optimization, reducing TCO accordingly
  • Improved middleware (I.E. CICS, DB2, IMS, WebSphere) programmability and problem determination
  • Improved capabilities (E.g. XML, Java, et al) for modernizing & creating business critical applications
  • Improved programmer (E.g. Usability and Problem Determination) productivity
  • Source and load (I.E. recompile not required) compatibility (E.g. old programs can call new and vice versa)

Even for those occasions where the IBM Mainframe platform might be decommissioned, COBOL can still be processed on alternative platforms via code migration techniques such as Micro Focus, where such functions and services can be Cloud based.  However, once again, isn’t the IBM Mainframe the ultimate “Cloud” platform, which has arguably been the case “forever thus”?

One must draw one’s own conclusions as to why the Mainframe platform and/or COBOL applications are often considered for replacement via migration, when the Mainframe platform is both strategic and cost efficient.  As with any technology decision, there is no “one size fits all” solution, but perhaps a little education can go a long way, and at least the acceptance that seemingly “legacy” technologies are strategic and viable.