How to Connect Mobile Workloads to System z

Despite potential security concerns, primarily related to data encryption and multi-factor authentication, mobile transactions continue to increase their share of the market, with some estimates suggesting they account for up to half of online transactions. Mobile payments accounted for 30%+ of all global online transactions as of Q3 2015, continuing the upward trend of the last several years, and although adoption rates differ by region, every global location is experiencing rapid growth. Furthermore, as a general rule of thumb, ~66% of mobile transactions originate from a smartphone, roughly a 2:1 ratio when compared with tablet devices. Therefore it seems highly probable that smartphone-originated mobile transactions will become the de facto standard for online transactions…

For System z users, the majority of TCO continues to be IBM MLC software related, and so the realm of possibility exists for retail operations to reduce IBM MLC TCO by modernizing their business for this mobile transaction phenomenon. Recognizing the security, scalability and transaction processing capability of the System z platform, why wouldn’t it be the ideal platform for mobile transactions? Furthermore, deploying mobile workloads that can take advantage of modern low cost System z pricing metrics, namely System z Collocated Application Pricing (zCAP) and Mobile Workload Pricing (MWP) for z/OS, could substantially reduce IBM MLC TCO. In theory, existing legacy applications might become somewhat static in nature as mobile transactions replace traditional transaction mechanisms, while the new mobile workloads attract the favourable pricing metrics; therefore the cost per business transaction reduces, potentially significantly.

So, just how easy is it to connect mobile transactions to the System z platform?

z/OS Connect is a software function built upon the Liberty Profile for z/OS, acting as an enabler of connectivity between the mobile environment (client) and the System z platform (host). Put another way, z/OS Connect exposes System z assets for mobile and cloud workloads. Quite simply, z/OS Connect delivers JSON (JavaScript Object Notation) and REST (REpresentational State Transfer) functionality for existing z/OS subsystems (E.g. CICS, IMS, Batch, et al). These traditional System z transaction systems (E.g. CICS, IMS), often integrated with DB2, are repositories for vast amounts of business transactions and data. There is no incremental cost for z/OS Connect usage, as it is packaged with the WebSphere Application Server (WAS), CICS and IMS software products.

z/OS Connect provides a discovery function allowing developers to query the services that have been configured for a z/OS Connect instance. A single z/OS Connect REST call returns a list of all configured services and another REST call returns the details of a given service. Importantly, developers only need to know the REST API service and its associated JSON requirements to achieve mobile device to System z interoperability; they do not need to know the underlying CICS or IMS subsystem. z/OS Connect incorporates a data conversion function that maps JSON to the data format required by the host (I.E. CICS or IMS). Put really simply, when a request is received, z/OS Connect converts the data for CICS or IMS subsystem processing, and when a response is produced, z/OS Connect converts the data back to JSON.
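
By way of illustration, the Python sketch below shows what this discovery and invocation flow might look like from the client side. The host name, port, credentials, service name and JSON payload are all hypothetical, and the /zosConnect/services URIs follow the pattern used by the z/OS Connect REST interface but should be verified against the installed z/OS Connect level.

  import requests  # third-party HTTP library

  BASE = "https://zosconnect.example.com:9443"  # hypothetical Liberty/z/OS Connect endpoint
  AUTH = ("mobileapp", "password")              # SAF/ESM-checked credentials (illustrative)

  # Discovery: list every service configured for this z/OS Connect instance
  services = requests.get(BASE + "/zosConnect/services", auth=AUTH).json()
  print(services)

  # Discovery: return the details of one configured service (hypothetical name)
  detail = requests.get(BASE + "/zosConnect/services/getAccountBalance", auth=AUTH).json()
  print(detail)

  # Invoke: send a JSON request; z/OS Connect converts it for the CICS or IMS program
  payload = {"accountNumber": "12345678"}       # illustrative JSON request body
  response = requests.post(BASE + "/zosConnect/services/getAccountBalance?action=invoke",
                           json=payload, auth=AUTH)
  print(response.json())                        # JSON response converted back from the host format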

From a security viewpoint, standard or bespoke code, known as an interceptor, can be invoked for control before and after a request is processed. For security, the calling user identity can be checked against defined roles, determining whether the user has the authority to use z/OS Connect or the configured service. On z/OS the security interface is SAF, supplemented by an External Security Manager (ESM), namely ACF2, RACF or TopSecret. For audit, request information can be logged via SMF for later analysis; information about each request is logged, including timestamp, bytes processed, response time and USERID.

To summarize, z/OS Connect is designed to simplify the integration of mobile systems and z/OS assets. Delivering a consistent front-end interface for mobile systems via REST and JSON, z/OS Connect seamlessly integrates with the WAS, CICS and IMS subsystems for data processing. In theory, a developer could code a mobile workload application with no knowledge of the System z platform.

In conclusion, it seems we have to accept the adoption of the smartphone device for processing an ever-increasing number of online transactions, and the realm of possibility exists that online transactions (click) will continue to displace traditional and legacy (brick) transactions. Therefore, as businesses evolve to accommodate mobile transactions, they should strive to reduce their IBM MLC TCO accordingly, delivering JSON and REST applications that can take advantage of optimal cost z/OS MLC software, primarily via the zCAP and MWP pricing mechanisms. z/OS Connect is one such option that simplifies the timely delivery of mobile workload applications.

System z: Optimizing DASD I/O Subsystem Performance

Historically there was a very simple synergy between the IBM S/370 Mainframe and its supporting disk I/O (DASD) subsystem, allowing for Mainframe host connectivity to physical and logical disk devices (I.E. 3390). The analysis and tuning of this I/O subsystem has always been, and continues to be, supported by the SMF Type 7n records via IBM RMF and the BMC CMF alternative. However, over the years, major advances in DASD subsystems and the System z Mainframe server have delivered many layers of technology resources (E.g. Cache, Memory, FICON Channels, RAID Storage, Proprietary Microcode, et al), and these layers have made it more complex to pinpoint DASD I/O subsystem performance problems.

The focus on technology based metrics (E.g. I/O Rate, Response Time, I/O MB/S Bandwidth, et al) has also been complemented by more meaningful, business focussed Service Level Agreements (SLAs). Therefore today’s System z I/O Performance Analyst must gather and act upon proactive, meaningful information from the ever-increasing amounts of performance data available. Put another way, too much data can deliver not enough information! It was forever thus: RMF and CMF have always collected the requisite performance data, and arguably no other data source is required (E.g. OMEGAMON/TMON/SYSVIEW Performance Monitor, SAS/MXG/MICS/WPS Performance Database). RMF/CMF is the ideal data source for thorough and timely System z I/O performance management, where intelligent analytics and expert knowledge are required to present this “Golden Record”.
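
As a trivial illustration of such data reduction, the sketch below assumes the relevant RMF/CMF device activity fields have already been exported to a CSV file with hypothetical column names; it simply summarises the I/O-weighted average response time per volume and flags those exceeding a notional SLA threshold. A real “Golden Record” obviously demands far richer analytics than this.

  import csv
  from collections import defaultdict

  SLA_MS = 1.0  # notional SLA: average DASD response time per volume, in milliseconds

  # Hypothetical export of RMF/CMF device activity data: one row per volume per interval,
  # with columns VOLSER, IO_RATE (I/Os per second) and RESP_MS (response time in ms).
  totals = defaultdict(lambda: [0.0, 0.0])   # VOLSER -> [weighted response time, I/O count]

  with open("rmf_device_activity.csv", newline="") as f:
      for row in csv.DictReader(f):
          ios = float(row["IO_RATE"])
          totals[row["VOLSER"]][0] += float(row["RESP_MS"]) * ios
          totals[row["VOLSER"]][1] += ios

  for volser, (weighted, ios) in sorted(totals.items()):
      if ios == 0:
          continue
      avg = weighted / ios                   # I/O-weighted average response time
      flag = "  <-- exceeds SLA" if avg > SLA_MS else ""
      print(f"{volser}: {avg:.2f} ms over {ios:.0f} I/Os{flag}")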

However, today’s System z Support Teams need a simple and timely presentation of this data, highlighting potential challenges graphically for their Management and allowing for straightforward tracking of SLAs and technology changes (I.E. Software/Hardware Upgrades).

Additionally, Workload Manager (WLM) can manage non-paging DASD I/O requests that are queued because of device busy conditions. The z/OS system can therefore manage I/O priorities in a Sysplex based on WLM service class goals: WLM dynamically adjusts the I/O priority depending on how well a service class is achieving its goal and whether a DASD device can influence the overall performance objectives. For obvious reasons, this WLM function does not micro-manage I/O priorities, only changing a service class period’s I/O priority infrequently. WLM is deployed by many System z users to assist in the automated management of system resources (E.g. CPU, Memory, I/O, et al), based upon Service Level goals.
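
By way of a conceptual illustration only, the following Python fragment is a deliberately simplified toy model of goal-based prioritization, not WLM’s actual algorithm: it computes a basic performance index (achieved versus goal response time) for a few hypothetical service class periods and nominates the most important goal-missing period as a candidate for an I/O priority increase.

  # Toy model only: WLM's real goal/importance processing is far more sophisticated.
  service_classes = [
      # (name, importance 1=highest, goal response time ms, achieved response time ms)
      ("ONLINE_HI", 1, 5.0, 7.5),
      ("ONLINE_LO", 3, 20.0, 18.0),
      ("BATCH",     5, 500.0, 650.0),
  ]

  def performance_index(goal, achieved):
      # PI > 1.0 means the goal is being missed; PI <= 1.0 means it is being met.
      return achieved / goal

  # Pick the most important service class period that is missing its goal.
  missing = [(imp, performance_index(goal, ach), name)
             for name, imp, goal, ach in service_classes
             if performance_index(goal, ach) > 1.0]

  if missing:
      imp, pi, name = sorted(missing)[0]     # lowest importance number first
      print(f"Candidate for higher I/O priority: {name} (importance {imp}, PI {pi:.2f})")
  else:
      print("All service class periods are meeting their goals; no change needed.")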

From a DASD subsystem technology viewpoint, there is no longer an obvious one-to-one direct connection between the Mainframe host and the DASD device. An increasing number of technological advances, both microcode and hardware (E.g. Memory, Fibre Channel, Function Assist Processing, et al), have diminished the requirement to access data directly from the physical device. Put another way, in today’s world of System z servers with multiple cache level CPU chips (I.E. Relative Nest Intensity), massive processor memory resources (I.E. z13 @ 10 TB Memory), high bandwidth Fibre Channel (I.E. FICON, zHPF) subsystems and a hierarchy of DASD memory (I.E. SSD/Flash, Cache), it’s not uncommon to consider an I/O that requires physical device access as a problem! Finally, and most importantly, from a DASD subsystem viewpoint, each of the recognized System z DASD providers, EMC (Symmetrix VMAX), HDS (VSP G1000) and IBM (DS8870), has a highly proprietary DASD subsystem that provides z/OS plug compatibility, but delivers overall I/O performance using its own unique architecture and internal algorithms.

Of course, an over-configured hardware environment will deliver a poor TCO, while an under-configured environment will manifest as SLA issues and bad user experiences; the middle ground delivers the optimal environment. Resource optimization always demands proactive day-to-day management, from both an internal and an external communication viewpoint. With the highly proprietary design features of the IHV DASD subsystems, whether EMC, HDS or IBM, having the right information and identifying the precise problem simplifies the communication process with the IHV. Such communication might highlight an under-provisioned resource (E.g. Memory Capacity), a subsystem setting tweak requirement, either host or subsystem based, or indeed a hardware failure. In today’s world, these issues need to be fixed in minutes or hours, not days or weeks.

Therefore, where does today’s System z I/O Performance Analyst start to collect the required information to safeguard that their DASD subsystem is optimized, both from a capacity and performance viewpoint?

A simplistic viewpoint of an I/O health-check should consider the following:

  • Service Level Agreements (SLA): Are overall objectives being delivered or missed?
  • User Experience: Are users (customers) complaining of poor service or response times?
  • I/O Metric Performance: Are there obvious signs of abnormal performance statistics?

Several decades ago, an overall I/O health check might have been a periodic (E.g. Weekly or longer) activity, whereas today it’s undoubtedly a Business As Usual (BAU) and 24*7 activity. Therefore a fully automated solution is required, built upon the tried and tested System z performance fundamentals, namely RMF or CMF. The ideal solution will perform analytics based data reduction, presenting the right information, at the right time, allowing for intelligent business based communication, both internally, to customers and end users from an SLA viewpoint, and externally, with IHV DASD suppliers, safeguarding optimal performance and TCO.

EADM (Easy Analyze DASD Mainframe) is a solution from Technical Storage that performs automated performance analysis of the z/OS I/O subsystem, delivering predictive analytics for better storage capacity planning and performance measurement. The Technical Storage EADM architects have in excess of 40 years of IBM Mainframe experience, specializing in the I/O subsystem, so it’s no surprise that EADM delivers expert and timely knowledge via an easy-to-use solution.

EADM is an easy-to-install and easy-to-use plug-and-play solution with no proprietary considerations, requiring no additional System z resources (E.g. CPU, Memory, DASD, et al). Installed on Microsoft server platforms and easily virtualized via VMware, Hyper-V, et al, EADM requires no target database for performance data storage. EADM performs a daily health check of the entire System z disk subsystem, working around the clock and delivering customized, automatic, user friendly GUI type reports. For today’s System z technician, the open and IP architecture base of EADM allows for secure remote access via Mobile, Tablet or Laptop devices, as and when required.

Operations and performance teams are alerted as soon as performance variances occur, typically within minutes, assisting in the identification of the underlying root problems causing changes in system behaviour. Incorporating intelligent and meaningful I/O performance indicators, with drill-down and zoom-in ability, storage technicians can determine whether the problem is temporary, permanent, local or global. By simplifying the data reduction process (E.g. RMF/CMF data from numerous LPAR/Sysplex environments), EADM safeguards that the internal technical team can efficiently manage their increasingly complex and large DASD environment, enabling intelligent and timely communications with internal business teams and external suppliers alike.

EADM simplifies the System z I/O subsystem capacity and performance management process, delivering expert reports and timely historical analysis, for example:

  • Automatic daily (24 Hour) analysis of Sysplex wide workload (On-Line TP & Batch) I/O response times
  • Systematic intelligent alerts of early performance variances with exact occurrence time indicators
  • Identification of I/O performance hot-spots with DASD volume and data set level granularity
  • Performance trending at DFSMS Storage Group, Subsystem LCU and DASD volume level
  • DR (E.g. PPRC) simulations to prevent data loss and forecast Data Centre failover scenarios
  • I/O subsystem WLM indicators to determine exactly what impacts performance objectives
  • Full FICON channels and zHPF analysis, incorporating typical I/O throughput indicators
  • HyperPAV and associated LCU indicators to easily balance volumes, optimizing PAV alias allocation
  • Performance monitoring and balancing via intelligent LCU, SSID and I/O analytics
  • DASD capacity usage via DCOLLECT data, comparing assigned vs. allocated vs. actual disk utilization
  • EADM supports System z configurations ranging from entry-level, single or few LPAR environments to complex multiple CPC/LPAR installations

A well provisioned and performing System z I/O subsystem is of vital importance for safeguarding today’s ever increasing storage requirements of mission critical business applications. A poorly performing I/O subsystem will generate unnecessary additional CPU overhead, with tangible TCO impact, in conjunction with potential business impact. Although the advances of the System z server and underlying DASD I/O subsystem can compensate for many application code or data placement issues, the fundamental concepts of analysing and tuning the I/O subsystem remain.

Therefore the savvy and proactive System z customer will safeguard that they find a solution to deliver optimal DASD I/O performance. Without doubt, such an analysis could be performed by a highly-skilled individual, but today’s 21st Century world demands a hybrid of technical and commercial skills. Therefore a solution that incorporates the diagnostic knowledge of the most highly trained technician, performs intelligent analytics on a plethora of Sysplex wide performance data sources and presents the information required, is one that will deliver benefit each and every day. EADM is an example of such a solution, delivering demonstrable System z TCO optimization benefits, while safeguarding a short-term ROI, with simple deployment and resource utilization attributes.

IBM Mainframe Capacity Planning & Software Cost Control Interaction?

The cost of IBM Mainframe software is an extensive subject matter that is multi-faceted and can generate much discussion. The importance of optimizing Mainframe software costs is without doubt, as it is the most significant Mainframe TCO component, having increased from ~25% to 50%+ of overall expenditure in the last decade or so. Conversely, Mainframe server hardware costs have largely stabilized at ~15-25% of TCO in the same time period. However, Mainframe Capacity Planning practices evolved over several decades during which hardware costs were the primary concern and the number of IBM Mainframe software pricing mechanisms was limited. Of course, in the last decade or so, IBM Mainframe software pricing mechanisms have evolved, with a plethora of acronyms, ESSO, ELA, IPLA, OIO, PSLC, WLC, VWLC, AWLC, IWP, naming but a few!

Can each and every IBM Mainframe user clearly articulate their Mainframe Capacity Planning and Software Cost Control policies, and which person in their organization performs these very important roles? Put another way, not forgetting Software Asset Management (SAM), should there be a Software Cost Control specialist for IBM Mainframe Data Centres…

If we consider the traditional Mainframe Capacity Planning role, put very simply, this process typically produces a 3-5 year rolling plan, based upon historical data and future capacity requirements. These requirements can then be modelled against the underlying hardware (E.g. z10, z114/z196, zEC12) server, identifying resource requirements accordingly, namely the number of General Processors (GPs), Specialty Engines (E.g. zIIP, zAAP, IFL), Memory, Channels, et al. Previously, up until ~2005, customer requirements would be articulated to IBM, cross-referenced with LSPR (Large System Performance Reference) data and an optimum hardware configuration derived. Since ~2005, when IBM made their zPCR (Processor Capacity Reference) tool Generally Available, the Mainframe customer has been able to “more accurately” capacity plan for IBM zSeries servers.
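
As a very simple sketch of the hardware-centric part of this process, the following Python fragment projects a 3-5 year MSU requirement from a historical peak and an assumed annual growth rate; all figures are purely illustrative, and in practice the output would be refined with zPCR (and CPU MF data, as discussed below) before selecting a server model.

  # Illustrative 3-5 year rolling capacity projection; all figures are assumptions.
  current_peak_msu = 800        # observed peak utilization from historical SMF/RMF data
  annual_growth    = 0.08       # assumed 8% organic growth per annum
  new_business_msu = 50         # assumed uplift for a new business application
  headroom         = 0.15       # planning headroom for failover and workload spikes

  for year in range(1, 6):
      projected = (current_peak_msu * (1 + annual_growth) ** year) + new_business_msu
      required  = projected * (1 + headroom)
      print(f"Year {year}: projected peak {projected:,.0f} MSU, "
            f"plan for ~{required:,.0f} MSU installed capacity")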

A further enhancement for more accurately determining the ideal zSeries server is sizing based upon actual customer usage data generated by the CPU MF facility, introduced with the z10 server. CPU MF refines the zPCR process with real life customer usage data, as opposed to the standard simulated LSPR workloads.

In summary, the Mainframe Capacity Planning process has evolved to include new tools and data to refine the process, but primarily, the process remains the same, size the hardware based upon historical data and future business requirements. However, what about Mainframe software usage and therefore cost interaction?

Each and every IBM Mainframe user relies heavily on the IBM Operating System (I.E. z/OS, z/VM, z/VSE, zLinux, et al) and primary subsystems (I.E. CICS, DB2, MQ, IMS, et al). Some Mainframe users might deploy alternative database and transaction processing (TP) solutions, but a significant amount of Mainframe software cost is for IBM software products. In the late-1990’s, IBM introduced their PSLC (Parallel Sysplex License Charges), which offered lower aggregate (MSU) pricing for major IBM software products, based upon an eligible configuration (E.g. Resource Sharing). This pricing mechanism had no impact on software cost control, in fact quite the opposite; it was a significant cost benefit to implement PSLC!

In 2000 IBM announced Workload License Charges (WLC), which allowed users to pay for software based upon the workload size, as opposed to the capacity of the machine; thus the first signs of sub-capacity pricing. In 2001, the ability to deploy eligible IBM software on a “pay for what you use” basis arrived, as per the Variable Workload License Charge (VWLC) mechanism. Put very simply, a Rolling 4 Hour Average (R4HA) MSU metric applies for eligible IBM software products, where software is charged based upon the peak R4HA usage during a calendar month. The Mainframe user pays for VWLC software based upon the R4HA or the Defined Capacity (Sub-Capacity vis-à-vis Soft Capping), whichever is lowest.
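
A minimal sketch of the R4HA arithmetic is shown below, assuming MSU consumption samples are available at 5-minute intervals; the billable figure for the month is the peak of this rolling average, capped by any Defined Capacity (soft capping) value. The real calculation is performed from SMF data by IBM’s sub-capacity reporting process, so treat this purely as an illustration of the metric, with made-up numbers.

  # Illustrative R4HA calculation from 5-minute MSU consumption samples.
  WINDOW = 48                                   # 48 x 5-minute samples = 4 hours

  def rolling_4_hour_average(samples):
      """Return the R4HA value at each point once a full 4-hour window exists."""
      r4ha = []
      for i in range(WINDOW, len(samples) + 1):
          r4ha.append(sum(samples[i - WINDOW:i]) / WINDOW)
      return r4ha

  # Hypothetical month of samples: quiet overnight, busy online day shift.
  samples = ([120] * 96 + [450] * 96 + [200] * 96) * 30   # 3 shifts x 30 days

  peak_r4ha = max(rolling_4_hour_average(samples))
  defined_capacity = 400                        # illustrative soft capping value in MSU
  billable_msu = min(peak_r4ha, defined_capacity)
  print(f"Peak R4HA: {peak_r4ha:.0f} MSU, billable: {billable_msu:.0f} MSU")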

For the avoidance of doubt, for the last 10 years or so there has therefore been a mandatory requirement to consider the impact of IBM WLC software costs when performing the Mainframe Capacity Planning activity. One must draw one’s own conclusions as to whether each and every Mainframe user has the skills to know the intricacies of the various software (E.g. IPLA, OIO, PSLC, WLC, et al) pricing models when upgrading their zSeries server.

With the IBM Mainframe Charter in 2003, IBM stated that they would deliver a ~10% technology dividend benefit, loosely meaning that for each new Mainframe technology model (I.E. z9, z10), a ~10% lower MSU rating applied for the same system capacity level when compared with the previous technology. Put another way, a potential ~10% software cost reduction for executing the same workload on a newer technology IBM Mainframe, so encouraging users to upgrade. However, the ~10% software cost reduction is subjective, because a higher installed MSU capacity dictates lower per-MSU software costs…

With the introduction of the z196 and z114 Mainframe servers, the technology dividend was delivered in the form of a new software license charge, AWLC (Advanced Workload License Charges), where the lower software costs only applied if this new pricing model was deployed. A similar story applies for the zEC12 server: the AWLC pricing model is required to benefit from the lower software costs! If these software pricing evolutions were not enough, in 2011 IBM introduced the Integrated Workload Pricing (IWP) mechanism, offering the potential for lower software pricing based upon workload type, namely WebSphere eligible workloads. Finally, and as previously alluded to, as MSU capacity increases, the related cost per MSU for software decreases, so there are many IBM software pricing mechanisms to consider when adding Mainframe CPU capacity. So once again, who is the IBM Mainframe Software Cost Control specialist in your organization?

For sure, each and every IBM Mainframe user will engage their IBM account team as and when they plan a Mainframe upgrade process, but how much “customer thinking is outsourced to IBM” during this process? Wouldn’t it be good if there was an internal “checks & balances” or due diligence activity that could verify and refine the Mainframe Capacity Plan with IBM software cost control intelligence?

Having travelled and worked in Europe for 20+ years, I know my peers, colleagues and friends that I have encountered can concur with my next observation. The English and Americans might come up with a good idea and perhaps product, the French are most likely to test that product to destruction and identify numerous new features, while the Germans will write the ultimate technical manual…

zCost Management are a French company that specializes in cost optimization services and solutions for the IBM Mainframe. From an IBM Mainframe Capacity Planning & Software Cost Control Interaction viewpoint, they have developed their CCP-Tool (Capacity and Cost Planning) software solution. This software product bridges the gap between Mainframe Capacity Planning for hardware and the impact on associated IBM software (E.g. WLC, IPLA, et al) costs.

CCP-Tool facilitates medium-term (E.g. 3-5 year) Mainframe Capacity Planning by controlling Monthly License Charges (MLC) evolution, generating cost control policies, optimizing zSeries (E.g. PR/SM) resource sharing and delivering financial management via IBM Mainframe software cost control activity. CCP-Tool integrates with existing data and activities, using SMF Type 70 & 89 records, defining events (E.g. Capacity Requirements, Workload Moves) in the plan, simulating many options, delivering your final capacity plan and periodically (E.g. Quarterly) reviewing and revising that plan. Most importantly, CCP-Tool deploys many algorithms and techniques aligned to IBM software pricing mechanisms, especially WLC and R4HA related.

Therefore CCP-Tool delivers a financial management framework via a medium-term Capacity Plan with associated software cost control and zSeries (E.g. PR/SM) resource policies. This enables a balanced viewpoint of future Data Centre cost configurations from both a hardware and related IBM Mainframe software viewpoint. Moreover, for those IBM Mainframe users that don’t necessarily have the skills to perform this level of Mainframe cost control, CCP-Tool delivers a low cost solution to empower the Mainframe customer to engage IBM on an equal footing, at least from a reporting viewpoint. Similarly, for those Mainframe users with good IBM Mainframe software cost control skills, CCP-Tool offers a “checks & balances” viewpoint, delivering that all important due diligence sanity check! Quite simply, CCP-Tool simplifies the process of reconciling the optimal configuration both from an IBM Mainframe hardware and related software viewpoint.

Without doubt, if a Mainframe user still deploys a hardware-centric viewpoint of the capacity planning activity, without considering the numerous intricacies of IBM Mainframe software pricing, in most cases this could be a significant cost oversight. Put very simply, a low-end IBM Mainframe user of ~150 MSU (~1,000 MIPS) might spend ~£1,000,000 per annum just for a minimal configuration of z/OS, CICS, COBOL and DB2 software, so one must draw one’s own conclusions regarding the potential cost savings when deploying the optimal zSeries hardware and associated IBM software configuration. I paraphrase Oscar Wilde:

“The definition of a cynic is someone that knows the price of everything, and the value of nothing!”

So, let’s reprise. You have performed your Mainframe Capacity Planning activity, considered historical SMF data for CPU usage, maybe including the R4HA metric, factored in additional new and growth business requirements, refined the capacity plan by using the zPCR tool, perhaps with data input from CPU MF and you now have identified your optimum zSeries Mainframe server?

Maybe you should think again, because the numerous IBM MLC software pricing mechanisms could impact your tried and tested Mainframe CPU hardware planning process. Firstly, for MLC software, the unit cost per MSU reduces as the installed MSU capacity increases; in simple terms, this encourages the use of “large container” processing entities, LPARs and CPCs. Other AWLC and IWP related considerations further encourage the use of major subsystems (E.g. CICS, DB2, WebSphere, IMS) in larger MSU capacity LPARs and CPCs, to benefit from the lowest unit cost per MSU. Additionally, do you really need to run all software on all processing entities? For example, programming languages (E.g. COBOL, PL/I, HLASM, et al) are not necessarily required in all environments (E.g. Test, Development, Production, et al). It is not uncommon for compile and link-edit functions to be processed in Development environments only, while only run-time libraries are required for Production. These “what if” scenarios, generated by the numerous IBM MLC software pricing mechanisms, must be considered, ideally by an internal resource with the requisite skills and experience.
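
To illustrate why this “large container” effect matters, the sketch below compares the MLC bill for one consolidated 600 MSU peak against two separate, non-aggregated 300 MSU peaks, using an entirely hypothetical tiered price table; real AWLC/VWLC price bands and prices differ by product and country, so the numbers are illustrative only, but the shape of the declining unit cost curve is the point.

  # Hypothetical tiered MLC price table: (MSU band upper bound, price per MSU per month).
  # Real AWLC/VWLC tiers and prices differ; only the declining unit cost matters here.
  TIERS = [(3, 500.0), (45, 300.0), (175, 150.0), (315, 100.0), (575, 70.0), (float("inf"), 50.0)]

  def monthly_mlc(msu):
      """Cost of a given billable MSU peak under the hypothetical tiered price table."""
      cost, lower = 0.0, 0
      for upper, price in TIERS:
          band = min(msu, upper) - lower
          if band <= 0:
              break
          cost += band * price
          lower = upper
      return cost

  consolidated = monthly_mlc(600)                     # one large container
  split = monthly_mlc(300) + monthly_mlc(300)         # two separate, non-aggregated peaks
  print(f"Consolidated 600 MSU: {consolidated:,.0f} per month")
  print(f"Two x 300 MSU:        {split:,.0f} per month")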

Today, who is performing this Mainframe Software Cost Control in your organization? Is it an internal resource with the requisite skills, an independent 3rd party, IBM or nobody? One must draw one’s own conclusions as to whether any of the parties that could perform this vital activity has a vested interest, and thus a potential conflict of interest…