How Can We Energize Our Emerging zCommunity?

No doubt we have all experienced that most things in life and business are cyclical; hence expressions such as déjà vu and "those who cannot remember the past are condemned to repeat it"…

For System z, with the glass half-full, there are encouraging signs of pragmatic and collaborative executive leadership from the supplier ecosystem; for example, BMC, Compuware and IBM collaborating on a Standard Software Product Install Methodology For All Vendors. With the glass half-empty, even though there are proven statistics to demonstrate the penetration of System z in global large organizations, there are still some misplaced legacy perceptions of System z among senior executive leaders.

Just as the IBM Mainframe automated business processes several decades ago, introducing IT into the business workplace forever more, we’re currently undergoing another IT revolution: quite simply, an exponential growth in data, typically associated with Cloud, Analytics, Mobile & Social technologies. We should always be mindful that an IT solution should solve a business challenge and/or provide value for a business requirement. Therefore, businesses themselves are best placed to articulate the framework and ultimate size and shape of the solutions delivered by the vendor community.

The IBM Mainframe environment has always benefitted from User Groups that conceptually represent the customer, articulating requirements to IBM for future IBM Mainframe enhancement. For the avoidance of doubt, SHARE in the USA celebrates its 60th anniversary in 2015, while SHARE Europe, the forerunner to GSE, was founded in 1959. These groups are the ideal forums for collecting and articulating user requirements to IBM, for IBM Mainframe and current System z evolution. Without doubt, there has been a resurgence in support for SHARE USA and GSE events in the last decade or so, but from a dispassionate viewpoint, how many IBM Mainframe customers are members of these User Groups?

As previously referenced, the executive leadership of major System z Mainframe vendors are demonstrating a willingness to collaborate. Perhaps now is an ideal time for the System z Mainframe customer to articulate their requirements to the major System z Mainframe vendors?

My admiration for those volunteers that contribute their time, knowledge and passion to User Groups such as SHARE and GSE is without doubt. I’m also positive that these User Groups would welcome the opportunity to represent a larger number of System z end users, which would no doubt generate more end user presentations at conferences, supplemented by generic and business orientated user requirements for System z ecosystem vendors to consider. This can only happen if the end users of the IBM System z Mainframe platform embrace this opportunity to shape the future of the System z Mainframe as it rapidly evolves, both in technological advancement and in an emerging willingness from vendors to collaborate.

Having worked with IBM Mainframes for over 30 years, I’m never surprised by the quality and professionalism of the personnel I encounter at user sites. Knowledge tends to vary with installation size, with all-rounders demonstrating savvy technical and commercial knowledge at smaller capacity installations and Subject Matter Experts (SMEs), typically at larger capacity installations, demonstrating level 3 diagnostic capability. In an ideal world, the executive leadership at these System z Mainframe user sites should also participate in a forum of like-minded peers, allowing them to embrace and value the System z platform. There are certainly such Senior Management streams at SHARE and GSE events, but once again, if the System z end user isn’t a User Group member and/or doesn’t attend these events…

In our real life domestic environments, we can lobby our local government official (Member of Congress/Parliament, MC/MP, et al), allowing for generic or specific representation for all people alike. In theory, in an evolving IT world, there is no reason why a System z Mainframe user can’t lobby a vendor for a user requirement. As always, not one of us is as good as all of us! Therefore, just as System z Mainframe vendors are collaborating, as and when practicable, now is the time for System z Mainframe end users to collaborate, no matter how large or small, for the benefit of all. Given that the forums for collaboration already exist, for example SHARE USA and GSE, System z end users can easily leverage these User Groups to generate a coherent and notable voice.

Wouldn’t it be fantastic if 80%+ of System z Mainframe end users were User Group (E.g. SHARE, GSE) members and several of their technicians and one senior manager attended their local annual conference? The cost, minimal, the value, arguably priceless!

From my own viewpoint, I have recent real-life experience of engaging a major System z vendor, with a commercial user requirement collected from tens of smaller capacity Mainframe users, where said submission is being considered. This is perhaps a brave new world…

zIIP Into The Future: Mainframe Specialty Engines Evolution

Sometimes we might lose sight that change can be evolutionary as opposed to revolutionary and this certainly applies to IBM Mainframe specialty engines, for example:

  • 1997: Internal Coupling Facility (ICF)
  • 2000: Integrated Facility for Linux (IFL)
  • 2004: System z Application Assist Processor (zAAP)
  • 2006: System z Integrated Information Processor (zIIP)

To assist with lower IBM software pricing, the ICF offering arguably became the de facto standard for a Mainframe user to be considered “actively coupled”, that is, deploying two or more eligible IBM Mainframes, physically attached via coupling links to a common Coupling Facility (I.E. ICF).

The Integrated Facility for Linux (IFL) is a processor dedicated to Linux workloads on IBM System z servers.  The IFL is supported by the z/VM virtualization software and the Linux operating system.  Most customers have at least dabbled with this technology, while some are using it extensively, primarily for distributed server consolidation.

Somehow the zAAP specialty engine has become the “black sheep” of the family; the current zEC12 and zBC12 are planned to be the last System z servers to offer support for zAAP specialty engine processors.

As of z/OS V1.11, functionality was delivered enabling zAAP-eligible workloads to run on zIIP engines, allowing both zIIP and zAAP-eligible workloads to be processed on zIIP.  This capability was ideal for customers with insufficient zAAP or zIIP eligible workload to justify a dedicated specialty engine, whereas the combined eligible workloads improve the ROI metrics for zIIP deployment.  The zAAP specialty engine is primarily targeted at web-based applications and SOA-based technologies, namely Java and XML.
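As a minimal sketch of that ROI argument, the Python example below uses purely hypothetical figures (the engine capacity, workload sizes and the 40% “justification” threshold are assumptions, not IBM guidance) to show how combining zAAP-eligible and zIIP-eligible work can justify a single zIIP engine where neither workload alone would:

```python
# Hypothetical figures only: engine capacity, workload sizes and the
# "justification" threshold are illustrative assumptions, not IBM metrics.
ENGINE_CAPACITY_MIPS = 1000          # capacity of one zIIP specialty engine
JUSTIFY_THRESHOLD = 0.40             # assume an engine is worthwhile at >= 40% busy

zaap_eligible_mips = 250             # e.g. Java/XML (formerly zAAP eligible) work
ziip_eligible_mips = 220             # e.g. DB2 and other zIIP eligible work

def engine_justified(eligible_mips):
    """Return True if the eligible workload keeps one engine busy enough."""
    return (eligible_mips / ENGINE_CAPACITY_MIPS) >= JUSTIFY_THRESHOLD

print("zAAP-only case justified:", engine_justified(zaap_eligible_mips))    # False
print("zIIP-only case justified:", engine_justified(ziip_eligible_mips))    # False
print("Combined zAAP-on-zIIP:   ", engine_justified(zaap_eligible_mips
                                                    + ziip_eligible_mips))  # True
```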

So for z/OS type workloads, we must “zIIP Into The Future”…

Sometimes we need to look at the big picture, where the IBM organization is comprised of many business units, including the Mainframe business unit.  The Mainframe business unit itself contains many groups, including, but not limited to, the Hardware and Software groups.

As we all know, z/OS software TCO is significant and so this translates into higher revenues for the IBM Mainframe software group; but what about the IBM Mainframe hardware group?  Perhaps the specialty engines, primarily in the form of zIIP, will generate a revenue stream for this business unit.  Along with the introduction of the zBC12 & zEC12 servers, IBM increased the zIIP to General Purpose (CP) engines ratio to 2:1; meaning you can have two zIIP specialty engines for each associated CP engine.  Previously the maximum ratio allowed was 1:1 (Specialty:CP).

What workloads are zIIP eligible?  Since its introduction in 2006, the amount of workload that is zIIP eligible has increased, primarily due to the software development and upgrade efforts of IBM and the 3rd party ISV community:

  • DB2 for z/OS exploits the zIIP capability for portions of eligible data serving, pureXML and utility workloads
  • Other 3rd party DBMS solutions, including ADABAS & IDMS, offload workload to zIIP
  • Most Systems Management tools (E.g. OMEGAMON, MAINVIEW, RMF, SYSVIEW, et al)
  • z/OS XML System Services for eligible XML validating and non-validating workloads
  • Other z/OS functions, including z/OS Communications Server, Global Mirror, CIM Server, et al

What are the benefits of deploying a zIIP specialty engine?

  • Lower acquisition and maintenance costs, when compared with general CP
  • zIIP engines run at full rated CP speed
  • Offload work (CPU) from General Purpose (CP) engines
  • No cost for Sub-Capacity eligible IBM software (I.E. WLC)

So, one must draw one’s own conclusions, but seemingly the deployment of zIIP engines is a “no brainer”!

Hmmm, once again, evolution is a good thing; the zIIP engine has an 8 year history and its predecessor zAAP a 10 year history.  This ~10 year period has allowed user experiences and IBM function developments to evolve a more stable and rounded offering and, as previously stated, a product for the IBM Mainframe Hardware group to focus upon.

From a customer viewpoint, zIIP deployment requires a Capacity Planning evolution, which should be reasonably straightforward.  The big difference is the CP to zIIP offload consideration, and some of the lessons learned, illustrated with a simple sketch after this list, include:

  • Software costs – Multiple-Processors; CP to zIIP Offload Rate; zIIP utilization
  • Hardware costs – Installed Books (total MSU/MIPS capacity); Additional LPAR(s)
  • Peak CPU utilization – Safeguard that zIIP exploitation reduces peak CPU usage
  • CPU per Transaction – Slight increase in CPU (not necessarily elapsed time) as workload switches from CP to zIIP
  • zIIP utilization – Early experiences indicate ~50% zIIP engine busy is a good number
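As a rough illustration of the CP to zIIP offload and peak utilization considerations above, the sketch below applies an assumed offload rate to an assumed peak demand; all the figures, including the ~50% zIIP busy target, are illustrative assumptions rather than measured data:

```python
# Illustrative capacity figures only; the offload rate and targets are assumptions.
CP_CAPACITY_MIPS = 2000        # installed General Purpose (CP) capacity
ZIIP_CAPACITY_MIPS = 1000      # one zIIP engine (always runs at full rated speed)
PEAK_DEMAND_MIPS = 1800        # observed peak workload demand
ZIIP_OFFLOAD_RATE = 0.30       # assume 30% of the peak demand is zIIP eligible

offloaded = PEAK_DEMAND_MIPS * ZIIP_OFFLOAD_RATE
cp_peak_util = (PEAK_DEMAND_MIPS - offloaded) / CP_CAPACITY_MIPS
ziip_util = offloaded / ZIIP_CAPACITY_MIPS

print(f"Peak CP utilization after offload: {cp_peak_util:.0%}")   # 63%
print(f"zIIP engine utilization:           {ziip_util:.0%}")      # 54%
# Early experiences suggest ~50% zIIP engine busy is a good number, so this
# hypothetical configuration is close to that guideline.
```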

In conclusion, zIIP deployment has been gradual and evolutionary, but many factors indicate that zIIP is here to stay and it is the future.  This seems true from an IBM viewpoint, with benefit for the Mainframe Hardware Group in terms of the eradication of the zAAP engine, the increase in the zIIP:CP ratio to 2:1 and the associated customer benefits of Sub-Capacity software pricing.  From a customer viewpoint, ignoring these pointers might not be wise, as z/OS software costs are significant and CPU resource requirements keep increasing.  Adding extra zIIP CPU capacity reduces hardware and associated software costs and so this is the “no brainer” observation that can’t be ignored for much longer…

Mainframe ISV Software: Is Continuous Product Improvement Always Evident?

Ken Venturi once said “I don’t believe you have to be better than everybody else.  I believe you have to be better than you ever thought you could be”.

Wouldn’t it be great if every CTO and/or Product Manager had this same philosophy for their Mainframe software solution?  One such example I have experienced over the years is (E)JES from Phoenix Software International (PSI).  Of course it’s really important to have Day 1 support for the latest Operating System release, z/OS 2.1 being the latest example, but what about actually exploiting the latest functionality available with the latest zSeries Mainframe Enterprise Servers and z/OS Operating Systems?

To drive maximum bang for your buck, optimal performance and robust cost optimization are only possible by recognizing and exploiting the latest Mainframe function ASAP, as and when appropriate.  Furthermore, listening to your customers, analysing their feedback, actively participating in User Organizations such as SHARE, and so on, will all help in continuous product development and innovation.

Here are some of the reasons why (E)JES has succeeded over a 30+ year period, recognizing and exploiting new z/OS function, as and when the updated z/OS is released for General Availability (GA).  Even today, with Version 5.3 supporting z/OS 2.1 as of day 1, (E)JES continues to offer value-added function for the seasoned, inexperienced and in fact, all IBM Mainframe technicians:

  • 64-bit performance optimizations (I.E. MEMLIMIT: above-the-bar) for both (E)JES client and server components, safeguarding minimal z/OS resource usage.
  • Nearly all (E)JES JES subsystem processing routines are eligible for zIIP redirection, delivering software cost savings for all (E)JES users.  Sub-Capacity System z processor users experience improved (E)JES performance because zIIP engines always run at full speed.  This behaviour differs from that of General Purpose CPs, “throttled” with Sub-Capacity deployments.
  • (E)JES code executes faster via its inbuilt High Performance Routine (HPR) facility, specifically developed to accelerate access to data in JES control blocks.  HPRs have a shorter instruction path length than previous coding techniques, avoiding delays in modern zSeries CPU instruction pipelines.
  • If High Performance FICON (zHPF) is available, (E)JES uses Transport Mode channel programs for JES Spool I/O.  When zHPF is not available, or when a CAS server performs I/O against the global data set, (E)JES uses the highest-performing Command Mode channel programs currently available.  These channel programs perform I/O significantly faster than “ordinary” channel programs.
  • The use of 24-bit (captured) UCBs puts a strain on the 24-bit virtual storage resource.  The use of ordinary (non-extended) TIOT entries puts a limit on the total number of allocations that can exist simultaneously in an address space.  (E)JES supports and uses 31-bit (uncaptured) UCBs and the extended TIOT (XTIOT) function (I.E. NON_VSAM_XTIOT=YES in DEVSUPxx PARMLIB).
  • (E)JES supports placement of JES spool data sets in the cylinder-managed area of an Extended Address Volume (EAV).  Of course, as of z/OS 1.12, EAV increases 3390 DASD capacity to ~1 TB.
  • (E)JES Pattern Utility Matching uses the SRST hardware instruction.  Empirical measurements show this technique is far faster on modern System z processors than alternatives such as the TRT instruction or “brute force” matching techniques using CLI/CLC.

One of the primary benefits of upgrading IBM z/OS software is the overall system performance benefit and associated cost reduction, but of course, IBM can only deliver the function and ability, while it’s incumbent upon the ISV community to upgrade their software products accordingly.  A key goal for any good ISV software product is to try to provide a value-add in the area of performance.  This has been one of the primary areas of focus for (E)JES since its introduction in 1978. 

Most spool display and management products tend to rely on the most resource-intensive interface available, namely the JES subsystem provided SSI 80.  (E)JES benchmarking tests against the most readily-available JES SSI 80 exploiters demonstrate significant CPU savings when deploying (E)JES.

Software products also need to deliver continuous improvements with regard to usability, presentation and in-built function, increasing user and system administrator productivity.  Without doubt, optimization encompasses not just hardware, but software, services, systems management disciplines and “best practices” that tie it all together.  Here are some of the usability enhancements that (E)JES has incorporated:

  • ISPF users running a 3270 emulator on a programmable workstation can now search IBM Eclipse-based InfoCenters via (E)JES.  Although (E)JES fully supports BookManager format documentation, BookManager READ/MVS is now obsolete; beginning with z/OS 2.1, BookManager softcopy books are no longer delivered by IBM.  IBM has stated that InfoCenters, and eventually KnowledgeCenters, are their strategic direction for online documentation.
  • (E)JES Web is a new, browser-based interface to (E)JES.  The associated RESTful API delivering this web enabled technology provides a framework for the creation of Eclipse plug-ins, mobile applications, and other web services clients.  This facility provides a “rapid learning” capability for (E)JES users, both new and old, who might be uncomfortable navigating traditional 3270 interfaces.
  • (E)JES provides a Java Application Programming Interface (API), complementing other in-built APIs for REXX and procedural languages.  By using an (E)JES API, a user can harness the versatility of their preferred programming language to interface and interact with (E)JES.  This support provides an interface to deliver nearly all of the capabilities available to an interactive (E)JES user.
  • (E)JES incorporates context sensitive help function, with point-and-shoot/pop-up dialogs, helping educate users on (E)JES, JES and z/OS while they work.  Users can get pop-up explanations of columns, input choices for unprotected fields, and a list of line commands.  Smart pop-ups explain the contents of certain columns, such as system abend codes.

The latest (E)JES Release Information Manual eloquently details the product enhancements over the last 5 releases or so, providing a good Product Roadmap reference point.

So, whether the ISV software product you deploy has been available for several years or several decades, do you safeguard maximum business benefit for optimal cost by considering:

  • Does the ISV deploy the latest zSeries server (I.E. zBC12, zEC12) for software interoperability and full hardware function exploitation; or an emulation (I.E. zPDT) technique?
  • Does the ISV deliver value-added z/OS related function on Day 1 or even within a year of the latest z/OS release?
  • Does the ISV deliver meaningful function to assist your users deploy said function, while simplifying environment management for system administrators?
  • Does your ISV product optimize cost, with Sub-Capacity pricing in MSU increments and aggregated MSU costs for your entire zSeries Mainframe environment, as opposed to specific workloads (E.g. CPCs, LPARs, et al)?
  • Does your ISV product optimize cost by offloading the majority of its CPU function to zIIP specialty engines, which run at maximum speed, and where software “runs for free”?

Of course, only you can ask and potentially answer these questions during your day-to-day activities of maintaining currency and optimal performance for your Mainframe software portfolio.

Sometimes the hardest questions anybody can ask are the questions they ask themselves, which are never rhetorical questions!  Extracted verbatim from the latest (E)JES Release Information Manual:

Team (E)JES took advantage of the Phoenix Software International zHISR performance analysis product to discover performance “hot spots” in  the (E)JES product.  Sometimes the simplest, least conspicuous piece of code turns out to be a major CPU contributor.  See below for some of the most embarrassing “surprise” hot spots we discovered using zHISR in a z/OS 2.1 LPAR:

  • Over 30% of the CPU used during a Spool Data Browse FIND operation, against a multi-million-line SYSOUT in JES2, turned out to be code that was clearing a record buffer to blanks using MVCL.  This clearing code was eliminated and some minor adjustments were made in other code to compensate for this change.
  • 27% of the CPU used to produce the Activity display in JES2 turned out to be in a routine that manages an internal resource called the “Job Positions Table.”  The algorithm was improved (to work more like its JES3 counterpart) and that routine is no longer a significant CPU contributor.
  • 9% of (E)JES session start-up was a 26-year-old “brute force” prime number generator used to compute the size of a hash table.  That code was totally reworked and now accounts for approximately .02% of session start-up CPU.
  • A 6% performance penalty was observed when sorting a tabular display with a moderate number of rows. The hot spot turned out to be the code that cleared the work area for the sort service to zeros (another MVCL). This overhead was reduced to .04%.

Mea culpa and humility, never a bad thing, but you have to be honest with yourself and ask yourself the right questions!  So going back full circle and quoting Ken Venturi once again, “I don’t believe you have to be better than everybody else.  I believe you have to be better than you ever thought you could be”.  You must draw your own conclusions as to whether such an observation applies to the (E)JES team at Phoenix Software International (PSI)…

Why not ask them yourself?  Ed Jaffe, the (E)JES CTO will be available at the forthcoming UK GSE Annual Conference, 5-6 November 2013, speaking about (E)JES System Management Software: More With Less For Less, For The z/OS Mainframe and z/OS 2.1 User Experiences.

Application Performance Tuning – Why Bother?

With older generations of Mainframe Operating Systems, certainly MVS/XA and perhaps MVS/ESA, application performance tuning was a necessity, not an afterthought.  Quite simply, the cost of Mainframe resources, namely CPU, memory and disk, dictated that your mission critical business application might not perform to business requirements, unless you tuned your programming code.  Programmers, both of the system and application variety, understood the bits and bytes of the available programming languages (E.g. ASM, COBOL, PL/I) and Operating System (I.E. MVS), collaborating either via proactive process or reactive problem solving.  With the continuing reduction of IT hardware component costs, the improvement in Operating Systems (E.g. 64-bit architecture) and newer programming languages (E.g. C, C++), it seems that application performance tuning is somewhat of an afterthought, but at what cost?

We all know that the cost of a Mainframe MIPS is significant, and although it might have reduced dramatically from a hardware viewpoint, from a software viewpoint the cost remains largely static at ~£1,500-£3,500 per year, depending on your configuration.  So if your applications are burning several hundred, if not several thousand, extra MIPS unnecessarily, that’s very expensive indeed!  Additionally and just as importantly, a badly tuned system will manifest itself in slower transaction response times and longer batch jobs, if applicable, which could impact service availability.  So why is there a seeming reluctance to tune business applications, Mainframe resident or not?
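As a back-of-the-envelope illustration of just how expensive, the sketch below prices a hypothetical amount of wasted MIPS at the mid-point of the quoted range; the wasted-MIPS figure is an assumption for illustration only:

```python
# Back-of-the-envelope only: the wasted-MIPS figure is a hypothetical assumption.
SOFTWARE_COST_PER_MIPS_GBP = 2500   # mid-point of the quoted ~£1,500-£3,500 per year
wasted_mips = 300                   # assume untuned applications burn 300 extra MIPS

annual_waste_gbp = wasted_mips * SOFTWARE_COST_PER_MIPS_GBP
print(f"Annual software cost of untuned workload: £{annual_waste_gbp:,}")  # £750,000
```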

If ever there was a functional IT area where the skills gap has never been wider, application performance tuning is that skill, when comparing the salty old sea dog Mainframe dinosaur with the newer Mainframe technician!

From an application development process viewpoint, where does the application performance tuning task live; before or after implementation?  The cynical amongst us will know; if it’s after implementation, there’s a strong likelihood said activity will never be performed!  If it’s before implementation, how many projects incorporate a meaningful stress test, or measure transaction response times versus an SLA or KPI metric?  Additionally, if the project is high-priority and/or running behind schedule, then performance testing is an activity that is easily removed…

Back in the good old days, the late 1980’s to early 1990’s, some application performance tuning tools did start to emerge, most notably Strobe.  Strobe was useful to even the most accomplished system and application programmer personnel, and invaluable to less experienced personnel, and so arguably Strobe became the de facto software tool for tuning Mainframe applications.  However, later releases of MVS (E.g. OS/390 and z/OS) and the non-event that was the Year 2000 (Y2K) seemed to remove the focus on, and importance of, application tuning.

Arguably most important of all, that software MIPS cost item, where Strobe and its competitors (E.g. ASG/BMC TriTune, CA Application Tuner, IBM APA, Macro4 ExpeTune, et al) utilize even more CPU to capture diagnostic trace information, contributed to the demise of application performance tuning.  However, those companies that have undertaken such application tuning activities in the last decade or so are sitting pretty, having reduced the CPU (MIPS) resource consumed, lowering TCO and optimizing performance accordingly.  In the 21st Century, these software solutions are classified as Application Performance Management (APM) solutions.

Is there a better and easier way to stimulate an interest in the application performance tuning discipline?  If the desire exists to tune an application, lowering CPU MIPS usage, optimizing service performance, then the traditional tools and methods mentioned previously exist, but perhaps a new (or not so new) CPU performance data source exists…

With the introduction of the z10 server, a new function, CPU MF (CPU Measurement Facility), was incorporated.  Let’s not forget, z10 is now an n-2 technology, having been superseded by the z196/z114 and the latest zBC12/zEC12 generation of servers.  So each and every committed Mainframe customer should be positioned to benefit from the CPU MF function.

CPU MF provides optional hardware assisted collections of information about logical CPU activity executed over a specified interval in selected Logical Partitions (LPARs).  The CPU MF counters function is intended to be run on a constant basis to collect long-term performance data (I.E. SMF Record 113), in a similar manner to how you collect other performance data.  I have previously briefly discussed how CPU MF SMF data can be used to increase Mainframe Server Capacity Planning efficiencies. 

The CPU MF sampling function is a short duration, precise function that identifies where CPU resources are being used, to help you improve application efficiency.  Put very simply, CPU MF sampling data has minimal CPU overhead (E.g. ~0.1-1.0%) when collecting data (I.E. z/OS Hardware Instrumentation Services – HIS), but this data can then be used to identify CPU “hot spots”, which can then be further analysed to identify the “areas of code” generating the high CPU usage.  However, it was forever thus: whether via an APM tool or CPU MF sampling data, high CPU usage can be identified, but the application programmer must still undertake the task of optimizing the application code!

IBM have done a great job in providing CPU MF counters data, optimizing the Capacity Planning process with the SMF 113 record, and the realm of possibility exists with the sample data, but a software solution is required to analyse and summarize this data.

Currently there are very few, if not only one, software solutions that analyse CPU MF sample data, namely zHISR from Phoenix Software International.  zHISR interfaces directly with z/OS Hardware Instrumentation Services to collect data for hotspot analysis of customer, vendor, or operating system program execution.  zHISR features include:

  • Support for up to 128 simultaneous data collection events.  zHISR collections do not interfere with any HIS functions, including sample or counter collection.
  • System console commands for many zHISR functions.
  • An Application Programming Interface to COBOL and Assembler for starting and stopping data collections. Collection lengths for API generated collections have a time range of one second or more.
  • Ability to schedule a collection with JCL so that collection starts when a given job or step begins.
  • Ability to store data collections as z/OS data sets or UNIX files.
  • Support for collections against CICS/TS transactions.
  • Analysis based on a time range within the collected data for a narrower spotlight on problem code.

An intuitive ISPF dialog allows the user to easily produce a CPU hot spots analysis, which can then be used for identifying the offending code sections.  The user can then drill down and highlight the high CPU CSECT and program offset (instruction), comparing with their Associated Data (ADATA), and thus the source programming instruction.  Therefore the skill required to perform analysis is minimal, as is the CPU overhead in collecting analysis data, and so eradicating the potential barriers when embarking on an application tuning initiative.  Furthermore, the actual cost of deploying the zHISR software is not onerous and so perhaps each and every committed Mainframe user can easily include application performance tuning into their application development lifecycle processes. 

zHISR has a UNIX file system interface that lets you navigate the system and browse or delete files.  With zHISR, users can start and stop hardware event data collections and view the status of the current or prior HIS run.  zHISR also includes a memory display/alter utility that lets you view main storage in the CPU you are logged on to.  If zIIPs are present and zHISR is defined as an authorized subsystem, nearly all of the CPU processing used by zHISR is redirected to a zIIP.

There are also instances, however few and far between, where Mainframe customers have written their own proprietary in-house OLTP (On-Line Transaction Processor) and Relational Database Management Subsystem (RDBMS), where traditional APM software tools can’t provide a solution, only interfacing with underlying subsystems (E.g. Adabas, CICS, DB2, IDMS, WebSphere, et al).  In these instances, CPU MF and zHISR offer a solution to help such customers, who probably face challenges when they upgrade their Mainframe servers, safeguarding software and application code is compatible with the new hardware, and ideally, exploits the latest functionality.

In conclusion, application performance tuning has to be a very important if not mandatory activity for the Mainframe Data Centre.  Whether via CPU MF or traditional APM software solutions, the cost reduction and performance improvement benefits of tuning should be compelling reasons to proactively engage in application tuning activities.  From a skills viewpoint, maybe the KISS (Keep It Simple Stupid) principle can apply, where CPU MF collects the data very simply and efficiently, complemented by zHISR, analysing the data in an intuitive and cost optimized manner.

So turning the subject matter on its head, Application Performance Tuning – Why Bother?  Why not!

Further information can be found from my z/OS Application Performance Tuning presentation, delivered at UK GSE in November 2012.

Data Destruction: Is Your IBM Mainframe Mission Critical Data Always Safe?

Data loss incidents expose businesses and their partners, customers and employees to a plethora of risks and associated problems.  Typically, opportunistic, unauthorized or rogue access to sensitive, personal, confidential and Mission Critical data all too often results in identity theft and competitive business challenges, to name but a few, which adversely impact many areas, including but not limited to:

  • Business reputation and perception
  • Monetary loss via noncompliance penalties and associated litigation
  • Media coverage
  • Personal consumer credit ratings

Unless businesses implement proactive processes to secure data from creation to destruction, vis-à-vis cradle to grave, data loss challenges might ensue.  In fact, millions of individuals are impacted by data loss every year, as criminals increase their sophistication for gaining unauthorized information access.  The increasing dependence on technology and the potential associated collateral damage risk will continue to grow exponentially.  Thus today there is no such thing as the low-risk organization or low-risk personal information and so it follows that business trustworthiness and data security should be a primary concern.

The full and complete destruction and thus secure erasure of data is a mandatory requirement of both Business and Government regulations, in addition to those policies deployed by each and every business.  Regulatory compliance examples include the EU Data Protection Directive, Payment Card Industry Data Security Standard, Sarbanes-Oxley Act, supplemented by many other compliance mandates, encompassing The UK, Europe, The USA and indeed globally.  There are many occasions when data destruction is required, for example:

  • When disks move to another location for reuse or interim storage
  • When a lease agreement matures and disks are returned to the vendor or sold onto the 2nd user market
  • Following a Disaster Recovery (DR) test, where 3rd party disk and tape devices are used for testing purposes
  • The reuse of disk or tape by a different company group
  • Before discarding and thus scrapping disks and tapes that are to leave the Data Centre

Specifically the Payment Card Industry Data Security Standard states:

9.10.2 Render cardholder data on electronic media unrecoverable so that cardholder data cannot be reconstructed; Verify that cardholder data on electronic media is rendered unrecoverable via a secure wipe program in accordance with industry-accepted standards for secure deletion, or otherwise physically destroying the media (for example, degaussing).

Similarly, NIST Special Publication 800-88 (Guidelines for Media Sanitization) states:

Clear; One method to sanitize media is to use software or hardware products to overwrite storage space on the media with non-sensitive data.  This process may include overwriting not only the logical storage location of a file(s) (e.g., file allocation table) but also may include all addressable locations.  The security goal of the overwriting process is to replace written data with random data.  Overwriting cannot be used for media that are damaged or not rewriteable.  The media type and size may also influence whether overwriting is a suitable sanitization method [SP 800-36: Media Sanitizing].

These data erasure (destruction, cleaning, clearing, wiping) methods would be a pre-cursor to supplementary actions such as purging, destroying and disposal of storage media and devices.

Clearly the IBM Mainframe environment is no different to any other; the requirement to safeguard that data is always secure is of paramount importance, as well as being mandatory.  Each and every Mainframe data centre will from time-to-time complete some or all of the following activities:

  • Replace tape media (E.g. upgrade, damage, replacement activity, et al)
  • Replace disk subsystems (E.g. upgrade, end of lease, replacement activity, et al)
  • Disaster Recovery test (E.g. utilize 3rd party Data Centre for data restoration)

The major consideration to safeguard data security in these instances is when the data and/or related storage media is moved off-site, outside of the primary Data Centre infrastructure.  So maybe we should ask ourselves: why is it that when we send data electronically, we safeguard it with encryption, but when the data or related storage media is physically moved outside of the Data Centre, we don’t necessarily apply the same high levels of data security?

There are z/OS guidelines provided for erasing disk data, primarily relating to use of the ICKDSF TRKFMT function with the ERASEDATA and CYCLES parameters.  It is generally accepted that this process is slow and does not erase data to the exacting standard required by regulatory compliance requirements.  Similarly, DFSMSrmm users can use the EDGINERS ERASE function to erase data and even shred encryption keys for cartridge volumes created by high-function cartridge subsystem drives (I.E. TS1120, TS1130), but once again, this process might be considered slow and limited to those users deploying the DFSMSrmm subsystem, when other Tape Management Subsystems are widely deployed (E.g. AutoMedia/ZARA, CA-1, CA-Dynam/TLMS, CONTROL-T, et al).

There are other options available from the ISV market that have been specifically developed to erase data securely, including FDRERASE or SAEerase for disk data and FATS/FATAR for tape data, but wouldn’t it be useful if there was one software product that could erase both disk and tape data for IBM Mainframe environments?

Unlike other competitive solutions that are specialized for one particular storage media, either disk or tape, XTINCT performs a secure data erase for both disk and tape data.  XTINCT meets all the requirements of US Department of Defense 5220.22-M (Clearing and Sanitization Matrix for Clearing Magnetic Disk) by overwriting all addressable locations with a single character.  XTINCT also meets the sanitization requirement by overwriting all addressable locations with a character, its complement, then a random character, followed by final verification.  XTINCT meets the requirements of most users by overwriting the tape and using the hi-speed data security erase patterns.  It should be noted that, for tapes, the DoD only considers degaussing or pulverizing the tape to be a valid erase!
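To make the character/complement/random/verify pattern described above concrete, here is a minimal sketch applying that sequence to an in-memory buffer; it is purely illustrative of the overwrite pattern and in no way models how XTINCT or any other certified tool actually drives device-level I/O:

```python
import os

# Illustrative only: models the overwrite pattern (character, complement,
# random, then verify) on an in-memory buffer, not real device-level erasure.
def sanitize(buffer_len, fill=0x55):
    passes = [
        bytes([fill]) * buffer_len,          # pass 1: a single character
        bytes([fill ^ 0xFF]) * buffer_len,   # pass 2: its complement
        os.urandom(buffer_len),              # pass 3: random data
    ]
    media = bytearray(b"SENSITIVE CARDHOLDER DATA".ljust(buffer_len, b"."))
    for overwrite in passes:
        media[:] = overwrite                 # overwrite every addressable location
    assert media == bytearray(passes[-1])    # final verification pass
    return media

sanitize(64)
```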

In addition to providing a complete audit trail and comprehensive reports to satisfy regulators, XTINCT surpasses NIST guidelines for cleaning and purging data.  XTINCT also satisfies all federal and international requirements including Sarbanes-Oxley Act, HIPAA, HSPD-12, Basel II, Gramm-Leach-Bliley and other data security and privacy laws.

From a resource efficiency viewpoint, XTINCT is re-entrant and fully supports sub-tasking.  Multiple volumes can be processed asynchronously, whereas other tools, like ICKDSF, run serially.  XTINCT makes extensive use of channel programs, so many functions operate at peak efficiency by only using enough CPU time to generate the channel programs, with the rest of the operation being carried out by the channel subsystem.  This dictates that XTINCT does not overly utilize valuable CPU time.

The method chosen to safeguard that Mainframe disk and tape data is securely erased, destroyed, cleaned, purged, and so on, is somewhat arbitrary, whereas deployment of the actual process is required, primarily from a business viewpoint, to protect valuable consumer data in all circumstances, regardless of the mandatory regulatory security requirements.  Ultimately the Mainframe Data Centre just needs to make a minor enhancement to the data management lifecycle model, to guarantee data security in all circumstances, in this instance, when physical data media (I.E. disks, tapes) moves outside of the primary Data Centre location.

IBM Mainframe Capacity Planning & Software Cost Control Interaction?

The cost of IBM Mainframe software is an extensive subject matter that is multi-faceted and can generate much discussion. The importance of optimizing Mainframe software costs is without doubt, as it is the most significant Mainframe TCO component, having increased from ~25% to 50%+ of overall expenditure in the last decade or so. Conversely Mainframe server hardware costs have largely stabilized at ~15-25% of TCO in the same time period. However, Mainframe Capacity Planning activities have evolved over the last several decades, where hardware costs were the primary concern and the number of IBM Mainframe software pricing mechanisms was limited. Of course, in the last decade or so, IBM Mainframe software pricing mechanisms have evolved, with a plethora of acronyms, ESSO, ELA, IPLA, OIO, PSLC, WLC, VWLC, AWLC, IWP, naming but a few!

Can each and every IBM Mainframe user clearly articulate their Mainframe Capacity Planning and Software Cost Control policies, and which person in their organization performs these very important roles? Put another way, not forgetting Software Asset Management (SAM), should there be a Software Cost Control specialist for IBM Mainframe Data Centres…

If we consider the traditional Mainframe Capacity Planning role, put very simply, this process typically produces a 3-5 year rolling plan, based upon historical data and future capacity requirements. These requirements can then be modelled with the underlying hardware (E.g. z10, z114/z196, zEC12) server, identifying resource requirements accordingly, namely number of General Processors (GPs), Specialty Engines (E.g. zIIP, zAAP, IFL), Memory, Channels, et al. Previously, up until ~2005, customer requirements would be articulated to IBM, cross-referenced with LSPR (Large System Performance Reference) and an optimum hardware configuration derived. Since ~2005, IBM made their zPCR (Processor Capacity Reference) tool Generally Available, allowing the Mainframe customer to “more accurately” capacity plan for IBM zSeries servers.

Other enhancements to more accurately determine the ideal zSeries server include sizing based on actual customer usage data generated by the CPU MF facility introduced with the z10 server. CPU MF delivers a refinement when compared with LSPR, refining the zPCR process with real life customer usage data, compared to the standard simulated LSPR workloads.

In summary, the Mainframe Capacity Planning process has evolved to include new tools and data to refine the process, but primarily, the process remains the same, size the hardware based upon historical data and future business requirements. However, what about Mainframe software usage and therefore cost interaction?

Each and every IBM Mainframe user relies heavily on the IBM Operating System (I.E. z/OS, z/VM, z/VSE, zLinux, et al) and primary subsystems (I.E. CICS, DB2, MQ, IMS, et al). Some Mainframe users might deploy alternative database and transaction processing (TP) solutions, but a significant amount of Mainframe software cost is for IBM software products. In the late-1990’s, IBM introduced their PSLC (Parallel Sysplex License Charges), which offered lower aggregate (MSU) pricing for major IBM software products, based upon an eligible configuration (E.g. Resource Sharing). This pricing mechanism had no impact on software cost control, in fact quite the opposite; it was a significant cost benefit to implement PSLC!

In 2000 IBM announced Workload License Charges (WLC), which allowed users to pay for software based upon the workload size, as opposed to the capacity of the machine; thus the first signs of sub-capacity pricing. In 2001, the ability to deploy IBM eligible software on a “pay for what you use” basis was possible, as per the Variable Workload License Charge (VWLC) mechanism. Put very simply, a Rolling 4 Hour Average (R4HA) MSU metric applies for eligible IBM software products, where software is charged based upon the peak MSU usage during a calendar month. The Mainframe user pays for VWLC software based upon the R4HA or Defined Capacity (Sub-Capacity vis-à-vis Soft Capping), whichever is lowest.
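A minimal sketch of the R4HA idea follows, computing a rolling 4 hour average over hypothetical hourly MSU figures and applying an assumed Defined Capacity; the utilization numbers are invented for illustration, and in practice the calculation is driven from SMF data via IBM’s Sub-Capacity Reporting Tool (SCRT):

```python
# Hypothetical hourly average MSU consumption; real figures come from SMF data.
hourly_msu = [310, 295, 330, 350, 420, 455, 470, 440, 430, 410, 380, 360]
WINDOW_HOURS = 4               # rolling 4 hour average (R4HA)
DEFINED_CAPACITY = 400         # assumed soft-capping (Defined Capacity) value

r4ha = [sum(hourly_msu[i - WINDOW_HOURS + 1:i + 1]) / WINDOW_HOURS
        for i in range(WINDOW_HOURS - 1, len(hourly_msu))]

peak_r4ha = max(r4ha)                            # the monthly peak drives the charge
billable_msu = min(peak_r4ha, DEFINED_CAPACITY)  # pay the lower of R4HA peak or cap
print(f"Peak R4HA: {peak_r4ha:.1f} MSU, billable: {billable_msu:.1f} MSU")
```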

From this point forward, and for the avoidance of doubt, for the last 10 years or so, there has been a mandatory requirement to consider the impact of IBM WLC software costs, when performing the Mainframe Capacity Planning activity. One must draw one’s own conclusions as to whether each and every Mainframe user has the skills to know the intricacies of the various software (E.g. IPLA, OIO, PSLC, WLC, et al) pricing models, when upgrading their zSeries server.

With the IBM Mainframe Charter in 2003, IBM stated that they would deliver a ~10% technology dividend benefit, loosely meaning that for each new Mainframe technology model (I.E. z9, z10), a lower MSU rating of ~10% applied for the same system capacity level, when compared with the previous technology. Put another way, a potential ~10% software cost reduction for executing the same workload on a newer technology IBM Mainframe, so encouraging users to upgrade. However, the ~10% software cost reduction is subjective, because a higher installed MSU capacity dictates lower per MSU software costs…

With the introduction of the z196 and z114 Mainframe servers, the technology dividend was delivered in the form of a new software license charge, AWLC (Advanced Workload License Charges), where lower software costs only applied if this new pricing model was deployed. A similar story applies for the zEC12 server: the AWLC pricing model is required to benefit from the lower software costs! If these software pricing evolutions were not enough, in 2011 IBM introduced the Integrated Workload Pricing (IWP) mechanism, offering potential for lower software pricing based upon workload type, namely a WebSphere eligible workload. Finally, and as previously alluded to, as MSU capacity increases, the related cost per MSU for software decreases, so there are many IBM software pricing mechanisms to consider when adding Mainframe CPU capacity. So once again, who is the IBM Mainframe Software Cost Control specialist in your organization?

For sure, each and every IBM Mainframe user will engage their IBM account team as and when they plan a Mainframe upgrade process, but how much “customer thinking is outsourced to IBM” during this process? Wouldn’t it be good if there was an internal “checks & balances” or due diligence activity that could verify and refine the Mainframe Capacity Plan with IBM software cost control intelligence?

Having travelled and worked in Europe for 20+ years, I know my peers, colleagues and friends that I have encountered can concur with my next observation. The English and Americans might come up with a good idea and perhaps product, the French are most likely to test that product to destruction and identify numerous new features, while the Germans will write the ultimate technical manual…

zCost Management are a French company that specializes in cost optimization services and solutions for the IBM Mainframe. From an IBM Mainframe Capacity Planning & Software Cost Control Interaction viewpoint, they have developed their CCP-Tool (Capacity and Cost Planning) software solution. This software product bridges the gap between Mainframe Capacity Planning for hardware and the impact on associated IBM software (E.g. WLC, IPLA, et al) costs.

CCP-Tool facilitates medium-term (E.g. 3-5 year) Mainframe Capacity Planning by controlling Monthly License Charges (MLC) evolution, generating cost control policies, optimizing zSeries (E.g. PR/SM) resource sharing and delivering financial management via IBM Mainframe software cost control activity. CCP-Tool integrates with existing data and activities, using SMF Type 70 & 89 records, defining events (I.E. Capacity Requirements, Workload Moves) in the plan, simulating many options, delivering your final capacity plan and periodically (I.E. Quarterly) reviewing and revising the plan. Most importantly, CCP-Tool deploys many algorithms and techniques aligned to IBM software pricing mechanisms, especially WLC and R4HA related.

Therefore CCP-Tool delivers a financial management framework via a medium-term Capacity Plan with associated software cost control and zSeries (E.g. PR/SM) resource policies. This enables a balanced viewpoint of future Data Centre cost configurations from both a hardware and related IBM Mainframe software viewpoint. Moreover, for those IBM Mainframe users that don’t necessarily have the skills to perform this level of Mainframe cost control, CCP-Tool delivers a low cost solution to empower the Mainframe customer to engage IBM on an equal footing, at least from a reporting viewpoint. Similarly, for those Mainframe users with good IBM Mainframe software cost control skills, CCP-Tool offers a “checks & balances” viewpoint, delivering that all important due diligence sanity check! Quite simply, CCP-Tool simplifies the process of reconciling the optimal configuration both from an IBM Mainframe hardware and related software viewpoint.

Without doubt, if a Mainframe user still deploys a hardware centric viewpoint of the capacity planning activity, without considering the numerous intricacies of IBM Mainframe software pricing, in most cases, this could be a significant cost oversight. Put very simply, a low-end IBM Mainframe user of ~150 MSU (1,000 MIPS) might spend ~£1,000,000 per annum, just for a minimal configuration of z/OS, CICS, COBOL and DB2 software, so one must draw one’s own conclusions regarding the potential cost savings, when deploying the optimal zSeries hardware and associated IBM software configuration. I paraphrase Oscar Wilde:

“The definition of a cynic is someone that knows the price of everything, and the value of nothing!”

So, let’s reprise. You have performed your Mainframe Capacity Planning activity, considered historical SMF data for CPU usage, maybe including the R4HA metric, factored in additional new and growth business requirements, refined the capacity plan by using the zPCR tool, perhaps with data input from CPU MF and you now have identified your optimum zSeries Mainframe server?

Maybe you should think again, because the numerous IBM MLC software pricing mechanisms could impact your tried and tested Mainframe CPU hardware planning process. Firstly, for MLC software, the unit cost per MSU reduces as the installed MSU capacity increases. In simple terms, this encourages the use of “large container” processing entities, LPARs and CPCs. Other AWLC and IWP related considerations further encourage the use of major subsystems (E.g. CICS, DB2, WebSphere, IMS) in larger MSU capacity LPARs and CPCs to benefit from the lowest unit cost per MSU. Additionally, do you really need to run all software on all processing entities? For example, programming languages (E.g. COBOL, PL/I, HLASM, et al) are not necessarily required in all environments (E.g. Test, Development, Production, et al). It is not uncommon for compile and link-edit functions to be processed in Development environments only, while only run-time libraries are required for Production. These “what if” scenarios generated by the numerous IBM MLC software pricing mechanisms must be considered, ideally by an internal resource with the requisite skills and experience.
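As a simple illustration of the “large container” effect, the sketch below compares a consolidated workload against the same workload split across two smaller processing entities, using an invented tiered price-per-MSU table; the tier boundaries and prices are assumptions for illustration only and bear no relation to IBM’s actual MLC price points:

```python
# Invented tiered MSU pricing purely for illustration; real MLC tiers and prices differ.
PRICE_TIERS = [(50, 120.0), (175, 90.0), (10**9, 60.0)]  # (tier upper bound MSU, £/MSU/month)

def monthly_cost(msu):
    """Price an MSU quantity through the hypothetical declining tiers."""
    cost, lower = 0.0, 0
    for upper, price in PRICE_TIERS:
        in_tier = max(0, min(msu, upper) - lower)
        cost += in_tier * price
        lower = upper
    return cost

consolidated = monthly_cost(400)                 # one "large container" of 400 MSU
split = monthly_cost(200) + monthly_cost(200)    # the same 400 MSU over two entities
print(f"Consolidated: £{consolidated:,.0f}/month, split: £{split:,.0f}/month")
# The consolidated configuration is cheaper because more of its MSUs fall into
# the lower-priced tiers, mirroring the "large container" incentive above.
```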

Today, who is performing this Mainframe Software Cost Control in your organization? Is it an internal resource with the requisite skills, an independent 3rd party, IBM or nobody? One must draw one’s own conclusions as to whether any of these parties who could perform this vital activity has a vested interest or not, and thus a potential conflict of interests…