System z MLC Pricing Increases: Look After The Pennies…

Recently IBM announced ~4% price increases in z/OS Monthly License Charges (MLC) for selected Operating System and Middleware software programs and associated features. Specifically, price increases will apply to the VWLC, AWLC, EWLC, AEWLC, PSLC, FWLC and TWLC pricing metrics. Notably, SDSF price increases will be ~20%, with Advanced Function Printing (AFP) product price increases of ~13-24%. In a global economy where inflation rates for the USA and Western Europe are close to 0%, one must draw one’s own conclusions accordingly. Let’s not forget that product version changes typically have an associated price increase. From a contractual viewpoint, IBM only have to provide 90 days advance notice for such price changes; in this instance, IBM provided 150+ days advance notice.

Price increases are inevitable and, as always, it’s better to be proactive as opposed to reactive to such changes. The old proverbs make good sense and in this instance, “look after the Pennies and the Pounds will look after themselves”! This periodic IBM price increase, while inevitable, is not the underlying issue for controlling System z software costs. For many years, since 1994 to be precise, when IBM introduced Parallel Sysplex License Charges (PSLC), the need for IBM Mainframe users to minimize MSU usage has been of high if not critical importance. Nothing has changed in this 20+ year period and even though IBM has introduced Sub-Capacity pricing and specialty engines to minimize chargeable MSU usage, has each and every System z user optimized their MSU usage? Ideally this would not be a rhetorical question, but rather a “Golden Rule”, where despite organic CPU capacity increases of ~10% per annum, a System z environment could maintain near static IBM MLC software costs.

I have written several blog entries and presented on this subject matter over the years.

The simple bottom line is that System z MLC software accounts for ~20-35% of the overall System z TCO, typically being the #1 expenditure item. For that reason alone, it’s incumbent upon each and every System z user to safeguard that they have the technical and commercial skills in place to manage this cost item, not as an afterthought, but built into each and every System z process, from application design through to that often neglected discipline, application tuning.

Many System z organizations might try to draw a fine distinction between System and Application tuning, but such a “not my problem” attitude is not acceptable and imposes a significant financial burden on each and every organization.

A dispassionate and pragmatic approach is required for optimizing System z CPU usage. In that context, let’s examine the ~20% SDSF price increase. IBM will quite rightly state that in conjunction with their z/OS 2.2 release, there are significant SDSF product function advancements, including zIIP offload, REXX interoperability and increased information delivery. However, are such function improvements over and above the norm, or are they Business As Usual (BAU) product improvements, which should be included in the Service & Support (S&S) or Monthly License Charges (MLC) already paid for the software?

In October 2013 I wrote a blog entry; Mainframe ISV Software: Is Continuous Product Improvement Always Evident? The underlying message was that an ISV should deliver the best product they can, for each and every release, without necessarily increasing software costs. In this particular instance, the product was an SDSF equivalent, namely (E)JES, which many years ago delivered all of the function incorporated in SDSF for z/OS 2.2, but for a fraction of the cost…

As of 1 November 2015, IBM will start billing cycles for Country Multiplex Pricing (CMP), which requires the October 2015 version of SCRT, namely V23R10. A Multiplex is defined as a collection of all System z servers in one country, measured as one System z server for software sub-capacity reporting. Sub-Capacity program utilization peaks across the Multiplex will be measured, as opposed to separate peaks by System z servers. CMP also provides the flexibility to move and run workloads anywhere with the elimination of Sysplex aggregation pricing rules.

Migrating to CMP is focussed on CPU capacity growth and flexibility going forward. Therefore System z users should not expect price reductions for their existing workloads upon CMP deployment. Indeed there are CMP deployment considerations. A CMP MSU baseline (base) needs to be established, where an MSU Base and associated MLC Base Factor are set for each sub-capacity MLC product and each applicable feature code. These MSU and MLC bases represent the previous 3 Month averages reported by SCRT before commencing CMP. Quite simply, to gain the most from CMP, the System z user must safeguard that their Rolling 4 Hour Average (R4HA) for each and every MLC product is optimized before setting the CMP baseline; otherwise CMP related cost savings going forward are likely to be nil.
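
The R4HA is simply a rolling four-hour average of MSU consumption, where the monthly peak of that rolling average, rather than any instantaneous peak, drives sub-capacity MLC charges. As a minimal sketch, assuming five-minute MSU samples already extracted from SMF data, the following Java illustrates that arithmetic; it is an illustration only, not the actual SCRT implementation, and the sample figures are hypothetical:

  import java.util.ArrayDeque;
  import java.util.Deque;

  /** Illustrative R4HA calculation from 5-minute MSU samples (not actual SCRT logic). */
  public class R4haSketch {

      /** Returns the peak rolling four-hour average MSU across the supplied samples. */
      static double peakR4ha(double[] msuSamples) {
          final int windowSize = 48;                 // 48 x 5-minute samples = 4 hours
          Deque<Double> window = new ArrayDeque<>();
          double windowSum = 0.0, peak = 0.0;
          for (double msu : msuSamples) {
              window.addLast(msu);
              windowSum += msu;
              if (window.size() > windowSize) {
                  windowSum -= window.removeFirst(); // slide the 4-hour window forward
              }
              if (window.size() == windowSize) {
                  peak = Math.max(peak, windowSum / windowSize);
              }
          }
          return peak;
      }

      public static void main(String[] args) {
          // Hypothetical day: steady 300 MSU baseline with a sustained 4-hour 900 MSU spike.
          double[] samples = new double[288];        // 288 x 5 minutes = 24 hours
          for (int i = 0; i < samples.length; i++) {
              samples[i] = (i >= 120 && i < 168) ? 900.0 : 300.0;
          }
          System.out.printf("Peak R4HA: %.1f MSU%n", peakR4ha(samples));
      }
  }

The same smoothing explains why workload scheduling matters: shaving or shifting a sustained four-hour peak lowers the chargeable MSU figure, whereas a short spike is largely averaged away.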

From a very high-level management viewpoint, we must observe that IBM are a commercial organization, and although IBM provide mechanisms for controlling cost going forward, only the System z user can optimize System z MLC cost for their organization. Arguably with CMP, Soft-Capping is no longer just a consideration; it’s mandatory.

Put very simply, each and every System z user can safeguard that they look after the Pennies (Cents) and the Pounds (Euros, Dollars) will look after themselves by paying careful attention to System z MLC software costs. Setting a baseline of System z MLC costs is mandatory, whether for the first time, or to set a new baseline for CMP deployment. Maintaining or lowering this System z MLC cost baseline should or arguably must be the objective going forward, even when considering 10% organic CPU growth, each and every year. System z decision-makers and managers must commit to such an objective and safeguard the provision of adequately skilled personnel to optimize such a considerable TCO cost line item (I.E. MLC @ ~20-35% of System z TCO). In an ecosystem with technical resources including DBA, Systems Programmer, Capacity Planner, Application Personnel, Performance Tuning, et al, why wouldn’t there be a specialist Software Cost Manager?

Let’s consider how even an inexperienced System z user can maintain a baseline of System z MLC costs, even with organic CPU capacity growth of 10% per annum:

  • System z Server Upgrade: Higher specification CPU chips or Technology Transition Offering (TTO) pricing metrics deliver 10%+ cost per MSU benefits.
  • System z Specialty Engines: Over time, more and more application workload can be offloaded to zIIP processors, with no sub-capacity MLC software charges.
  • System z Software Version Upgrades: Major subsystems such as CICS, DB2, IMS, MQSeries and WebSphere deliver opportunities to lower cost per MSU; safeguard that such functions are exploited.
  • Application Tuning: Whether SQL, COBOL, Java, et al, or the overall I/O subsystem, safeguard that the latest programming techniques and I/O subsystem functions are exploited.
  • New Application Deployment: As and when possible, deploy new or convert existing workloads to benefit from the optimal MLC pricing metric; previously zNALC, nowadays zCAP.
  • Technical & Commercial Skills Currency: Safeguard personnel have the latest System z software pricing knowledge, ideally from an independent 3rd party such as Watson & Walker.
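
As a worked example of the arithmetic behind the list above, this minimal sketch projects monthly MLC expenditure over five years of 10% organic MSU growth, with and without a compensating 10% annual cost per MSU optimization; the baseline MSU figure and the $750 per MSU rate are hypothetical:

  /** Hypothetical projection: 10% annual MSU growth vs. 10% cost-per-MSU optimization. */
  public class MlcProjectionSketch {
      public static void main(String[] args) {
          double baseMsu = 1000.0;       // hypothetical chargeable MSU baseline
          double costPerMsu = 750.0;     // hypothetical $ per MSU per month
          for (int year = 0; year <= 5; year++) {
              double msu = baseMsu * Math.pow(1.10, year);              // 10% organic growth
              double unmanaged = msu * costPerMsu;                      // no optimization
              double managed = msu * costPerMsu * Math.pow(0.90, year); // 10% yearly gain
              System.out.printf("Year %d: unmanaged $%,.0f vs. managed $%,.0f per month%n",
                                year, unmanaged, managed);
          }
          // Since 1.10 x 0.90 = 0.99, a 10% yearly cost per MSU improvement slightly
          // more than offsets 10% organic growth, keeping the MLC baseline near static.
      }
  }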

In conclusion, as householders we have the opportunity to optimize our cost expenditure, choosing and switching between various major cost items such as financial, utility and vehicle products. As System z users, we don’t have that option; only IBM provide System z servers and the associated base architecture, namely the most expensive MLC software products, z/OS, CICS, DB2, IMS and WebSphere/MQ. However, just as we manage our domestic budgets, reducing power usage, optimizing vehicle TCO and getting more bang for our buck from various financial products, we can and must deliver this same due diligence for our System z MLC TCO. With industry averages of ~$500-$1000 per MSU for z/OS MLC software and associated annual expenditure measured in many millions, why wouldn’t any System z user look to deliver 10%+ cost per MSU optimization, year-on-year for their organization?

Clearly the cost of doing nothing in this instance is significant, measured in millions, each and every year. Hence for System z MLC TCO optimization, looking after the Pennies is more than worthwhile, while the associated benefit of the Pounds, Euros or Dollars looking after themselves is arguably priceless.

System z: Optimizing DASD I/O Subsystem Performance

Historically there was a very simple synergy between the IBM S/370 Mainframe and its supporting disk I/O (DASD) subsystem, providing connectivity from the Mainframe host to physical and logical disk devices (I.E. 3390). The analysis and tuning of this I/O subsystem has always been and continues to be supported by the SMF Type 7n records, via IBM RMF and the BMC CMF alternative. However, over the years, major advances in DASD subsystems and the System z Mainframe server have delivered many layers of technology resources (E.g. Cache, Memory, FICON Channels, RAID Storage, Proprietary Microcode, et al), and this has introduced complexities into highlighting DASD I/O subsystem performance problems.

Technology based metrics (E.g. I/O Rate, Response Time, I/O MB/S Bandwidth, et al) have also been complemented with more meaningful business focussed Service Level Agreements (SLA). Therefore today’s System z I/O Performance Analyst must gather and act upon proactive, meaningful information from the ever-increasing amounts of performance data available. Put another way, too much data can deliver not enough information! As previously stated, it was forever thus; RMF and CMF have always collected the requisite performance data and arguably no other data source is required (E.g. OMEGAMON/TMON/SYSVIEW Performance Monitor, SAS/MXG/MICS/WPS Performance Database). RMF/CMF is the ideal data source for thorough and timely System z I/O performance management, where intelligent analytics and expert knowledge are required to present this “Golden Record”.
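
For context, RMF and CMF device activity data (SMF Type 74) decompose DASD I/O response time into IOSQ, pending, disconnect and connect components. The following is a minimal sketch of that arithmetic, assuming the per-volume component times have already been extracted from the SMF records; the volume names and flagging thresholds are hypothetical:

  /** Illustrative DASD response time decomposition from RMF/CMF-style component times. */
  public class DasdResponseSketch {

      /** Component times in milliseconds, as reported per device by RMF/CMF. */
      record DeviceActivity(String volser, double iosq, double pend, double disc, double conn) {
          double responseTime() {
              return iosq + pend + disc + conn;  // classic z/OS response time model
          }
      }

      public static void main(String[] args) {
          DeviceActivity[] devices = {
              new DeviceActivity("PROD01", 0.1, 0.2, 0.3, 0.4),
              new DeviceActivity("PROD02", 2.5, 0.2, 4.1, 0.5),  // hypothetical problem volume
          };
          for (DeviceActivity d : devices) {
              // High IOSQ suggests queueing (E.g. PAV alias shortage); high disconnect
              // suggests cache misses or synchronous remote copy delays.
              String flag = (d.iosq() > 1.0 || d.disc() > 2.0) ? "  <-- investigate" : "";
              System.out.printf("%s: %.1f ms (IOSQ %.1f, PEND %.1f, DISC %.1f, CONN %.1f)%s%n",
                                d.volser(), d.responseTime(), d.iosq(), d.pend(), d.disc(),
                                d.conn(), flag);
          }
      }
  }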

However, today’s System z Support Teams need simple and timely presentation of the data, highlighting potential challenges, graphically presented for their Management, allowing for simple tracking of SLAs and technology changes (I.E. Software/Hardware Upgrades).

Additionally, Workload Manager (WLM), deployed by many System z users to assist in the automated management of system resources (E.g. CPU, Memory, I/O, et al) based upon Service Level goals, can control non-paging queued DASD I/O requests, based upon device busy conditional processing. Therefore the z/OS system can manage I/O priorities in a Sysplex, based on WLM service class goals. WLM dynamically adjusts the I/O priority based on service class goal performance and whether a DASD device can influence the overall performance objectives. For obvious reasons, this WLM function does not micro-manage I/O priorities, only changing a service class period’s I/O priority infrequently.
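
To make this goal-based principle concrete, consider a purely conceptual sketch built around the WLM notion of a Performance Index (achieved versus goal, where a value above 1 means the goal is being missed). This illustrates the principle only; it is not IBM’s actual WLM algorithm, and the tolerance bounds and priority range are hypothetical:

  /** Conceptual sketch of goal-based I/O priority adjustment (not the actual WLM algorithm). */
  public class WlmIoPrioritySketch {

      /** Performance Index: achieved response time divided by goal; PI > 1 misses the goal. */
      static double performanceIndex(double achievedMs, double goalMs) {
          return achievedMs / goalMs;
      }

      /** Nudges a service class period's I/O priority infrequently, within fixed bounds. */
      static int adjustIoPriority(int currentPriority, double pi) {
          if (pi > 1.2) return Math.min(currentPriority + 1, 255); // missing goal: raise priority
          if (pi < 0.8) return Math.max(currentPriority - 1, 0);   // overachieving: release resource
          return currentPriority;                                   // within tolerance: leave alone
      }

      public static void main(String[] args) {
          int priority = 128;
          double pi = performanceIndex(18.0, 10.0);  // hypothetical OLTP period missing its goal
          System.out.printf("PI %.1f: I/O priority %d -> %d%n",
                            pi, priority, adjustIoPriority(priority, pi));
      }
  }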

From a DASD subsystem technology viewpoint, there is no longer an obvious one-to-one direct connection between the Mainframe host and DASD device. An increasing number of technological advances, both microcode and hardware (E.g. Memory, Fibre Channel, Function Assist Processing, et al), have diminished the requirement for data access directly from the physical device. Put another way, in today’s world of System z servers with multiple cache level CPU chips (I.E. Relative Nest Intensity), massive and multiple processor memory resources (I.E. z13 @ 10 TB Memory), high bandwidth Fibre Channel (I.E. FICON, zHPF) subsystems and a hierarchy of DASD memory (I.E. SSD/Flash, Cache), it’s not uncommon to consider an I/O that requires physical device access as a problem! Finally and most importantly, from a DASD subsystem viewpoint, each of the recognized System z DASD providers, EMC (Symmetrix VMAX), HDS (VSP G1000) and IBM (DS8870), has a highly proprietary DASD subsystem that provides z/OS plug compatibility, but delivers overall I/O performance using its own unique architecture and internal algorithms.

Of course, an over configured hardware environment will deliver a poor TCO, while an under configured environment will manifest in SLA issues and bad user experiences, where the middle-ground always delivers the optimal environment. Resource optimization always demands proactive day-to-day management, from an internal and indeed external communication viewpoint. With the highly proprietary design features of the IHV DASD subsystems, whether EMC, HDS or IBM, having the right information and identifying the precise problem, simplifies the communication process with the IHV. Such communication might highlight a resource under provision (E.g. Memory Capacity), a subsystem setting tweak requirement, either host or subsystem based, or indeed a hardware failure. In today’s world, these issues need to be fixed in minutes or hours, not days or weeks.

Therefore, where does today’s System z I/O Performance Analyst start to collect the required information to safeguard that their DASD subsystem is optimized, both from a capacity and performance viewpoint?

A simplistic viewpoint of an I/O health-check should consider the following:

  • Service Level Agreements (SLA): Are overall objectives being delivered or missed?
  • User Experience: Are users (customers) complaining of poor service or response times?
  • I/O Metric Performance: Are there obvious signs of abnormal performance statistics?

Several decades ago, an overall I/O health check might have been a periodic (E.g. Weekly or longer) activity, whereas today it’s undoubtedly a Business As Usual (BAU) and 24*7 activity. Therefore a fully automated solution is required, built upon the tried and tested System z performance fundamentals, namely RMF or CMF. The ideal solution will perform analytics based data reduction, presenting the right information, at the right time, allowing for intelligent business based communication, both internally, to customers and end users from an SLA viewpoint, and externally, with IHV DASD suppliers, safeguarding optimal performance and TCO.

EADM (Easy Analyze DASD Mainframe) is a solution from Technical Storage that performs automated performance analysis of the z/OS I/O subsystem, delivering predictive analytics for better storage capacity planning and performance measurement. The Technical Storage EADM architects have in excess of 40 years of IBM Mainframe experience, specializing in the I/O subsystem, and so it’s no surprise that EADM delivers expert and timely knowledge via an easy-to-use solution.

EADM is an easy-to-install and easy-to-use plug-and-play solution that has no proprietary considerations, requiring no additional System z resources (E.g. CPU, Memory, DASD, et al). Installed on Microsoft server platforms, EADM is easily virtualized via VMware, Hyper-V, et al, requiring no target database for performance data storage. EADM performs a daily health check of the entire System z disk subsystem. EADM works around the clock, delivering customized and automatic user friendly GUI type reports. For today’s System z technician, the open and IP architecture base of EADM allows for secure remote access via Mobile, Tablet or Laptop devices, as and when required.

Operations and performance teams are alerted as soon as performance variances occur, typically in minutes, assisting in the identification of the underlying root problems causing changes in system behaviour. Incorporating intelligent and meaningful I/O performance indicators, with drill-down and zoom-in ability, storage technicians can determine whether the problem is temporary, permanent, local or global. By simplifying the data reduction process (E.g. RMF/CMF data from numerous LPAR/Sysplex environments), EADM safeguards that the internal technical team can efficiently manage their increasingly complex and large DASD environment, for intelligent and timely communications with internal business teams and external suppliers alike.

EADM simplifies the System z I/O subsystem capacity and performance management process, delivering expert reports and timely historical analysis, for example:

  • Automatic daily (24 Hour) analysis of Sysplex wide workload (On-Line TP & Batch) I/O response times
  • Systematic intelligent alerts of early performance variances with exact occurrence time indicators
  • Identification of I/O performance hot-spots with DASD volume and data set level granularity
  • Performance trending at DFSMS Storage Group, Subsystem LCU and DASD volume level
  • DR (E.g. PPRC) simulations to prevent data loss and forecast Data Centre failover scenarios
  • I/O subsystem WLM indicators to determine exactly what impacts performance objectives
  • Full FICON channels and zHPF analysis, incorporating typical I/O throughput indicators
  • HyperPAV and associated LCU indicators to easily balance volumes, optimizing PAV alias allocation
  • Performance monitoring and balancing via intelligent LCU, SSID and I/O analytics
  • DASD capacity usage via DCOLLECT data, comparing assigned vs. allocated vs. actual disk utilization
  • EADM supports entry-level (several LPAR) and complex (multiple CPC/LPAR) System z configurations

A well provisioned and performing System z I/O subsystem is of vital importance for safeguarding today’s ever increasing storage requirements of mission critical business applications. A poorly performing I/O subsystem will generate unnecessary and extra CPU overhead, with tangible TCO impact, in conjunction with potential business impact. Although the advances of the System z server and underlying DASD I/O subsystem can compensate for many application code or data placement issues, the fundamental concepts of analysing and tuning the I/O subsystem remain.

Therefore the savvy and proactive System z customer will safeguard that they find a solution to deliver optimal DASD I/O performance. Without doubt, such an analysis could be performed by a highly-skilled individual, but today’s 21st Century world demands a hybrid of technical and commercial skills. Therefore a solution that incorporates the diagnostic knowledge of the most highly trained technician, performs intelligent analytics on a plethora of Sysplex wide performance data sources and presents the information required, is one that will deliver benefit each and every day. EADM is an example of such a solution, delivering demonstrable System z TCO optimization benefits, while safeguarding a short-term ROI, with simple deployment and resource utilization attributes.

System z Meets Open Source Linux

Recently IBM launched their LinuxONE offering, packaged in the most powerful and secure enterprise server, namely System z, designed for the new application economy and hybrid cloud era. Although IBM has provided Linux support for the Mainframe server since 2000, this LinuxONE packaging promises a unified portfolio of hardware, software and services solutions for mission-critical Linux applications.

To supplement the existing SUSE and Red Hat support, Ubuntu is included, along with Open Source enablement, including Apache Spark, Chef, Docker, MariaDB, MongoDB, Node.js and PostgreSQL, endeavouring to provide clients with choice and flexibility for hybrid cloud deployments.

From a big picture viewpoint, LinuxONE can be summarised as:

  • Linux Your Way: Choose the Linux environment and tools for your organization
  • Linux Without Limits: Benefit from Enterprise Class Linux support
  • Linux Without Risk: Safeguard business applications with the secure and resilient System z Server

The LinuxONE systems are named Emperor and Rockhopper, loosely classified as High-End and Entry-Level System z servers respectively. LinuxONE Emperor delivers ultimate flexibility, scalability, performance and trusted security for mission-critical applications. Scalability is as per the latest z13 server, allowing growth to handle the most demanding workloads. LinuxONE Rockhopper delivers the entry point into the LinuxONE family, offering all the same great capabilities and value, with the flexibility of a smaller package.

LinuxONE includes a choice of hypervisors and management tools, namely KVM for LinuxONE and/or IBM z/VM. This virtualization capability claims support for up to 8000 virtual servers (several thousand containers) in a single System z server footprint, allowing for parallel processing of Test, Development and Production environments. Additionally, new servers and containers can be initialized and running in minutes, with automated resource provisioning and reallocation in seconds.

From a performance viewpoint, System z metrics apply: fast CPU processors, significant I/O capability and 10 TB Memory, all delivering consistent and predictable sub-second response times for thousands of users. IBM reports a capability of 30 Billion RESTful web transactions per day, with ~500,000 database read/write operations per second.

The LinuxONE offering is also a key component of the IBM Cloud, Analytics, Mobile & Security (CAMS) framework:

  • Cloud: An agile and trusted cloud infrastructure to meet new business demands with greater efficiency and lower costs for IT service delivery. Example usage includes Database, Enterprise Systems of Record and Hybrid cloud platforms.
  • Analytics: Flexible, resilient, high performance business and operational analytics for Business Intelligence, Big Data Insights and Operational Analytics for intelligent and continuous business availability.
  • Mobile: Build a premier mobile solution for your business to deliver the best possible experience for your clients, employees and partners alike. Facilitate agile development and deployment of mobile applications, with secure end-to-end mobile transactions, personalized via integrated data analytics.
  • Security: System z has been associated with the highest EAL5+ Common Criteria certification for many years, safeguarding mission-critical data from cradle-to-grave. Security functions such as full data encryption, cryptographic processors and end-to-end security, combined with the unmatched reliability and availability of the System z server, safeguard that mission-critical data and services are fully protected and available.

Finally, and a key point, LinuxONE promises TCO optimization with pricing your way. A straightforward menu of pricing options includes:

  • A fixed monthly cost usage model for hardware and software resources
  • A per core software pricing model, with 30 days notice for cancellation or resource change
  • A 36 month rental option, with buy/replace/return options at contract end

In theory, LinuxONE could be perceived as just a tweak of existing System z Linux options, including the most recent z13 server, Ubuntu and Open Source support. What has changed are user requirements, the requirement for flexible and agile computing, where Cloud, Analytics, Mobile and Security dominate many CIO agendas.

It is my hope that each and every CIO, System z literate or not, at least considers the LinuxONE platform for their mission-critical enterprise workload, as from a simplistic viewpoint, LinuxONE is just another ubiquitous black server box; or is it…

How Can We Energize Our Emerging zCommunity?

No doubt we have all experienced that most things in life and business are cyclical; hence the terms déjà vu, “those who cannot remember the past are condemned to repeat it”, et al…

For System z, with the glass half-full, there are encouraging signs of pragmatic and collaborative executive leadership from the supplier ecosystem; for example, BMC, Compuware and IBM collaborating on a Standard Software Product Install Methodology For All Vendors. With the glass half-empty, even though there are proven statistics to demonstrate the penetration of System z in global large organizations, there are still some misplaced legacy perceptions associated with System z among significant executive leaders.

Just as the IBM Mainframe automated business processes several decades ago, introducing IT into the business workplace forever more, we’re currently undergoing another IT revolution: quite simply, an exponential growth in data, typically associated with Cloud, Analytics, Mobile & Social technologies. With this in mind, we should always be mindful that an IT solution should solve a business challenge and/or provide value for a business requirement. Therefore, the businesses themselves are best placed to articulate the framework and ultimate size and shape of solutions delivered by the vendor community.

The IBM Mainframe environment has always benefitted from User Groups that conceptually represent the customer, articulating requirements to IBM for future IBM Mainframe enhancement. For the avoidance of doubt, SHARE in the USA celebrated its 60th anniversary in 2015, while SHARE Europe, the forerunner to GSE, was founded in 1959. These groups are the ideal forums for collecting and articulating user requirements to IBM, for IBM Mainframe and current System z evolution. Without doubt, there has been a resurgence in support for SHARE USA and GSE events in the last decade or so, but from a dispassionate viewpoint, how many IBM Mainframe customers are members of these User Groups?

As previously referenced, the executive leadership of major System z Mainframe vendors are demonstrating a willingness to collaborate. Perhaps now is an ideal time for the System z Mainframe customer to articulate their requirements to the major System z Mainframe vendors?

My admiration for those volunteers that contribute their time, knowledge and passion to User Groups such as SHARE and GSE is without doubt. I’m also positive that these User Groups would welcome the opportunity to represent a larger number of System z end users, which would no doubt generate more end user presentations at conferences, supplemented by generic and business orientated user requirements for System z ecosystem vendors to consider. This can only happen if the end users of the IBM System z Mainframe platform embrace this opportunity to shape the future of the System z Mainframe, as it rapidly evolves, both in technological advancement and an emerging willingness for collaboration from vendors.

Having worked with IBM Mainframes for over 30 years, I’m no longer surprised by the quality and professionalism of personnel I encounter at user sites. A granularity of knowledge sometimes applies, with all-rounders demonstrating savvy technical and commercial knowledge at small capacity installations, and Subject Matter Experts (SME) at larger capacity installations demonstrating level 3 diagnostic capability. In an ideal world, the executive leadership at these System z Mainframe user sites should also participate in a forum of like-minded peers, allowing them to embrace and value the System z platform. There are certainly such Senior Management streams at SHARE and GSE events, but once again, if the System z end user isn’t a User Group member and/or doesn’t attend these events…

In our real life domestic environments, we can lobby our local government official (Member Of Congress/Parliament, MC/MP, et al), allowing for generic or specific representation for all people alike. In theory, in an evolving IT world, there is no reason why a System z Mainframe user can’t lobby a vendor for a user requirement. As always, no one of us is as good as all of us! Therefore, just as System z Mainframe vendors are collaborating, as and when practicable, now is the time for the System z Mainframe end users to collaborate, no matter how large or small, for the benefit of all. Given that the forums for collaboration already exist, for example SHARE USA and GSE, System z end users can easily leverage these User Groups to generate a coherent and notable voice.

Wouldn’t it be fantastic if 80%+ of System z Mainframe end users were User Group (E.g. SHARE, GSE) members, and several of their technicians and one senior manager attended their local annual conference? The cost: minimal; the value: arguably priceless!

From my own viewpoint, I have recent real-life experience of engaging a major System z vendor, with a commercial user requirement collected from tens of smaller capacity Mainframe users, where said submission is being considered. This is perhaps a brave new world…

DevOps: What Does It Mean For System z?

A recent buzzword in the IT industry is DevOps, being a term for eradicating any gap between the IT disciplines and/or processes of Development and Operations. In simplistic terms, Development is the full application code lifecycle, while Operations is the management and ultimate delivery of IT business services, typically Production orientated. However, what does this mean for the System z environment?

From a big picture viewpoint, the typical mission critical business application comprises many layers, including System z and other Distributed Systems platforms. Even though there are many solutions and “dashboard” type approaches for Operations to manage the IT service, there will always be differences when managing IT platforms, whether System z, Wintel, UNIX, Linux, et al.

Additionally, there may be some interpretation as to what DevOps is and should be from an ISV viewpoint. If you’re an ISV with a rich history in performance management, your viewpoint of DevOps will be identifying and resolving performance problems, because you believe a performance problem will manifest itself in a Production Operations environment, but is ideally fixed in the Applications environment. Conversely, if you’re an ISV with a software portfolio incorporating many Application Development solutions, your viewpoint will be streamlining the Applications Development lifecycle for all platforms, expediting the delivery of Production changes, simplifying the burden on associated Operations Change and Problem Management processes.

Clearly the System z environment has matured over many years and application code portfolios have been managed by SCM tools such as CA Endevor SCM, Serena ChangeMan, ISPW, et al. Even the acronym SCM has various interpretations, whether Source Code Management, Software Configuration Management or some other term.

Recently, agile workstation solutions that simplify the application development process have evolved, for example IBM RDz (Rational Developer for z Systems) and Compuware Workbench, typically incorporating Eclipse function and allowing for a common framework of multiplatform application code development.

By definition, System z means zero downtime and as such, due diligence, continuity and no/minimal impact regression have been built into each and every change process for many years. Therefore, from a Systems Programming viewpoint, any heterogeneous DevOps technical frameworks that might emerge will have little relevance to existing System z processes. However, these System z oriented change processes could and no doubt should be recognized by the DevOps framework, extending the System z approach to all platforms.

Whatever your viewpoint and whatever System z tooling your organization deploys for end-to-end Application Lifecycle Management, including Development and Operations, you should not lose sight of the fact that an objective of DevOps is to bring together the various IT departments that are impacted by Production Service changes. Therefore, if only from a simple communication and collaboration viewpoint, even the most seasoned and perhaps entrenched System z professional should embrace DevOps.

In conclusion, DevOps is an evolving framework that will facilitate quality controlled continuous application delivery for multiple platform business solutions, typically including the System z platform. By definition, DevOps encompasses many IT processes, Development and Operations as a minimum, where each and every organization probably has their own interpretation of where interdependent Systems Management functions interact; for example, Performance Management, Change Management, Problem Management and even Capacity Planning. The savvy organization will embrace DevOps as a framework, review their existing software function tooling and in all likelihood, deploy a best-of-breed approach when facilitating continuous application delivery for heterogeneous platforms. It is unlikely that one ISV will provide a fully inclusive best-of-breed software portfolio for DevOps, hence the universal, open and platform independent approach of Eclipse.

IBM System z PartnerWorld Solution Development Evolution

Currently there are in excess of 2,300 companies delivering solutions for IBM System z listed in the IBM Global Solutions Directory. Considering the number of global System z customers, currently estimated as ~4,000, this is quite a good ratio! It’s also evidence of the significant ability of this System z ecosystem to deliver innovation and support to said customer base. Maybe we should consider how these System z solution delivery businesses develop and maintain their software, hardware and service offerings…

Obviously to develop, support and enhance an IBM Mainframe software or hardware product, access to an IBM Mainframe is a mandatory requirement. In the 1980’s, procuring an IBM Mainframe was an expensive undertaking, hence the number of IBM Mainframe IHV (Hardware) or ISV (Software) partners was limited. Therefore we should not overlook the evolution that has taken place in the last 25 years or so, delivering the significant, diverse, innovative and global System z ecosystem in place today.

In the early 1990’s the IBM Advanced Workstations Systems Division (AWS) worked on complete compatibility with existing IBM Mainframe operating systems and software, delivering this function in the S/390 Processor Card. Later iterations of this S/390 Processor Card offered plug compatibility with RISC and PC server architectures, packaged as R/390 and P/390 servers respectively. In essence these R/390 and P/390 server solutions delivered “A Mainframe In A Box”. Put another way, the entire IBM Mainframe infrastructure, including CPU, Memory, I/O Subsystem, Consoles, Disk, Tape, Networking Interfaces, et al, was all contained within the one PC or RISC based server footprint. Some of the software modules we might be familiar with for delivering this functionality are AWSDISK, AWSPRINT and AWSTAPE, where the respective function is denoted by the module name.

Therefore with the R/390 and P/390, subsequently followed by the S/390 Integrated Server and then the MP3000, low cost access to IBM Mainframe servers was possible. However, let’s not forget that in conjunction with hardware compatibility, low cost access to existing IBM Mainframe operating systems and software was also required. This software access was delivered by the Application Developers Controlled Distributions (ADCD), incorporating a package of the majority of IBM Operating System and supporting subsystem program products. Therefore, once a business proved its intention to develop a software or hardware solution for the IBM Mainframe, it gained very low cost access to said IBM Mainframe software. Without doubt, the innovation of the S/390 Processor Card and Application Developers Controlled Distributions (ADCD) resources allows the System z community to benefit from the related ecosystem in place today.

This IBM Mainframe emulation capability provided the opportunity for other 3rd party suppliers to deliver x86 servers that supported the IBM PartnerWorld for Developers (PWD) ADCD initiative, for example, FLEX-ES from Fundamental Software.

Currently, IBM deliver the System z Personal Development Tool (zPDT) for ADCD access, while many ISV’s and IHV’s now actually deploy an official IBM System z server, for example a zBC12, as the cost of Mainframe servers has reduced substantially in the last decade or so. Optionally, recognizing the virtualization capabilities of System z and higher speed network access, System z development can now be achieved remotely. The System z Remote Development Program (zRDP) for z/OS, z/VM and z/VSE provides qualified partners with remote access to generally available and supported operating systems and software products. Additionally, IBM has built a number of Innovation Centres globally (I.E. Africa & Middle East, Asia Pacific, Europe, Latin America, North America), facilitating the possibility for System z innovation with local resources.

An example of the diversity and innovation of the System z ecosystem is the SVA zHosting concept, allowing an IBM PartnerWorld for Developers (PWD) member and/or Independent Software Vendor (ISV) the ability to port existing or install new development environments into a local fully certified IBM System z Mainframe data centre, in this case, located in Germany.

In conclusion, as other IT technologies have evolved, IBM have provided a cost-efficient environment, encouraging and maintaining the IBM System z Mainframe ecosystem; firstly in the 1990’s with full emulation for RISC and PC based servers, and latterly in the 21st Century with remote access. This low cost access to full System z capability safeguards that the System z ecosystem remains significant, current and diverse, while the realm of possibility for innovation exists.

Java: Is System z A Viable Server Platform?

As long ago as 1997, IBM integrated Java into their IBM Mainframe platform, in those days via the then flagship OS/390 Operating System. As with any new technology, perhaps the initial OS/390 Java integration offerings were not perfect, but nearly 20 years later, a lot has changed…

In 2000, IBM Java SDK 1.3.1 delivered z/OS and Linux on z support, quickly followed by 64-bit z/OS support in 2003 via SDK 1.4. In 2004 Java Virtual Machine (JVM) and JIT (Just-In-Time) compiler technology support was provided, while Java code has always exploited IBM specialty engines, primarily zAAP initially and now via zIIP and the zAAP on zIIP capability. Put simply, IBM continues to invest aggressively in Java for System z, demonstrating a history of innovation and performance improvements, up to and including the latest z13 server.

So why should a 21st century business consider the System z platform for Java workloads?

Arguably the primary reason is a rapidly emerging requirement for the true 24*7*365 workload, which cannot accommodate a batch window, where Java is ideally placed to serve both batch and OLTP workloads. Put another way, the need to process batch work has not gone away, whereas a requirement to process batch work concurrently with OLTP services has emerged. Of course, traditionally the typical System z enterprise might have two sets of IT staff for OLTP and batch workloads, typically in the IT Support and Application Management teams, whereas via Java and a workload centric approach, separate batch and OLTP support personnel are not necessarily required.
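
To make this workload centric point concrete, the hypothetical sketch below invokes one Java business routine, unchanged, from both a batch style loop and an OLTP style thread pool; the class and figures are illustrative only:

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.TimeUnit;

  /** Hypothetical sketch: one Java routine serving both batch and OLTP-style callers. */
  public class SharedWorkloadSketch {

      /** Business logic written once, with no batch-versus-OLTP awareness. */
      static double priceOrder(int quantity, double unitPrice) {
          double discount = quantity >= 100 ? 0.05 : 0.0;  // simple volume discount
          return quantity * unitPrice * (1.0 - discount);
      }

      public static void main(String[] args) throws InterruptedException {
          // Batch style: iterate a (hypothetical) input file's worth of records.
          double batchTotal = 0.0;
          for (int record = 0; record < 1_000; record++) {
              batchTotal += priceOrder(150, 9.99);
          }
          System.out.printf("Batch total: %.2f%n", batchTotal);

          // OLTP style: the same routine invoked per request on a thread pool.
          ExecutorService oltp = Executors.newFixedThreadPool(4);
          for (int request = 0; request < 8; request++) {
              final int qty = 10 + request;
              oltp.submit(() -> System.out.printf("OLTP order: %.2f%n", priceOrder(qty, 9.99)));
          }
          oltp.shutdown();
          oltp.awaitTermination(5, TimeUnit.SECONDS);
      }
  }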

For the System z platform, Java support has always been incorporated into the core architectural building blocks, namely z/OS, CICS, DB2, IMS, WebSphere, Batch Runtime, et al. Therefore there are no functional reasons why new applications or indeed existing applications cannot be engineered using the pervasive Java programming language and deployed on the System z platform.

Quite simply, Java is a critically important language for IBM System z. Java has become foundational for data serving and transaction serving, the traditional strengths of IBM System z. WebSphere applications written in Java and processing via System z, benefit from a key advantage through co-location. This delivers better response times, greater throughput and reduced system complexity when driving CICS, DB2 and IMS transactions.

Java is also critical for enabling next generation workloads in the IBM defined Cloud, Analytics, Mobile & Security (CAMS) framework. Cloud and mobile applications can access z/OS data and transactions via z/OS Connect and other WebSphere solutions, all inherently Java based. Java on System z also provides a full set of cryptographic functions to implement secure solutions. A key strength of Java applications is the ability to immediately benefit from the latest hardware performance improvements using the Just-In-Time (JIT) compiler incorporated in the latest IBM Java SDK releases.
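
The cryptographic point is worth illustrating: the minimal sketch below uses only the standard, platform-neutral javax.crypto (JCE) API, and on System z the IBM SDK providers can exploit CPACF hardware for such operations without any source code change; the sample data is hypothetical:

  import java.nio.charset.StandardCharsets;
  import javax.crypto.Cipher;
  import javax.crypto.KeyGenerator;
  import javax.crypto.SecretKey;
  import javax.crypto.spec.GCMParameterSpec;

  /** Minimal JCE sketch: standard API, hardware-acceleratable on System z via CPACF. */
  public class JceSketch {
      public static void main(String[] args) throws Exception {
          KeyGenerator keyGen = KeyGenerator.getInstance("AES");
          keyGen.init(256);
          SecretKey key = keyGen.generateKey();

          // Encrypt with AES-GCM; the provider generates a random IV.
          Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
          cipher.init(Cipher.ENCRYPT_MODE, key);
          byte[] iv = cipher.getIV();
          byte[] ciphertext =
              cipher.doFinal("mission-critical data".getBytes(StandardCharsets.UTF_8));

          // Decrypt with the same key and IV to verify round-trip correctness.
          Cipher decrypt = Cipher.getInstance("AES/GCM/NoPadding");
          decrypt.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
          System.out.println(new String(decrypt.doFinal(ciphertext), StandardCharsets.UTF_8));
      }
  }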

Let’s not forget, there are many other good reasons why Java might be considered as a viable application programming language:

  • Personnel Skills Availability: Java is typically ranked in the top 3 of most widely used programming languages; therefore personnel availability is abundant and cost efficient.
  • Application Code Portability: Recognizing Java bytecode and associated JVM functionality, no matter what the platform (E.g. Wintel, X86 Linux, UNIX, z/OS, Linux on System z, et al), the Java application code should run unchanged.
  • Application Tooling Support: Application Development tools have evolved to the point of true platform independence, where Application Programmers just create their code; they don’t necessarily know, or sometimes care, where that code will execute. Let’s not forget the simplification of Java code for OLTP and batch workloads, reducing associated IT lifecycle support costs.
  • TCO Efficiencies: Simplified Application Development and deployment reduces associated cost, while reducing implementation time for mission-critical workloads. Java exploitation of the zAAP (zAAP on zIIP) safeguards low software costs and optimized processing times (I.E. Sub-Capacity specialty engines run at full speed).

With the announcement of the zEC12 server, notable Java enhancements included:

  • Hardware Transaction Memory (HTM) – Better concurrency for multi-threaded applications
  • Run-Time Instrumentation (RI) – A new hardware facility designed for managed runtimes
  • 2 GB Page Frames – Improved performance targeting 64-bit heaps
  • Pageable 1 MB Large Pages (Flash Express) – Better versatility of managing memory
  • New Software Hints/Directives – Data usage intent improves cache management; Branch pre-load improves branch prediction
  • New Trap Instructions – Reduce implicit bounds/null checks overhead

In summary, System z users can expect up to 60% throughput performance improvement for Java workloads measured with zEC12 and the IBM Java 7 SR3 SDK.

IBM z13 and the IBM Java 8 SDK deliver improved Java performance, including Single Instruction Multiple Data (SIMD) vector engine, Simultaneous Multi-Threading (SMT) and improved CP Assist for Cryptographic Function (CPACF) exploitation, delivering up to 2X improvement in throughput-per-core for security-enabled applications and up to 50% improvement for other generic applications.

Other z13 Java functional and performance improvements include:

  • Secure Application Serving – Application serving with Secure Sockets Layer (SSL) will exploit the new Java 8 Clear Key CPACF and SIMD vector instructions for string manipulation; an additional 75% performance improvement for Java 8 on z13 with SMT versus Java 8 on zEC12.
  • Business Rules Processing – Business rules processing with Java 8 takes advantage of the SIMD vector instructions and SMT for zIIP specialty engines on z13 to achieve significant improvements in throughput-per-core. An additional 37% performance improvement from z13 SMT zIIPs with Java 8 versus Java 8 on zEC12.
  • Specific z/OS Java 8 Exploitation of z13 SIMD – Java 8 exploits the new z13 SIMD vector hardware instructions for Java libraries and functions, where specific idioms/operations were improved by between 2X and 60X. Performance benefits for real life Java applications will depend on how frequently these idioms/operations are used.

In conclusion, the IBM commitment to Java on System z is clearly evident and the cost, performance and security proposition becomes compelling on the latest zEC12 and z13 Mainframe servers. The pervasive deployment of Java as a universal IT programming language dictates that programmer availability will never be an issue, and platform independence dictates that Java applications can be created and processed on any platform. Let’s not forget the strong single thread performance and I/O scalability of System z, a significant differentiator when comparing Java performance on any IT platform.

Moreover, as always, perhaps the business dictates which platform is most suitable for business applications. The evolution to a combined OLTP and batch workload for the 21st Century 24*7*365 mission critical business application ideally places Java as an eminently viable programming language. Therefore there is no requirement to reengineer any existing System z application, or to find an alternative platform for new business functions. As always, the System z Mainframe platform should never be overlooked…