Enabling IBM Z Security For The Cloud: Meltdown & Spectre Observations

The New Year period of 2018 delivered unpleasant news for the majority of IT users deploying Intel chips for their Mission Critical workloads.  Intel chips manufactured since 1995 have been identified as having a security flaw or bug.  This kernel-level bug leaks memory, allowing hackers to read sensitive data, including passwords, login keys, et al, from the chip itself.  It therefore follows that this vulnerability allows malware inserts.  Let’s not overlook that x86 chips don’t just reside in PCs; their use is ubiquitous, encompassing servers, the cloud and even mobile devices, and the bug impacts all associated operating systems, Windows, Linux, macOS, et al.  Obviously, kernel access simply bypasses everything security related…

From a classification viewpoint, Meltdown is a hardware vulnerability affecting a plethora of Intel x86 microprocessors, ten or so IBM POWER processors, and some ARM and Apple microprocessors, allowing a rogue process to read all memory, even when not authorized.  Spectre breaks the isolation between different applications, allowing attackers to trick error-free programs, which actually follow best practices, into leaking sensitive data; it is more pervasive, encompassing nearly all chip manufacturers.

There have been a number of software patches issued, the first in early January 2018, which inevitably caused other usability issues, although patch reliability has become more stable during the last three-month period.  Intel now claim to have redesigned their upcoming 8th Generation Xeon and Core processors to further reduce the risks of attack via the Spectre and Meltdown vulnerabilities.  Of course, these patches, whether at the software or firmware level, impact chip performance; as always, the figures vary greatly, but anything from 10-25% seems in the ball-park, with obvious consequences!

From a big picture viewpoint, if a technology is pervasive, it’s a prime target for the hacker community.  Windows has been the traditional easy target, but an even better target is the CPU chip itself, encompassing all associated Operating Systems.  If you never had any security concerns from a public cloud viewpoint, arguably that was a questionable attitude, but now these rapidly growing public cloud providers really need to up their game from an infrastructure (IaaS) provision viewpoint.  Which chip technologies haven’t been impacted (to date) by these Meltdown and Spectre vulnerabilities; IBM Z, perhaps, or perhaps not?

On 20 March 2018 at Think 2018 IBM announced the first cloud services with Mainframe class data protection:

  • IBM Cloud Hyper Protect Crypto Services: deliver FIPS 140-2 Level 4 security, the highest security level attainable for cryptographic hardware. This level of security is required by the most demanding of industries, for example Financial Services, for data protection.  Physical security mechanisms provide a complete envelope of protection around the cryptographic module, with the intent of detecting and responding to all unauthorized attempts at physical access.  Hyper Protect Crypto Services deliver these highest levels of data protection from IBM Z to the IBM Cloud.  Hyper Protect Crypto Services secures your data in a Secure Service Container (SSC), providing the enterprise level of security and impregnability that enterprise customers have come to expect from IBM Z technology.  Hardware virtualisation protects data in an isolated environment.  The SSC prevents external data access, including by privileged users, for example, cloud administrators.  Data is encrypted at rest, in process and in flight.  The available support for Hardware Security Modules (zHSM) allows digital keys to be protected in accordance with industry regulations.  The zHSM provides safe and secure PKCS#11 APIs, which make Hyper Protect Crypto Services accessible from popular programming languages (E.g. Java, JavaScript, Swift, et al), as per the sketch after this list.
  • IBM Cloud Hyper Protect Containers: enable enterprises to deploy container-based applications and microservices, supported through the IBM Cloud Container service, managing sensitive data within a security-rich Secure Service Container environment via the IBM LinuxONE platform. This environment is built with IBM LinuxONE systems, designed for EAL5+ isolation, and Secure Service Containers technology, designed to prevent privileged access from malicious users and Cloud Admins.
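
As a concrete illustration of the PKCS#11 accessibility claim, here is a minimal Java sketch that opens an HSM-backed token via the standard SunPKCS11 provider and lists its key aliases.  The configuration file path, PIN and token contents are illustrative assumptions, not details of the Hyper Protect Crypto Services offering itself:

```java
import java.security.KeyStore;
import java.security.Provider;
import java.security.Security;
import java.util.Collections;

public class Pkcs11Sketch {
    public static void main(String[] args) throws Exception {
        // Load the PKCS#11 provider; the config file (hypothetical path) names
        // the PKCS#11 library and slot supplied by the HSM vendor.
        Provider hsm = Security.getProvider("SunPKCS11").configure("/etc/hsm/pkcs11.cfg");
        Security.addProvider(hsm);

        // Open the token as a keystore; private keys never leave the HSM boundary.
        KeyStore ks = KeyStore.getInstance("PKCS11", hsm);
        ks.load(null, "user-pin".toCharArray());

        // Enumerate the key aliases held on the token.
        Collections.list(ks.aliases()).forEach(System.out::println);
    }
}
```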

From an IBM and indeed industry viewpoint, security concerns should not be a barrier for enterprises looking to leverage cloud native architecture to transform their business and drive new revenue from data, using higher-value services including Artificial Intelligence (AI), Internet of Things (IoT) and blockchain.  Hyper Protect Crypto Services is the cryptography function used by the IBM Blockchain Platform.  The Hyper Protect Crypto Services – Lite Plan offers free experimental usage of up to 10 crypto slots; an instance is only deleted after 30 days of inactivity.

In a rapidly changing landscape, where AI, Blockchain and IoT are driving rapid cloud adoption, the ever-increasing cybersecurity threat is a clear and present danger.  The manifestation of security vulnerabilities in the processor chip, whether Apple, AMD, Arm, IBM, Intel, Qualcomm, et al, has been yet another wake-up call and a call to action for all.  Even from an IBM Z ecosystem viewpoint, there were Meltdown and Spectre patches required, and one must draw one’s own conclusions as to the pervasive nature of these exposures.

By enabling FIPS 140-2 Level 4 security via Cloud Hyper Protect Crypto Services and EAL5+ isolation via Cloud Hyper Protect Containers on IBM LinuxONE, if only on the IBM Cloud platform, IBM are offering the highest levels of security accreditation to the wider IT community.  Noting that it was the Google Project Zero team that identified the Meltdown and Spectre vulnerability threats, hopefully Google might consider integrating these IBM Z Enterprise Class security features in their Public Cloud offering?  It therefore follows that all major Public Cloud providers, including Amazon, Microsoft, Alibaba, Rackspace, et al, might follow suit?

In conclusion, perhaps the greatest lesson learned from the Meltdown and Spectre issue is that all major CPU chips were impacted, and in a rapidly moving landscape of ever-increasing public cloud adoption, the need for Enterprise Class security has never been more evident.  A dispassionate viewpoint might agree that IBM Z delivers Enterprise Class security; for the benefit of all evolving businesses, considering wider and arguably ground-breaking collaboration with technologies such as blockchain, wouldn’t it be beneficial if the generic Public Cloud offerings incorporated IBM Z security technology…

System z DevOps & Application Lifecycle Management (ALM) Integration: Evolution or Revolution?

From an IT viewpoint, the 2010s will seemingly be dominated by the digital data explosion, primarily fuelled by Cloud, Mobile and Social Media data sources, while intelligent and timely, if not real-time, Analytics are required to process this vast and ever-growing data source.  Who could have imagined just a decade ago that the Mobile Phone, specifically the Smartphone, would be the de facto computing device, although some might say, only for a certain age demographic?  I’m not so sure; I encounter real-life, day-to-day evidence that a Smartphone or tablet can also empower the older generation to simplify their computer usage and access.  From a business perspective, Smartphones have allowed geographically dispersed citizens to gain access to Banking facilities for the first time; Cloud allows countless opportunities for data sharing and number crunching for collaborative scientific, health, education and any other activities a human being might conceive.  The realm of opportunity exists…

When thinking of the bigger picture, we somehow have to find a workable and seamless balance that will integrate the dawn of business computing from the 1960s with these rapidly moving 21st Century requirements.  When considering which came first, the data or the application, I always think the answer is really simple; the data came first, but I have been wrong before!  What is without doubt is that the initial requirement for a business application was to automate data processing, and the associated medium-term waterfall (E.g. n-nn Months) application development process is now outdated.  As of 2017, today’s application needs to leverage this vast and rich digital data source, to identify and exploit new business opportunities; increasingly unplanned, and therefore rapid, application delivery is required.  For example, I previously wrote about this subject matter in the zAPI: System z Deployment Into The API Economy blog entry.

From an IT perspective, one of the greatest achievements of the 21st Century is collaboration, whether technology based, leveraging a truly interconnected (E.g. Internet Protocol/IP) heterogeneous computing environment, or personnel based, with IT teams collaborating in a more open and timely manner, primarily via DevOps.  This might be a better chicken and egg analogy; which came first, the data explosion or an IT ecosystem that allowed such a digital data explosion?

There are a plethora of modern-day application development tools that separate the underlying target deployment server from the actual application developer.  Put another way, today’s application developer ideally works from a GUI display via an Eclipse-based Integrated Development Environment (IDE) interface, creating code using rapid and agile development techniques.  From an IBM System z perspective, these platforms include Compuware Topaz Workbench, IBM Developer for z Systems (IDz AKA RDz) and Micro Focus Enterprise Developer, naming but a few.  Therefore, when considering the DevOps framework, these excellent Eclipse-based IDE products provide solutions for the Dev part of the equation; but what about the Ops part?

In a collaborative world, where we all work together, from an Application Lifecycle Management (ALM) perspective, IT Operations are a key part of application delivery and management.  Put simply, once code has been created, it needs to be packaged (E.g. Compile, Link-Edit, et al), tested (E.g. Unit, Integration, System, Acceptance, Regression, et al) and implemented in a Production environment.  We must now consider the very important discipline of Source Code Management (SCM), where from a System z Mainframe perspective, common solutions are CA Endevor SCM, Compuware ISPW, IBM SCLM, Micro Focus ChangeMan ZMF, et al.  Once again, from a DevOps perspective, we somehow have to find a workable and seamless balance that will integrate the dawn of business computing from the 1960s with these rapidly moving 21st Century requirements.  As previously discussed, the Dev part of the DevOps framework is well-covered and straightforward, but perhaps the Ops part requires some more considered thought…

Recently, Compuware acquired ISPW (January 2016) to supplement their Topaz Workbench, and Micro Focus acquired ChangeMan ZMF (May 2016) to complement their Micro Focus Enterprise Developer solution.  IBM IDz offers out-of-the-box integration for the IBM Rational Team Concert, CA Endevor SCM and IBM SCLM tools.  Clearly there is a significant difference between Source Code Management (SCM) for Distributed Systems when compared with the System z Mainframe, but today’s 21st Century business application will inevitably involve interconnected platforms, and so a consistent and seamless SCM process is required for accurate and timely application delivery.  In all likelihood, a System z Mainframe user has been using their SCM solution for several decades, evolving processes around this solution, which was never designed for Distributed Systems SCM.  Hence the major System z Application Development ISVs have acquired SCM products to supplement their core capability, but is it really that simple?  The simple answer is no!

Traditionally, for Application Development activities we deployed the Software Development Life Cycle (SDLC), limited to software development phases, including requirements, design, coding, testing, configuration, project management and change management.  Modern software development processes require real-time collaboration, access to a centralized data repository, cross-tool and cross-project visibility, and proactive project monitoring and reporting, to rapidly develop and deliver quality software.  This requirement is typically classified as Application Lifecycle Management (ALM).

The first iteration of ALM, namely ALM 1.0, was wholly unsuccessful.  Application Development teams were encouraged to consider the value of point solutions for task management, planning, testing, requirements, release management, and other functions.  Therefore, ALM 1.0 became just a set of tools, where the all too common question from the Application Development team was “what other tool can we use”!

ALM 2.0 or ALM 2.0+ can be considered as Integrated Application Lifecycle Management or Integrated ALM, where all the tools and their users are synchronized with each other throughout the application development stages.  This integration ensures that every team member knows the Who, What, When, and Why of any changes made during the development process, eradicating arduous, repetitive, manual and error-prone activities.  The most important lesson for the DevOps team in a customer environment is to concentrate on the human perspective.  They should ask “how do we want our teams to work together and collaborate”, as opposed to asking an Application Development ISV team, “what ALM tools do you have”.  Inevitably the focus will be ISV based, as opposed to customer based.  As the recent Compuware and Micro Focus SCM acquisition history demonstrates, these tools, by definition, were never fully integrated from their original inception…

If the customer DevOps teams collaborate and formulate how they want to work together, an ALM evolution can take place in a timely manner, maintaining investment in previous technologies, as and if required.  Conversely, a revolutionary approach is the most likely outcome for the System z Mainframe user, if looking to the ISV for a “turn-key” ALM solution.  By definition, an end-to-end and turn-key ALM solution from one ISV is not possible and, in fact, not desirable!  Put another way, as a System z user, do you really want to write off several decades’ investment in an SCM solution for another competitive solution, which will still require many other tools to provide the Integrated ALM capability you require?  As always, balance and compromise are the way forward…

If the ubiquitous System z Application Development ISV were to develop their first software product today, it would inevitably be a DevOps and ALM 2.0+ compatible product, allowing for full integration with all other Application Development tools, whether System z Mainframe or Distributed Systems orientated.  Of course, that is not the reality.  It seems somewhat disingenuous for a System z Application Development ISV to ask a potential customer to write off their several decades’ investment in an SCM technology, when said ISV has just acquired such a technology!  Once again, this is why the customer-based Application Development teams must decide how they want to collaborate and what ALM and DevOps tools they want to use.

Seemingly a pragmatic solution is required, hence the ALM 2.0+ initiative.  If an ISV could develop an all-encompassing DevOps and ALM 2.0+ end-to-end Application Development solution for all IT platforms, they would probably become one of the most popular and biggest ISVs in a short time period.  However, this still overlooks the existing tools that customer IT organizations have used for many years.  Hence, the pragmatic way forward is to build an open DevOps and ALM 2.0+ solution that will integrate with all other Application Development lifecycle tools, whether marketplace available, or not!  HPE Application Lifecycle Management (ALM) and Quality Centre (QC) is one such approach for Distributed Systems, but what about the System z Mainframe?

IKAN ALM is an ALM 2.0+ and DevOps architected solution that is vendor and platform agnostic.  Put another way, IKAN ALM can operate in single or multiple-vendor mode.  In reality, single-vendor mode is unlikely, as there are many efficient Application Development tools in the marketplace.  However, the single most compelling feature of IKAN ALM is its open framework and interoperability with other ALM technologies.  As previously stated, we might consider source code development as the Dev side of the DevOps framework.  IKAN ALM will interface with these technologies, while its core functionality concentrates on the Ops side of the DevOps framework.  Therefore, from an Application Lifecycle Management (ALM) viewpoint, the IKAN ALM solution starts where versioning systems end, with an objective of optimizing the entire software engineering process.

IKAN ALM offers a uniquely integrated web-based Application Lifecycle Management platform for both Agile and traditional software development teams.  It combines Continuous Integration and Lifecycle Management, offering a single point of control, delivering support for build and deploy processes, approval processes, release management and software lifecycles.  From a pragmatic and common-sense viewpoint, organizations typically want to continue working with their preferred tools in their preferred environment.  Being ALM 2.0+ compliant, IKAN ALM fully integrates with any versioning tool and any issue tracking tool, providing ALM reports across repositories.  Therefore IKAN ALM offers an evolutionary approach, allowing an organization to gain timely ALM benefits, without risk and without the need to displace any existing technologies.  Over time, should the organization wish to displace older legacy ALM software products, they could do so, leveraging the stand-alone or multiple-vendor flexibility of the IKAN ALM solution.

IKAN ALM incorporates ready to use solutions and processes for multiple environments.  These solutions include ALM 2.0+ compliant processes and the necessary scripts to automate the integration with a specific environment, including but not limited to CA Endevor (SCM), CollabNet, HPE ALM/Quality Centre (QC), Oracle Warehouse Builder (OWB), SAP, et al.

The IKAN ALM central server is an open framework web application responsible for User Authentication and Authorization, User Interface Processing, Distributed Version Repository Management and Scheduling Code Builds.  The IKAN ALM agents perform the application build and install functions.

The data repository is an open central database where all administrative data and the audit trail history are stored.  IKAN ALM communicates with the repository using standard JDBC interfaces.  The required JDBC drivers are installed along with the product.  The repository can reside in any RDBMS system, including IBM DB2/UDB, Informix, Microsoft SQL Server, MySQL, Oracle, et al.
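
To illustrate what that JDBC-based openness means in practice, this minimal sketch connects to a hypothetical DB2 repository and runs a trivial query; the URL, credentials and query are illustrative assumptions, not IKAN ALM internals, and the appropriate JDBC driver must be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RepositoryPing {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; any JDBC-capable RDBMS
        // (DB2, Oracle, MySQL, SQL Server, et al) could sit behind this URL.
        String url = "jdbc:db2://db.example.com:50000/ALMREPO";
        try (Connection conn = DriverManager.getConnection(url, "almuser", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")) {
            while (rs.next()) {
                System.out.println("Repository reachable at: " + rs.getString(1));
            }
        }
    }
}
```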

Source code is always stored in a Version Control Repository.  IKAN ALM integrates with all the typical versioning systems including Apache Subversion, CVS, Git, Microsoft Visual SourceSafe (VSS), IBM Rational ClearCase (UCM & LT), Serena PVCS Version Manager, et al.  The choice of IDE often drives the choice of the Version Control System (VCS), where organizations can have more than one operational VCS.  IKAN ALM is a solution that provides a unique process control over all versioning systems present in the organization.  IKAN ALM stores each build result within its central server filesystem, labelling the source accordingly in the associated versioning system, guaranteeing a correct source-build relationship.
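
To make the source-build relationship idea tangible in generic terms, here is a minimal sketch that labels a build in a Git repository using the open-source JGit library; the repository path and tag name are hypothetical, and IKAN ALM’s own integrations naturally use each versioning system’s native labelling mechanism:

```java
import java.io.File;
import org.eclipse.jgit.api.Git;

public class TagBuild {
    public static void main(String[] args) throws Exception {
        // Open a local working copy; the path is illustrative only.
        try (Git git = Git.open(new File("/workspace/myapp"))) {
            // Label the exact source state that produced build 1234, so the
            // stored build result can always be traced back to its sources.
            git.tag()
               .setName("ALM_BUILD_1234")
               .setMessage("Sources for build 1234")
               .call();
        }
    }
}
```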

IKAN ALM safeguards Authentication & Authorization by interacting with the organization’s security deployment (E.g. Active Directory, LDAP, Kerberos, et al) via the Java Authentication and Authorization Service (JAAS) interface, as per the sketch below.
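
Because JAAS is a standard Java interface, a minimal login sketch looks as follows; the configuration entry name and credentials are hypothetical, and whichever module sits behind the entry (LDAP, Kerberos, Active Directory, et al) is determined by the JAAS configuration file:

```java
import javax.security.auth.callback.*;
import javax.security.auth.login.LoginContext;

public class JaasLoginSketch {
    public static void main(String[] args) throws Exception {
        // "AlmLogin" is a hypothetical entry name in the JAAS configuration.
        LoginContext lc = new LoginContext("AlmLogin", callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) ((NameCallback) cb).setName("almuser");
                if (cb instanceof PasswordCallback) ((PasswordCallback) cb).setPassword("secret".toCharArray());
            }
        });
        lc.login();  // throws LoginException if the underlying module rejects the credentials
        System.out.println("Authenticated subject: " + lc.getSubject());
    }
}
```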

IKAN ALM audits any changes (E.g. Who, What, Why, When, Approver, et al), orchestrating the various components and phases of Application Lifecycle Management, delivering an automated workflow to drive a continuous flow of activity throughout the development lifecycle, efficiently coordinating and optimizing application development changes.

In an environment with ever increasing mandatory regulatory compliance requirements, IKAN ALM simplifies the processes for delivering such compliance.  As per the IKAN ALM Build, Deploy, Lifecycle and Approval Management framework, compliance for industry standard regulations (E.g. CMM, ITIL, Sarbanes-Oxley, Six Sigma, et al) is delivered via a reliable, repeatable and auditable process throughout the development life cycle.

Clearly any IT organization can benefit from a fully integrated ALM 2.0+ solution, by enforcing and safeguarding that the ALM process is repeatable, reliable and documented.  Regardless of the development team headcount, ALM releases key people from repetitive and less interesting tasks, allowing them to focus on delivering today’s Analytics based, Cloud, Mobile and Social applications.  A fully integrated ALM 2.0+ solution such as IKAN ALM allows for simplified legacy environment modernization, while simultaneously allowing for experimentation and improvement of all environments alike, both legacy and new.

In conclusion, savvy organizations will safeguard that their Application Development and Operations teams collaborate as per the DevOps framework and decide how they want to implement processes for their environment and, more importantly, their business.  This focus should avoid any notion of asking the ubiquitous Application Development ISV, “which tools should we use”!  Similarly, recognizing the integration foundation of ALM 2.0+ will eliminate any notion of displacing existing technologies and processes, unless absolutely required.  The need for agile, rapid and quality source code development and delivery is obvious, as is the related solution, which is inevitably pragmatic, evolutionary and multiple-vendor tool based.

System z Meets Open Source Linux

Recently IBM launched their LinuxONE offering, packaged in the most powerful and secure enterprise server, namely System z, designed for the new application economy and hybrid cloud era. Although IBM has provided Linux support for the Mainframe server since 2000, this LinuxONE packaging promises a unified portfolio of hardware, software and services solutions for mission-critical Linux applications.

To supplement the existing SUSE and Red Hat support, Ubuntu is included, along with Open Source enablement, including Apache Spark, Chef, Docker, MariaDB, MongoDB, Node.js and PostgreSQL, endeavouring to provide clients with choice and flexibility for hybrid cloud deployments.

From a big picture viewpoint, LinuxONE can be summarised as:

  • Linux Your Way: Choose the Linux environment and tools for your organization
  • Linux Without Limits: Benefit from Enterprise Class Linux support
  • Linux Without Risk: Safeguard business applications with the secure and resilient System z Server

The LinuxONE Systems are classified as Emperor and Rockhopper, loosely corresponding to High-End and Entry-Level System z servers. LinuxONE Emperor delivers ultimate flexibility, scalability, performance and trusted security for mission-critical applications. Scalability is as per the latest z13 server, allowing growth to handle the most demanding workloads. LinuxONE Rockhopper delivers the entry point into the LinuxONE family, offering all the same great capabilities and value, with the flexibility of a smaller package.

LinuxONE includes a choice of hypervisors and management tools, namely KVM for LinuxONE and/or IBM z/VM. This virtualization capability claims support for up to 8000 virtual servers (several thousand containers) in a single System z server footprint, allowing for parallel processing of Test, Development and Production environments. Additionally, new servers and containers can be initialized and running in minutes, with automated resource provisioning and reallocation in seconds.

From a performance viewpoint, System z metrics apply: fast CPU processors, significant I/O capability and up to 10 TB of Memory, all delivering consistent and predictable sub-second response times for thousands of users. A reported capability of 30 Billion RESTful web transactions per day, with ~500,000 database read/write operations per second.

The LinuxONE offering is also a key component of the IBM Cloud, Analytics, Mobile & Security (CAMS) framework:

  • Cloud: An agile and trusted cloud infrastructure to meet new business demands with greater efficiency and lower costs for IT service delivery. Example cloud usage includes Database, Enterprise Systems of Record and Hybrid cloud platforms.
  • Analytics: Flexible, resilient, high performance business and operational analytics for Business Intelligence, Big Data Insights and Operational Analytics for intelligent and continuous business availability.
  • Mobile: Build a premier mobile solution for your business to deliver the best possible experience for your clients, employees and partners alike. Facilitate agile development and deployment of mobile applications, with secure end-to-end mobile transactions, personalized via integrated data analytics.
  • Security: System z has been associated with the highest EAL5+ Common Criteria certification for many years, safeguarding mission-critical data from cradle-to-grave. Security functions such as full data encryption, cryptographic processors and end-to-end security, combined with the unmatched reliability and availability of the System z server, safeguard that mission-critical data and services are fully protected and available.

Finally, and a key point, LinuxONE promises TCO optimization with pricing your way. A straightforward menu of pricing options includes:

  • A fixed monthly cost usage model for hardware and software resources
  • A per core software pricing model, with 30 days’ notice for cancellation or resource change
  • A 36 month rental option, with buy/replace/return options at contract end

In theory, LinuxONE could be perceived as just a tweak of existing System z Linux options, including the most recent z13 server, Ubuntu and Open Source support. What has changed are user requirements, the requirement for flexible and agile computing, where Cloud, Analytics, Mobile and Security dominate many CIO agendas.

It is my hope that each and every CIO, System z literate or not, at least considers the LinuxONE platform for their mission-critical enterprise workload, as from a simplistic viewpoint, LinuxONE is just another ubiquitous black server box; or is it…

How Can We Energize Our Emerging zCommunity?

No doubt we have all experienced that most things in life and business are cyclical, hence the terms déjà vu, those who cannot remember the past are condemned to repeat it, et al…

For System z, with the glass half-full, there are encouraging signs of pragmatic and collaborative executive leadership from the supplier ecosystem; for example, BMC, Compuware and IBM collaborating on a Standard Software Product Install Methodology For All Vendors. With the glass half-empty, even though there are proven statistics to demonstrate the penetration of System z in global large organizations, there are still some misplaced legacy perceptions associated with System z, from significant executive leaders.

Just as the IBM Mainframe automated business processes several decades ago, introducing IT into the business workplace forever more, we’re currently undergoing another IT revolution. Quite simply, an exponential growth in data, typically associated with Cloud, Analytics, Mobile & Social technologies. With this in mind, we should always be mindful that an IT solution should solve a business challenge and/or provide value for a business requirement. Therefore, the businesses themselves are best placed to articulate the framework and ultimate size and shape of solutions delivered by the vendor community.

The IBM Mainframe environment has always benefitted from User Groups that conceptually represent the customer, articulating requirements to IBM for future IBM Mainframe enhancement. For the avoidance of doubt, SHARE in the USA celebrated its 60th anniversary in 2015, while SHARE Europe, the forerunner of GSE, was founded in 1959. These groups are the ideal forums for collecting and articulating user requirements to IBM, for IBM Mainframe and current System z evolution. Without doubt, there has been a resurgence in support for SHARE USA and GSE events in the last decade or so, but from a dispassionate viewpoint, how many IBM Mainframe customers are members of these User Groups?

As previously referenced, the executive leadership of major System z Mainframe vendors are demonstrating a willingness to collaborate. Perhaps now is an ideal time for the System z Mainframe customer to articulate their requirements to the major System z Mainframe vendors?

My admiration for those volunteers that contribute their time, knowledge and passion to User Groups such as SHARE and GSE is without doubt. I’m also positive that these User Groups would welcome the opportunity to represent a larger number of System z end users, which would no doubt generate more end user presentations at conferences, supplemented by generic and business orientated user requirements for System z ecosystem vendors to consider. This can only happen if the end users of the IBM System z Mainframe platform embrace this opportunity to shape the future of the System z Mainframe, as it rapidly evolves, both in technological advancement and an emerging willingness for collaboration from vendors.

Having worked with IBM Mainframes for over 30 years, I’m no longer surprised by the quality and professionalism of personnel I encounter at user sites. The granularity of knowledge varies, with all-rounders demonstrating savvy technical and commercial knowledge at small capacity installations, and Subject Matter Experts (SMEs), typically at larger capacity installations, demonstrating level 3 diagnostic capability. In an ideal world, the executive leadership at these System z Mainframe user sites would also participate in a forum of like-minded peers, allowing them to embrace and value the System z platform. There are certainly such Senior Management streams at SHARE and GSE events, but once again, if the System z end user isn’t a User Group member and/or doesn’t attend these events…

In our real life domestic environments, we can lobby our local government official (Member Of Congress/Parliament, MC/MP, et al), allowing for generic or specific representation for all people alike. In theory, in an evolving IT world, there is no reason why a System z Mainframe user can’t lobby a vendor for a user requirement. As always, no one of us, is as good as all of us! Therefore just as System z Mainframe vendors are collaborating, as and when practicable, now is the time for the System z Mainframe end users to collaborate, no matter how large or small, for the benefit of all. Given that the forums for collaboration already exist, for example SHARE USA and GSE, System z end users can easily leverage from these User Groups, to generate a coherent and notable voice.

Wouldn’t it be fantastic if 80%+ of System z Mainframe end users were User Group (E.g. SHARE, GSE) members and several of their technicians and one senior manager attended their local annual conference? The cost, minimal, the value, arguably priceless!

From my own viewpoint, I have recent real-life experience of engaging a major System z vendor, with a commercial user requirement collected from tens of smaller capacity Mainframe users, where said submission is being considered. This is perhaps a brave new world…

CICS: The Best Enterprise Transaction Server & So Much More…

In one form or another, CICS has been available since the mid-1960s, almost as long as the IBM Mainframe server itself, released in 1964 and recently celebrating its 50th anniversary.  From a deployment viewpoint, 90%+ of Fortune 500 companies deploy CICS, primarily for its robust and often unbeatable ability to deliver sub-second response times for numerous application transactions.  Whether a large or small IBM Mainframe user, CICS delivers an enterprise class solution for a myriad of business types, and arguably at one time or another, nearly every committed IBM Mainframe installation has implemented CICS.

In the past few decades I have encountered many failed IBM Mainframe migration projects and more often than not, the primary reason for platform migration failure was the inability of the target platform to deliver consistent sub-second transaction response times for a plethora of mission critical business applications.  Similarly, it often follows that if a non-Mainframe environment has been heavily configured to handle the CICS transaction workload, it often fails with the subsequent batch processing, suffering from significantly elongated elapsed times.

CICS has its foundations as an enterprise class transaction server, capturing data input for subsequent storage and retrieval in Database Management Subsystems, but let’s not forget, CICS can do so much more…

Let’s not forget that in 2001, CICS Transaction Server (TS) 2.1 for z/OS introduced the foundation for web services support, including a capability for CICS transactions to be invoked via HTTP requests.  There have been numerous enhancements since, too many to mention, which have evolved CICS into a fully-rounded family of solutions, allowing for cradle to grave application design and delivery.  However, let’s just take some time to review what CICS TS Version 5 has delivered, and how this might benefit the 21st Century business.
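
From the client side, invoking a web-enabled CICS transaction can be as simple as issuing an HTTP request.  The sketch below uses standard Java; the host, port and URI path are hypothetical and would be determined by the CICS region’s TCPIPSERVICE and URIMAP definitions:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CicsHttpClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint exposed by a CICS region's web support.
        URL url = new URL("http://cics.example.com:3080/myapp/inquiry?acct=12345");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            in.lines().forEach(System.out::println);  // response built by the CICS application
        }
    }
}
```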

Recognizing the IBM defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative, CICS is integral to such a business facing initiative, primarily from a Cloud interoperability viewpoint.

The CICS V5.2 Application Server has a capability to host multiple applications, and multiple versions of the same application, simultaneously, primarily due to the substantial increase in platform scalability.  Similarly, a heterogeneous code environment offers application development personnel a single environment to work seamlessly with Java and other legacy languages, such as COBOL, C/C++, PL/I, et al.
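
To illustrate that language mix, here is a minimal sketch of a CICS Java program linking to a COBOL program via the JCICS API (available within a CICS JVM server); the program name "COBPROG" and the 100-byte COMMAREA length are illustrative assumptions that would be agreed with the COBOL copybook:

```java
import com.ibm.cics.server.CommAreaHolder;
import com.ibm.cics.server.Program;

public class LinkToCobol {
    // CICS drives this entry point; any inbound COMMAREA arrives in the holder.
    public static void main(CommAreaHolder cah) {
        try {
            // "COBPROG" is a hypothetical COBOL program defined to this region.
            Program cobol = new Program();
            cobol.setName("COBPROG");

            byte[] commarea = new byte[100];           // layout agreed with the COBOL copybook
            cobol.link(commarea);                      // the EXEC CICS LINK equivalent
            System.out.println(new String(commarea));  // inspect what COBPROG returned
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```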

Combining this heterogeneous code environment with Cloud enablement allows for new application version deployment, without a requirement to disable or remove the previous version from Production processing.  Regardless of the underlying application source code base, end users can access an application without service interruption, as they transition to the new application version.  Similarly, user workloads can be seamlessly redirected to a previous working version of application code, should the new version exhibit errors.

The CICS V5.2 application server delivers a powerful hosting environment for all business applications, new or old.  Application Development teams can provision applications for design and testing within a “real life working” infrastructure, removing the application when testing is complete, or promoting said application ASAP, for mission critical business usage.  This delivers a significant business improvement in application availability, minimizing service downtime.  Therefore, applications stay as current and relevant as possible, reducing the risk of business service outages, delivering better and consistent end user experiences.

As per the IBM zSeries Mainframe heritage, a standard resilience feature of the CICS V5.2 Application Server is an inherent capability to perform, even in the event of a problem scenario.  Cloud enablement delivers a clustering capability, which handles both system and application level failure scenarios.  Seamless and timely problem resolution dictates less downtime, delivering more business availability, instilling a high sense of confidence in end users and consumers alike.

Noting the Security aspect of the IBM CAMSS initiative and the ever-present cybersecurity risk to us all, CICS Application Server also delivers on this front.  Safeguarding that application enhancements can be brought online ASAP and securely, CICS V5.2 Application Server seamlessly integrates with various security software and languages, including the latest WebSphere Application Server (WAS) Liberty Profile, allowing for the portability of Java Enterprise Edition Web applications.  Enhanced capabilities also include Java Naming and Directory Interface (JNDI), Bean Validation, JDBC Type 2 Data Sources and the Java Transaction API (JTA).  SSL support incorporated within the Liberty JVM server HTTP listener is extended to support key certificates stored in System Authorization Facility (SAF) key rings, delivering SSL server authentication.
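
For example, a Liberty-hosted application would typically obtain its database connections via a JNDI lookup rather than hard-coded connection details.  In this minimal sketch, the JNDI name "jdbc/accounts" is a hypothetical entry that would be declared as a dataSource element in the Liberty server.xml:

```java
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceLookup {
    public static Connection getConnection() throws Exception {
        // Resolve the container-managed data source by its JNDI name.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/accounts");
        // With a JDBC Type 2 definition, the driver attaches locally to DB2 on z/OS.
        return ds.getConnection();
    }
}
```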

Like it or not, Cloud computing is a rapidly evolving technology, where the Cloud is integrating increasingly more applications and associated services, delivering cost savings and scalability accordingly.  Of course, for true enterprise class scalability and cost efficiency, arguably the IBM zSeries Mainframe is an ideal platform for Cloud technologies.  Therefore with some slightly modified thinking, organizations can deploy Cloud based solutions and benefit from application promotion benefits, especially with a technology such as CICS.

There is a great presentation named Five Compelling Reasons for Creating a CICS Cloud that provides robust working examples of how to increase application availability, with real-life application development scenarios.

In conclusion, CICS continues to evolve and not only is it the best Enterprise Class Transaction Server, the family of CICS products including its Application Server deliver a 21st Century Cloud computing compatible platform, for the most demanding of business requirements.  Whether considering the IBM defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative or the more traditional Reliability, Availability and Serviceability (RAS) attributes of the zSeries Mainframe server, the latest version of CICS facilitates:

  • Rapid Application Development: Agile methodologies for rapid development, irrespective of programming language (E.g. Java, COBOL, C/C++, PL/I, et al).
  • Seamless & Timely Application Deployment: More frequent application updates, minimizing downtime and associated cost, while leveraging from Cloud functionality, to deploy new applications, application enhancements or bug fixes, side-by-side with existing real-life Production workloads.

z13: A Digital Business Ready Solution?

As per the usual next generation zSeries Server release, IBM announced their latest evolution on 13 January 2015, namely the z13. IBM describe this platform as the most powerful and secure system ever built:

  • First system able to process 2.5 billion transactions per day, built for the mobile economy
  • Makes possible real-time encryption of all mobile transactions at scale
  • First mainframe system with embedded analytics, providing real-time transaction insights 17x faster than competitive systems, at a fraction of the cost

At first glance, feeds and speeds generally don’t enthuse the audience, but if we dig deeper and acknowledge other recent IBM developments incorporating Apple, Twitter and Data Analytics announcements, we perhaps can draw some better business-facing conclusions. IBM have a clearly defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative, seemingly based upon the IDC 3rd platform defined as Social, Mobile, Analytics & Cloud (SMAC).

Industry analysts predict that within the next 3 years, by 2017, SMAC (CAMSS) expenditure will account for 25%+ of total enterprise software market revenue, doubling from ~12% in 2012. In simple terms, this new expenditure opportunity represents $100+ Billion in revenue. We can imagine that all major ISVs will want their share of this market…

Whichever classification you choose, IBM CAMSS or IDC SMAC, IT infrastructures and associated investment currently are and certainly will be heavily influenced by this new world computing paradigm. Like it or not, an ability to perform a transaction anywhere (Mobile), keeping everything simple and networked (Social Media), real time prediction of future customer requirements (Analytics), available anywhere (Mobile), for an alleged fraction of the cost (Cloud), makes sense for the 21st Century business. Ignore this new technology evolution at your peril as it will impact each and every area of the IT enterprise and associated resources, primarily software and supporting hardware.

Did you notice the difference between the IBM classification and IDC? IDC have not considered Security to be a consideration factor worthy of acronym (SMAC) inclusion. In today’s world of cybersecurity, that might be somewhat of an oversight, but we must assume that IDC consider cybersecurity to be a consideration for all of the Analytics, Cloud, Mobile & Social aspects, which of course it is!

If we consider the relative merits of technology platforms from a security viewpoint, the IBM z13 delivers EAL5+ security certification, whereas other non-Mainframe platforms can only currently claim EAL4+ certification.

It is estimated that 55%+ of enterprise (mission critical) transactions are processed by the IBM Mainframe, but this is based on pre-mobile workloads. It therefore makes commercial sense for IBM to safeguard that their flagship platform not only maintains the existing IBM Mainframe customer base, but also captures new and mobile-centric workloads.

Having considered the business requirements for today’s IT business, let’s now classify the new features of the z13 platform:

  • Up to 40% more total system capacity compared to the zEC12.
  • Up to 10 terabytes (TB) of available Redundant Array of Independent Memory (RAIM) real memory per server.
  • Cryptographic performance improvements with new Crypto Express5S.
  • Economies of scale with simultaneous multithreading delivering more throughput for Linux and zIIP-eligible workloads.
  • Improved performance of complex mathematical models, perfect for analytics processing, with Single Instruction Multiple Data (SIMD).
  • IBM zAware cutting-edge pattern recognition analytics for fast insight into system health extended to Linux on z Systems.
  • A reduction in elapsed time for I/O-bound batch jobs with new FICON Express16S versus FICON Express8S.
  • Support for larger memory configurations planned to be supported on z/OS systems, which can be used to improve transaction response times, lower CPU costs, simplify capacity planning and ease deploying memory-intensive workloads. (The IBM z13 offers up to 10 TB memory.)
  • I/O service time improvement when writing data remotely using the new zHPF Extended Distance II.
  • Support for up to 256 coupling CHPIDs, which provides enhanced connectivity and scalability for a growing number of coupling channel types.
  • IBM Integrated Coupling Adapter (ICA SR), which offers greater short reach coupling connectivity than existing link technologies and enables greater overall coupling connectivity per IBM z13 than prior server generations.
  • Capability to extend z/OS workload management policies into the SAN fabric.
  • New rack-mounted Hardware Management Console (HMC), helping to save space in the data center.
  • Non-raised floor option, offering flexible possibilities for the data center.
  • Optional water cooling, providing the ability to cool systems with user-chilled water.
  • Optional high-voltage dc power, which can help IBM z Systems clients save on their power bills.
  • Optional top exit power and I/O cabling designed to provide increased flexibility.
  • New IBM z BladeCenter Extension (zBX) Model 004 in support of heterogeneous resources managed by IBM z Unified Resource Manager.

As we all know, Moore’s Law had to end sometime and this is true for System z CPU chips. The zEC12 CPU was often claimed to be the fastest commercial processor, with a 32 nm core and a 5.5 GHz rating. The z13 chip runs a 22 nm core at 5 GHz, at first glance ~10% slower than the zEC12. Nevertheless, the new z13 chip delivers a ~10% performance increase, due to advances in core design, with better branch prediction and pipelining. Noteworthy is the slightly slower clock speed of the z13 chip, reducing heat output, probably signifying that ~5 GHz is the ceiling for CPU chips in the near future.

However, for the z13, the doubling of performance still applies for many other resources:

  • Cryptographic coprocessor performance (~2x)
  • Channel speed (~2x)
  • I/O bandwidth (~2x)
  • Memory/Cache performance (~2x)
  • Memory capacity (~3x)

Once again, classifying these technological advances in terms of mobile business, the z13 delivers real-time encryption of mobile transactions, protecting transaction data, delivering consistent response times for a quality customer experience. Overall, IBM claims the z13 delivers a potential for ~36% better response time, ~61% better throughput and ~17% lower cost per mobile transaction.

A major and subtle change introduced with the z13 is Simultaneous MultiThreading (SMT). SMT allows 2 active instruction streams per core, each dynamically sharing the core’s execution resources. SMT will be available in IBM z13 for workloads running on the Integrated Facility for Linux (IFL) and the IBM z Integrated Information Processor (zIIP).

Each software Operating System/Hypervisor has the ability to intelligently drive SMT in a way that is best for its unique requirements. z/OS SMT management consistently drives the cores to high thread density, in an effort to reduce SMT variability and deliver repeatable performance across varying CPU utilization, thus providing more predictable SMT capacity. z/VM SMT management optimizes throughput by spreading a workload over the available cores until it demands the additional SMT capacity.

From a capacity planning and performance measurement viewpoint, a slight note of caution is warranted. Although the z13 CPU chip delivers increased CPU capacity, the raw clock speed is slower and there are considerations for SMT. A former IBM staffer, Bob Rogers, has written a great article on this SMT subject matter, which should be on your reading list!

In conclusion, the z13 announcement is another step forward for zSeries Mainframes. If you consider this announcement as just another next generation zSeries Mainframe announcement, you’re not treating your business or yourself with the respect they deserve. Instead, please consider this z13 announcement as an evolution from an enterprise solution delivery viewpoint. Primarily, consider the 21st century business keywords, in no particular order, of Analytics, Cloud, Mobile, Social & Security.

Cloudy With A Chance Of Mainframe?

With the advent of Computer Generated Imagery (CGI) there is seemingly no end to the number of books, especially “children’s” books that can be encapsulated and delivered in animated movie format.  I’m always surprised and arguably never surprised by the messaging in these stories; supposedly written for the younger person, but invariably delivering a message of good morals, ethics and human qualities, typically finding creative solutions to a myriad of problems.  Of course, we’re all human, and typically as human beings, we’re responsible for the majority of our problems, either knowingly, or not.

Cloudy with a Chance of Meatballs is a book based on a town named Chewandswallow characterized by its strange daily meteorological pattern, providing townsfolk with all of their required daily meals by raining food.  Although the residents of the town enjoy a lifestyle devoid of any grocery shopping or cookery, the weather unexpectedly and inexplicably takes a turn for the worse, devastating the local community with destructive and uncontrollable storms of either unpleasant or dangerously oversized foods, resulting in unstoppable catastrophes for the townspeople.  Their lives endangered by the threats of the storms, they relocate to a different community of average meteorological patterns, safe from the hazards that once were presented by raining meals.  However, they are forced to learn how to obtain food the normal way.

So what?  Continuing with the creativity thought, the ethos of this story might be somewhat analogous to the sometimes polarized opinion between Distributed Systems and Mainframe computing.  Depending on your philosophical bent, or which side of the fence you sit on, there is only one choice, even if this seemingly perfect and de facto world is generating significant challenges…

Recently, z/OS 2.1 became Generally Available (GA) and most notably from my viewpoint was its continued and demonstrable ability to participate in cloud computing environments.  So is the IBM Mainframe ready for the cloud?  Wasn’t it always!

The fundamental ethos of the Mainframe environment is virtualization and was forever thus.  The Mainframe has always shared the basic IT architecture components, including CPU, Memory, Storage, Networking and other peripherals, originally in a physical single-image structure, but since the late 1990s in a shared (SYSPLEX) complex of interconnected physical servers (CPCs).  So the Mainframe is, and always has been, ready for “Prime Time Cloud”!

z/OS V2.1 is a platform designed to dynamically respond and scale to workload change with enhancements to scalability and performance that cover operations, I/O, virtual storage constraint relief, memory management, and more.  These enhancements are suitable for organizations that would like to catalyse a journey to highly scalable virtualized solutions like cloud.

IBM delivers improved scalability and performance for outstanding throughput and service within existing Mainframe environments.  Smarter scalability can better prepare the user for growth and spikes in workloads while maintaining the qualities of service and balanced design that customers have come to expect of the IBM mainframe.

As customers consider all the components of downtime, the true costs can be surprising, which is why superior availability continues to remain a key factor in platform selection. With z/OS V2.1, IBM introduces new capabilities designed to improve upon the already legendary z/OS system availability.  The industry-leading resiliency and high availability of System z remain key reasons why organizations keep their most critical processing on System z.  With its attention to outage reduction, the availability of System z and z/OS is well recognized in the industry.  In z/OS V2.1, IBM continues enhancements that improve critical IT systems availability, helping achieve an even higher level of service for customers.

Some of the “cloud friendly” z/OS 2.1 benefits include:

  • Support for Shared Memory Communications-RDMA (SMC-R), for low latency, application transparent communications to help you move data quickly between z/OS images on the same CPC or between CPCs.
  • Flash Express support for certain coupling facility list structures, such as IBM WebSphere MQ for z/OS, V7 (5655-R36), in order to strengthen resiliency for enterprise messaging workload spikes.
  • For zEC12 or zBC12 systems, shared engine coupling facilities can be used in many production environments, for improved economics by offering a high level of performance without requiring the use of dedicated CF engines.
  • EXCP support for System z High-Performance FICON (zHPF) is designed to help improve I/O start rates and improve bandwidth for more workloads on existing hardware and fabric.
  • Usability and performance improvements for z/OS FICON Discovery and Auto Configuration (zDAC), including discovery of directly attached devices.
  • Serial Coupling Facility structure rebuild processing, designed to help improve performance and availability by rebuilding coupling facility structures more quickly and in priority order.
  • 100-way symmetric multiprocessing (SMP) support in a single LPAR on IBM zEC12 or zBC12 systems.  Support for an architectural limit of 4 TB of real memory per LPAR.
  • Support for 2 GB pages is provided on zEC12 and zBC12 systems.  This feature is designed to reduce memory management overhead and improve overall system performance by enabling middleware to use 2 GB pages.  These improvements are expected due to improved effective translation lookaside buffer (TLB) coverage and a reduction in the number of steps the system must perform to translate a 2 GB page virtual address.
  • Capacity Provisioning is designed to provide support for manual and policy-based management of Defined Capacity and Group Capacity.  This function broadens the range of automatic, policy-based responses available to help manage capacity shortage conditions when WLM cannot meet your workload policy goals.

There are numerous new and enhanced functions delivered with z/OS 2.1, too numerous to mention, but categorised as Quality Of Service, Availability, Networking, Security, Data Usability, Integrity, Systems Management, Application Development, Simplification & Usability, International Standards Compliance, et al.

So let’s not forget, this foundation and support for an IT infrastructure and its supporting eco (software) system is in one scalable, secure and “zero” downtime environment!

So maybe we, the open-minded and enlightened generation of parents (oops, I forgot, Grandparents for us Dinosaur Mainframe folk!) who can now “access” children’s stories, even in the form of a CGI animated movie, can be dispassionate enough to consider all platforms, Distributed and Mainframe, for our evolving business and associated IT requirements.

So you decide, can it be Cloudy With A Chance Of Mainframe?  To overlook such an option, might be an oversight, just as overlooking the abundance of human stories, classified as children’s books or not…