z Systems Software & Applications Currency: MQ Continuous & as a Service Delivery Models

In a rapidly-moving technology environment where DevOps is driving innovation for the rapid delivery of applications, are there any innovations for the related z Systems infrastructure (E.g. z/OS, CICS, DB2, IMS, MQ, WebSphere AS) that can deliver faster software and indeed firmware updates?

In April 2016, IBM announced MQ V9.0, delivering new and enhanced capabilities facilitating a Continuous Delivery and support model.  The traditional Long Term Support release offers the ubiquitous collection of aggregated fix-packs, applied to the delivered MQ V9.0 function.  The new Continuous Delivery release delivers both fixes and new functional enhancements as a set of modification-level updates, facilitating more rapid access to functional enhancements.

From a terminology viewpoint, the Continuous Delivery (CD) support model introduces new function and enhancements, made available by incremental updates within the same version and release.  Additionally, there will also be a Long Term Support (LTS) release available for deployments that require traditional medium to long-term security and defect fixes only.  Some might classify such LTS fixes as Service Pack or Level Set patching.  The Continuous Delivery (CD) support approach delivers regular updates with a short-term periodic frequency for customers wanting to exploit the latest features and capabilities of MQ, without waiting for the next long term support release cycle.  In terms of timeframe, although there is no fixed time period associated with a CD or LTS release, typically CD is every few months, while LTS releases are every two years or so.  In actual IBM announcement terms, the latest MQ release was V9.0.3 in May 2017, meaning four MQ V9.0.n release activities in a ~13 month period, approximately quarterly…

The benefits of this CD support model are obvious: those organizations who consider themselves to be leading-edge or “amongst the first” can leverage from new function ASAP, with a modicum of confidence that the code has a good level of stability.  Those customers with a more cautious approach can continue their ~2 year software upgrade cycle, applying the LTS release.  As always with software maintenance, there has never been a perfect approach.  Inevitably there will be High Impact or PERvasive (HIPER) and PTF-in-Error (PE) PTF requirements, as software function stability is forever evolving.  Therefore, arguably those sites leveraging from the latest function have always been running from a Continuous Delivery software maintenance model; they just didn’t know when and how often!

Of all the major IBM z Systems subsystems, it is no surprise that MQ was the first to introduce this new software support model; for many reasons, primarily middleware and interoperability based, the role of MQ dictates that it needs a Continuous Delivery (CD) model.

At this stage, let’s remind ourselves of the important role that MQ plays in our IT infrastructures.  IBM MQ is a robust messaging middleware solution, simplifying and accelerating the integration of diverse applications and business data, typically distributed over multiple platforms and geographies.  MQ facilitates the assured, secure and reliable exchange of information between applications, systems, services, and files.  This exchange of information is achieved through the sending and receiving of message data through queues and topics, simplifying the creation and maintenance of business applications.  MQ delivers a universal messaging solution incorporating a broad set of offerings, satisfying enterprise requirements, in addition to providing 21st century connectivity for Mobile and the Internet of Things (IoT) devices.

Because of the centralized and pivotal interconnectivity role that MQ plays, it might be hampered by the DevOps requirement of rapid application delivery, for both planned and unplanned business requirements.  Therefore, even before the concept of MQ Continuous Delivery (CD) was announced in April 2016, there was already talk of MQ as a Service (MQaaS).

As per any major z Systems subsystem, traditionally IBM MQ was managed by a centralized messaging middleware team, collaborating with their Application, Database and Systems Management colleagues.  As per the DevOps methodology, this predictable and centralized approach does not lend itself to rapid and agile Application Development.  Quite simply, an environment management decentralization process is required, to satisfy the ever-increasing speed and diversity of application design and delivery requests.  By definition, MQ seamlessly interfaces with so many technologies, including but not limited to, Amazon Web Services, Docker, Google Cloud Platform, IBM Bluemix, JBoss, JRE, Microsoft Azure, Oracle Fusion Middleware, OpenStack, Salesforce, Spark, Ubuntu, et al.

The notional concept of MQ as a Service (MQaaS), delivers a capability to implement self-service portals, allowing Application Developers and their interconnected Line of Business (LOB) personnel to drive changes to the messaging ecosystem.  These changes might range from the creation or deletion of a messaging queue to the provision of a highly available and scalable topology for a new business application.  The DevOps and Application Lifecycle Management (ALM) philosophy dictates that the traditional centralized messaging middleware team must evolve, reducing human activity, by automating their best practices.  Therefore MQaaS can increase the speed at which the infrastructure team can deliver new MQ infrastructure to their Application Development community, while safeguarding the associated business requirements.

MQ provides a range of control commands and MQ Script Commands (MQSC) to support the creation and management of MQ resources using scripts.  Programmatic resource access is achievable via MQ Programmable Command Format (PCF) messages, once access to a queue manager has been established.  Therefore MQ administrators can create workflows that drive these processes, delivering a self-service interface.  Automation frameworks, such as UrbanCode Deploy (UCD), Chef and Puppet, can be used to orchestrate administrative operations for MQ, creating and managing entire application or server environments.  Virtual machines, Docker containers, PureApplication System and the MQ Appliance itself can be used alongside automation frameworks to create a flexible and scalable ecosystem, for delivering the MQaaS infrastructure.
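As a minimal sketch of such a self-service workflow (the request format and queue attributes below are illustrative assumptions, not an IBM-defined API; MQSC itself and the runmqsc control command are standard MQ), a simple request might be translated into MQSC commands for an administrator-approved pipeline:

```python
# Illustrative sketch only: the request fields and defaults are assumptions.
# The MQSC generated (DEFINE QLOCAL, MAXDEPTH, REPLACE) is standard MQ syntax.

def build_mqsc(request):
    """Translate a simple self-service request into an MQSC script."""
    queue = request["queue"].upper()
    depth = request.get("max_depth", 5000)   # assumed default queue depth
    return "\n".join([
        f"DEFINE QLOCAL('{queue}') MAXDEPTH({depth}) REPLACE",
        f"ALTER QLOCAL('{queue}') DESCR('Provisioned via self-service')",
    ])

# In a real workflow the generated script would be piped to the queue
# manager, e.g. subprocess.run(["runmqsc", "QMGR1"], input=script, text=True)
```

The point of the sketch is that the messaging team encodes its best practices once, in the generator, while Application personnel only supply the request.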

Integrating the MQ as a Service concept within your DevOps and Application Lifecycle Management process delivers the following benefits:

  • Development Agility: Devolving traditional MQ administration activities to Application and Line of Business personnel allows them to directly provision or update the associated messaging resources. This optimizes the overall process, while DevOps processes facilitate the requisite IT organization communication.
  • Process Standardization: Enabling a self-service interface to Application and Line of Business personnel delivers a single entry point for messaging configuration changes. This common interface will leverage from consistent routines and workflows to deploy the necessary changes, enforcing standards and consistency.
  • Personnel Optimization: Self-service interfaces used by Application and Line of Business personnel allow them to focus on core application requirements, primarily messaging and Quality of Service (QoS) related. In such an environment, the background process of performing the change is immaterial; timely change implementation is the most important factor.
  • Environment Interoperability: An intelligent and automated self-service interface allows for dynamic provisioning of systems and messaging resources for development and testing purposes. This automation can simplify the promotion of changes throughout the testing lifecycle (E.g. Development, Test, Quality Assurance, Production, et al).  As and when required, such automation can provide capacity-on-demand type changes, dynamically scaling an application to satisfy ever-changing and unpredictable business requirements.

In conclusion, DevOps is an all-encompassing framework and one must draw one’s own conclusions as to whether software update frequency timescales will reduce for major subsystems such as CICS, DB2, IMS and even the underlying Operating System itself, namely z/OS.  Conversely, the one major z Systems subsystem with so many interoperability touch points, namely MQ, is the obvious choice for applying DevOps techniques to underlying subsystem software.  For MQ, the use of a Continuous Delivery (CD) software support model safeguards that the latest new function and bug fix capability is delivered in a timely manner for those organizations striving for an agile environment.  Similarly, the consideration of devolving traditional MQ systems administration activities, via intelligent, automated and self-service processes to key Application and Line of Business personnel makes sense, evolving a pseudo MQaaS capability.

Blockchain: A New Application Development Paradigm – What About System z?

Since the inception of Data Processing and the advent of the IBM Mainframe there has been a progressive movement to deliver the de facto “System Of Record (SOR)”, typically classified as a centralised database and related applications.  The key or common denominator for this “Golden Record” is somewhat arbitrary, but more often than not, for most businesses, it will be customer or product identity related.  The benefit of identifying and establishing an SOR is the reuse of this data, for a multitude of different business usage scenarios.

From an application programming viewpoint, historically there was a structured approach when delivering new business function, whether with bespoke programs or Commercial Off the Shelf (COTS) software packages.  More recently data analytics has accelerated this approach, where new business opportunities can be identified from data trends, with near real-time processing, while DevOps frameworks allow for rapid application delivery and implementation.  However, what if there was a new approach with a different type of database and as a consequence, a new approach to application programming?

From a simplistic viewpoint, Blockchain architecture is analogous to traditional database processing, but the interaction with said Blockchain database is vastly different, changing from a centralised to a decentralised focus.  Therefore for application developers, Blockchain is a paradigm-shifting architecture, in how software applications will be architected and coded.  Recognition of this new and rapidly emerging computing paradigm is of vital importance, because it’s the cornerstone for the creation of decentralised applications, a logical and natural evolution from distributed computing architectural constructs.

If we take some time to step back from the Information Technology world and consider the possibilities when comparing a centralised versus decentralised approach, the realm of possibility exists for a truly global interconnectivity approach, which isn’t limited to a specific discrete focus (E.g. Governance, Market, Business Sector, et al).  In theory, decentralised applications might deliver a dynamic and highly collaborative business approach…

A Blockchain is a pseudo linear container space (block) to store data for “controlled public usage”.  In theory, with the right credentials, this data can be accessed by any user!  The Blockchain container is secured with the originator’s key, so only the key holder or authorised program can unlock the container data.  This is the fundamental difference between a database and a Blockchain.  For a Blockchain, the header record can be considered “eligible for Public usage”.

The data stored within a Blockchain might be considered as a “token”, the most obvious implementation being Bitcoin.  Generically, Blockchain might be considered as an alternative and flexible data transfer system that no private or public authority, and especially no malicious third party, can tamper with, because of the encryption process.  Put really simply, the data header has “Public” visibility, but data access requires “Private” authenticated access.

From a high-level viewpoint, Blockchain can be considered as an architectural approach, connecting an infinite number of peer computers, collaborating with a generic process for releasing or recording data, based upon cryptographic transactions.

One must draw one’s own conclusions as to whether this Centralised to Distributed to Decentralised data and application programming approach is the way forward for their business.

Decentralised Consensus is the inverse of a centralised approach where one central database was accessed to validate transaction processing.  A decentralised scheme transfers authority and trust to a decentralised virtual network, enabling processing nodes to continuously access or record transactions within a public block, creating a unique chain for modification operations, hence the Blockchain terminology.  Each successive data block contains a unique fingerprint (hash) of the previous block.  The basic premise of cryptographic processing applies, where hash codes are used to secure transaction origination authentication, eliminating the requirement for centralised processing.  Duplicate transaction processing is eliminated because of Blockchain and associated cryptographic processing.
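The hash-chaining described above can be sketched in a few lines (illustrative names; SHA-256 assumed as the hash function): each block commits to the hash of its predecessor, so tampering with any block invalidates every successor in the chain.

```python
import hashlib
import json

def block_hash(data, prev_hash):
    """Fingerprint a block's content together with its predecessor's hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, data):
    """Append a block that commits to the hash of the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64   # genesis predecessor
    chain.append({"data": data, "prev": prev, "hash": block_hash(data, prev)})
    return chain

def verify(chain):
    """Re-derive every hash; any tampered block breaks all its successors."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["data"], prev):
            return False
        prev = block["hash"]
    return True
```

A real Blockchain adds signatures, timestamps and a consensus protocol on top, but the chained-hash structure is the part that makes historical records tamper-evident.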

This separation of consensus (data access) from the actual application itself is the fundamental building block for a decentralised application programming approach.

Smart Contracts are the building blocks for decentralised applications.  A smart contract is a small self-contained program that you entrust with a value unit (token) and associated rules.  The simple philosophy of a smart contract is to programmatically facilitate transactional contractual governance between two or more parties via the Blockchain.  This eliminates the requirement for an arbitrary 3rd party authority for governance, when two or more parties can agree an exchange between themselves.  Even today, this type of approach is not unusual between organizations, typically based upon a data (file) interchange standard (E.g. Banking).

Put simply, smart contracts eliminate the requirements of 3rd party intermediaries for transaction processing.  Ideally, the collaborating parties define and agree the required policy, embedded inside the business transaction, enabling a self-managed process between nodes (computers) that represent the reciprocal interests of the associated users and owners.
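As a toy illustration of such a self-managed process (all names here are assumptions, not any specific Blockchain platform’s API), a smart contract can be modelled as a small program that holds a token and releases it only when the rule agreed between the parties is satisfied:

```python
# Illustrative sketch: a two-party escrow rule with no third-party authority.
# Class and field names are invented for this example.

class EscrowContract:
    """Releases a token to the seller only when both parties confirm."""
    def __init__(self, token, buyer, seller):
        self.token, self.buyer, self.seller = token, buyer, seller
        self.confirmed = set()
        self.released = False

    def confirm(self, party):
        """Record a party's confirmation; release once both have confirmed."""
        if party in (self.buyer, self.seller):
            self.confirmed.add(party)
        if {self.buyer, self.seller} <= self.confirmed:
            self.released = True   # agreed rule satisfied: no intermediary needed
        return self.released
```

On a real platform this logic would execute on the participating nodes and its outcome would be recorded in the Blockchain, but the principle is the same: the policy is embedded inside the transaction, not adjudicated by an intermediary.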

Trusted Computing combines the architectural foundations of Blockchain, decentralised consensus and smart contracts, enabling the spread of resources and transactions with a trusted “peer-to-peer” relationship, in theory enabling trust between numerous nodes (computers).

Previously, institutions and central organizations were necessary as trusted authorities.  Deploying a Blockchain approach, these historically centralised functions can be simplified via smart contracts, governed by decentralised consensus within a Blockchain.

Proof of Work is an important concept to identify the unequivocal authenticator of transactions, allowing authorised access to participate in the Blockchain system.  Proof of work is a fundamental building block because once created, it cannot be modified, being secured by cryptographic hashes that ensure its authenticity.  This immutability prevents users from changing Blockchain records without reprocessing the “proof of work”.
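The idea can be sketched with a simplified Bitcoin-style hash puzzle (names and difficulty target are illustrative assumptions): a nonce is searched for until the block hash meets the target, so recreating or altering a record means redoing that search.

```python
import hashlib

def proof_of_work(data, difficulty=4):
    """Search for a nonce so the hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest   # proof found: cheap to verify, costly to produce
        nonce += 1
```

Verification is a single hash, but production requires on average 16^difficulty attempts, which is exactly why tampering with an established chain is prohibitively expensive.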

It therefore follows that proof of work will be expensive to maintain, with likely future scalability and security issues, depending on the data user (miner) requirements and incentives, which in all likelihood will reduce over time.  As we all know, most data access is high when data has been recently created, rapidly decreasing to low or even null after a limited period of time.

Proof of Stake is a more elegant alternative approach, determining which user can update the consensus while preventing unwanted forking of the underlying Blockchain; it is more cost-efficient, while being more difficult and expensive to compromise.
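Conceptually (and this is a deliberately simplified sketch with invented names), the next consensus updater is selected with probability proportional to its stake; the seed below stands in for the shared randomness a real protocol would derive from the chain itself:

```python
import random

def select_validator(stakes, seed):
    """Choose the next consensus updater with probability proportional to stake."""
    rng = random.Random(seed)        # stand-in for chain-derived shared randomness
    validators = sorted(stakes)      # deterministic ordering of validator names
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]
```

Because selection replaces brute-force hashing, no energy-intensive search is required, while an attacker would need to acquire a large share of the total stake to bias the outcome.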

Once again, if we consider the benefits of Blockchain from a business processing viewpoint, there is a clear and present opportunity to eliminate manual or semi-automated processes, both internal and external to the business.  This could expedite the completion of processes that previously required days or even weeks to complete, while reducing the potential for human error.  A simple example might be a car purchase, based upon 3rd party finance.  Such a process typically includes 3rd party data requirements, for vehicle provenance, credit scoring, identity proof, et al.  If the business world looks at the big picture, they can simplify and automate their processes, by collaborating with existing and more likely, yet to be identified partners.  The benefits are patently obvious…

From a System z viewpoint, recent technological developments leverage from existing IBM resources, including the LinuxONE, Bluemix and Watson offerings:

  • LinuxONE: The System z and LinuxONE platforms are best placed to drive Blockchain innovation, arguably via the Open Mainframe Project and Hyperledger initiatives. IBM supports testing and development of the open Blockchain fabric code for developers on its LinuxONE Community Cloud.
  • Bluemix: Via the IBM Blockchain services available on Bluemix, developers can access fully integrated DevOps tools for creating, deploying, running and monitoring Blockchain applications on the IBM Cloud.
  • Watson: Leveraging from the Watson IoT Platform, IBM will enable information from devices such as RFID-based locations, barcode-scan events or device-reported data, to be used within the IBM Blockchain. Devices will be able to communicate to Blockchain based ledgers to update or validate smart contracts.

From a business benefits viewpoint, the IBM System z platform is ideally placed for Blockchain deployment, being a highly secure EAL5+ certified platform.  Hardware accelerators deliver high-speed secure encryption and hashing, supplemented by tamper-proof Crypto Express security modules for key management.  Numerous memory resident partitions can also be created rapidly to keep ledgers separate and secure.  As per usual, the System z platform has the fastest commercial processor, a highly scalable I/O system to handle massive numbers of transactions, ample memory for Blockchain operations and an optimised secure network for Blockchain peer communications.

Returning full circle to where this article started, the System z Mainframe is arguably the de facto System Of Record platform for the world’s traditional Fortune 500 or Global 2000 businesses.  These well-established businesses have in all likelihood spent several decades or more establishing this centralised application programming and database usage model.  The realm of opportunity exists to make this priceless data asset available to numerous businesses, both large and small, via Blockchain architectures.  If we consider just one simple example, a highly globalised and significant Banking institution could facilitate the creation of a new specialised and optimised “challenger banking” operation, for a particular location or business sector, leveraging from their own internal System Of Record data and perhaps, vital data from another source.  One could have the hypothetical debate as to whether a well-established bank is best placed for such a new offering, but with intelligent collaboration, delivering a valuable service to a new market, where such a service has not been previously possible, doesn’t everybody win?

Perhaps with Blockchain, truly open and collaborative cooperation is possible, both from a business and technology viewpoint.  For example, why wouldn’t one of the new Fortune 500 companies such as a Social Media company with billions of users, look to a traditional Fortune 500 company deploying an IBM System z Mainframe, to expand their revenue portfolio from being advertising driven, to include service provision, whatever that might be.  Rightly or wrongly, if such a Social Media company is a user’s preferred portal for accessing a plethora of other company resources (E.g. Facebook Login), why wouldn’t this user want to fully process some other business transaction (E.g. Financial) via said platform?  However unlikely, maybe Blockchain can truly simplify and expedite Globalisation, for the benefit of users and businesses alike…