Blockchain: A New Application Development Paradigm – What About System z?

Since the inception of Data Processing and the advent of the IBM Mainframe, there has been a progressive movement towards delivering the de facto “System Of Record (SOR)”, typically classified as a centralised database and its related applications.  The key or common denominator for this “Golden Record” is somewhat arbitrary, but more often than not, for most businesses, it will be customer or product identity related.  The benefit of identifying and establishing an SOR is the reuse of this data for a multitude of different business usage scenarios.

From an application programming viewpoint, historically there was a structured approach to delivering new business function, whether with bespoke programs or Commercial Off-The-Shelf (COTS) software packages.  More recently, data analytics has accelerated this approach, where new business opportunities can be identified from data trends with near real-time processing, while DevOps frameworks allow for rapid application delivery and implementation.  However, what if there was a new approach, with a different type of database and, as a consequence, a new approach to application programming?

From a simplistic viewpoint, Blockchain architecture is analogous to traditional database processing, although the interaction with said Blockchain database is vastly different, changing from a centralised to a decentralised focus.  Therefore, for application developers, Blockchain is a paradigm-shifting architecture, changing how software applications will be architected and coded.  Recognition of this new and rapidly emerging computing paradigm is of vital importance, because it’s the cornerstone for the creation of decentralised applications, a logical and natural evolution from distributed computing architectural constructs.

If we take some time to step back from the Information Technology world and compare a centralised versus decentralised approach, the realm of possibility exists for a truly global interconnectivity approach, one that isn’t limited to a specific discrete focus (E.g. Governance, Market, Business Sector, et al).  In theory, decentralised applications might deliver a dynamic and highly collaborative business approach…

A Blockchain is a pseudo-linear container space (block) for storing data for “controlled public usage”.  In theory, with the right credentials, this data can be accessed by any user!  The Blockchain container is secured with the originator’s key, so only the key holder or an authorised program can unlock the container data.  This is the fundamental difference between a database and a Blockchain.  For a Blockchain, the header record can be considered “eligible for Public usage”.

The data stored within a Blockchain might be considered as a “token”, the most obvious implementation being Bitcoin.  Generically, Blockchain might be considered as an alternative and flexible data transfer system that no private or public authority, and especially no malicious third party, can tamper with, because of the encryption process.  Put really simply, the data header has “Public” visibility, but data access requires “Private” authenticated access.
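To illustrate this public header versus private payload distinction, the following minimal Python sketch builds a hypothetical block whose header is readable by anybody, while the payload is sealed with the originator’s key.  It uses the third-party cryptography package for the encryption; the field names and values are purely illustrative and do not represent any particular Blockchain implementation.

  # Illustrative sketch only: a "block" whose header is publicly readable,
  # while the payload can only be unlocked by the holder of the originator's key.
  # Requires the third-party "cryptography" package (pip install cryptography).
  import json
  from cryptography.fernet import Fernet

  originator_key = Fernet.generate_key()   # held privately by the originator
  cipher = Fernet(originator_key)

  block = {
      # Header: "eligible for Public usage", so anyone may read it
      "header": {"block_id": 1, "originator": "party-a", "timestamp": "2016-06-01T09:00:00Z"},
      # Payload: sealed with the originator's key
      "payload": cipher.encrypt(json.dumps({"token": "asset-1234", "value": 100}).encode()),
  }

  # Any participant can inspect the header...
  print(block["header"])
  # ...but only an authorised holder of the key can unlock the data
  print(json.loads(cipher.decrypt(block["payload"]).decode()))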

From a high-level viewpoint, Blockchain can be considered as an architectural approach, connecting a potentially unlimited number of peer computers, collaborating with a generic process for releasing or recording data, based upon cryptographic transactions.

One must draw one’s own conclusions as to whether this Centralised to Distributed to Decentralised data and application programming approach is the way forward for one’s own business.

Decentralised Consensus is the inverse of the centralised approach, where one central database was accessed to validate transaction processing.  A decentralised scheme transfers authority and trust to a decentralised virtual network, enabling processing nodes to continuously access or record transactions within a public block, creating a unique chain for modification operations, hence the Blockchain terminology.  Each successive data block contains a unique fingerprint (hash) of the previous block.  The basic premise of cryptographic processing applies, where hash codes are used to authenticate transaction origination, eliminating the requirement for centralised processing.  Duplicate transaction processing is also eliminated because of Blockchain and its associated cryptographic processing.
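The “fingerprint of the previous block” principle can be demonstrated in a few lines of Python, using the standard hashlib module.  This is a deliberately simplified sketch of a hash chain, not a production consensus mechanism; the block and transaction fields are invented for illustration.

  # Illustrative sketch: each block records the hash of its predecessor,
  # so altering any earlier block invalidates every subsequent fingerprint.
  import hashlib
  import json

  def block_hash(block):
      """Deterministic SHA-256 fingerprint of a block's contents."""
      return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

  def append_block(chain, transactions):
      previous = chain[-1] if chain else None
      chain.append({
          "index": len(chain),
          "previous_hash": block_hash(previous) if previous else "0" * 64,
          "transactions": transactions,
      })

  chain = []
  append_block(chain, [{"from": "party-a", "to": "party-b", "amount": 10}])
  append_block(chain, [{"from": "party-b", "to": "party-c", "amount": 5}])

  # Tampering with block 0 breaks the link recorded in block 1
  chain[0]["transactions"][0]["amount"] = 999
  print(block_hash(chain[0]) == chain[1]["previous_hash"])  # False: the chain no longer verifies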

This separation of consensus (data access) from the actual application itself is the fundamental building block for a decentralised application programming approach.

Smart Contracts are the building blocks for decentralised applications.  A smart contract is a small self-contained program that you entrust with a value unit (token) and associated rules.  The simple philosophy of a smart contract is to programmatically facilitate transactional contractual governance between two or more parties via the Blockchain.  This eliminates the requirement for an arbitrating 3rd party authority for governance, when two or more parties can agree the exchange between themselves.  Even today, this type of approach is not unusual between organizations, typically based upon a data (file) interchange standard (E.g. Banking).

Put simply, smart contracts eliminate the requirement for 3rd party intermediaries in transaction processing.  Ideally, the collaborating parties define and agree the required policy, embedded inside the business transaction, enabling a self-managed process between nodes (computers) that represent the reciprocal interests of the associated users and owners.
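As a toy illustration of this self-managed policy idea, the Python sketch below models a hypothetical escrow style contract that holds a token and releases it only when both parties have agreed.  It is not Hyperledger chaincode or any real smart contract language; the class, party names and token are invented purely to show the shape of the logic.

  # Toy illustration only (not real chaincode): a self-contained "contract"
  # that holds a value unit and releases it only when the agreed policy is met.
  class EscrowContract:
      """Holds a token until both parties have signalled agreement."""

      def __init__(self, seller, buyer, token):
          self.seller, self.buyer, self.token = seller, buyer, token
          self.approvals = set()
          self.settled = False

      def approve(self, party):
          if party not in (self.seller, self.buyer):
              raise ValueError(f"{party} is not a party to this contract")
          self.approvals.add(party)

      def settle(self):
          # The embedded policy: release the token only on mutual agreement,
          # with no 3rd party intermediary involved.
          if self.approvals == {self.seller, self.buyer} and not self.settled:
              self.settled = True
              return f"{self.token} transferred from {self.seller} to {self.buyer}"
          raise RuntimeError("Contract conditions not yet met")

  contract = EscrowContract("party-a", "party-b", "vehicle-title-1234")
  contract.approve("party-a")
  contract.approve("party-b")
  print(contract.settle())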

Trusted Computing combines the architectural foundations of Blockchain, decentralised consensus and smart contracts, enabling the spread of resources and transactions with a trusted “peer-to-peer” relationship, in theory enabling trust between numerous nodes (computers).

Previously, institutions and central organizations were necessary as trusted authorities.  Deploying a Blockchain approach, these historically centralised functions can be simplified via smart contracts, governed by decentralised consensus within a Blockchain.

Proof of Work is an important concept, identifying the unequivocal authenticator of transactions and allowing authorised participation in the Blockchain system.  Proof of work is a fundamental building block because, once created, it cannot be modified, being secured by cryptographic hashes that ensure its authenticity.  This presents a practical barrier, preventing users from changing Blockchain records without reprocessing the “proof of work”.
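A minimal Python sketch of the proof of work idea follows, searching for a nonce whose SHA-256 hash meets an arbitrary difficulty target.  The block data and difficulty value are illustrative assumptions; real schemes such as Bitcoin use considerably more elaborate targets.

  # Illustrative proof-of-work sketch: find a nonce whose hash meets a difficulty
  # target; having to redo this work for every subsequent block is what makes
  # retrospective modification impractical.
  import hashlib

  def proof_of_work(block_data, difficulty=4):
      """Search for a nonce giving a hash with `difficulty` leading zeros."""
      nonce = 0
      while True:
          digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
          if digest.startswith("0" * difficulty):
              return nonce, digest
          nonce += 1

  nonce, digest = proof_of_work("previous_hash|transactions")
  print(f"nonce={nonce} hash={digest}")

  # Verification is cheap: a single hash confirms the work was done
  assert hashlib.sha256(f"previous_hash|transactions{nonce}".encode()).hexdigest() == digest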

It therefore follows that proof of work will be expensive to maintain, with likely future scalability and security issues, depending on the data user (miner) requirements and incentives, which in all likelihood will reduce over time.  As we all know, most data access is high when data has been recently created, rapidly decreasing to low or even null after a limited period of time.

Proof of Stake is a more elegant alternative approach, determining which user can update the consensus, while preventing unwanted forking of the underlying Blockchain.  It is a more cost-efficient approach, while being more difficult and expensive to compromise.
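The essence of proof of stake can be sketched in a few lines of Python: the right to append the next block is assigned in proportion to each participant’s stake, avoiding the hashing cost of proof of work.  The node names and stake figures are invented for the example and real schemes add many further safeguards.

  # Illustrative proof-of-stake sketch: validator selection weighted by stake,
  # rather than by expended computation.
  import random

  stakes = {"node-a": 60, "node-b": 30, "node-c": 10}   # hypothetical stakes

  def select_validator(stakes):
      """Pick the next block validator, weighted by stake."""
      nodes, weights = zip(*stakes.items())
      return random.choices(nodes, weights=weights, k=1)[0]

  print(select_validator(stakes))  # node-a is chosen roughly 60% of the time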

Once again, if we consider the benefits of Blockchain from a business processing viewpoint, there is a clear and present opportunity to eliminate manual or semi-automated processes, both internal and external to the business.  This could expedite the completion of processes that previously required days or even weeks, while reducing the potential for human error.  A simple example might be a car purchase, based upon 3rd party finance.  Such a process typically includes 3rd party data requirements, for vehicle provenance, credit scoring, identity proof, et al.  If the business world looks at the big picture, it can simplify and automate its processes, by collaborating with existing and, more likely, yet to be identified partners.  The benefits are patently obvious…

From a System z viewpoint, recent technological developments leverage existing IBM resources, including the LinuxONE, Bluemix and Watson offerings:

  • LinuxONE: The System z and LinuxONE platforms are arguably best placed to drive Blockchain innovation.  Via the Open Mainframe Project and Hyperledger, IBM supports testing and development of the open Blockchain fabric code for developers on its LinuxONE Community Cloud.
  • Bluemix: With the IBM Blockchain services available on Bluemix, developers can access fully integrated DevOps tools for creating, deploying, running and monitoring Blockchain applications on the IBM Cloud.
  • Watson: Leveraging the Watson IoT Platform, IBM will enable information from devices such as RFID-based locations, barcode-scan events or device-reported data, to be used within the IBM Blockchain.  Devices will be able to communicate with Blockchain based ledgers to update or validate smart contracts.

From a business benefits viewpoint, the IBM System z platform is ideally placed for Blockchain deployment, being a highly secure EAL5+ certified platform.  Hardware accelerators deliver high speed secure encryption and hashing, supplemented by tamper-proof Crypto Express modules for key management.  Numerous memory resident partitions can also be created rapidly to keep ledgers separate and secure.  As per usual, the System z platform has the fastest commercial processor, a highly scalable I/O system to handle massive numbers of transactions, ample memory for Blockchain operations and a secure network optimised for Blockchain peer communications.

Returning full circle to where this article started, the System z Mainframe is arguably the de facto System Of Record platform for the world’s traditional Fortune 500 or Global 2000 businesses.  These well-established businesses have in all likelihood spent several decades or more establishing this centralised application programming and database usage model.  The realm of opportunity exists to make this priceless data asset available to numerous businesses, both large and small, via Blockchain architectures.  If we consider just one simple example, a highly globalised and significant Banking institution could facilitate the creation of a new specialised and optimised “challenger banking” operation, for a particular location or business sector, leveraging its own internal System Of Record data and perhaps vital data from another source.  One could have the hypothetical debate as to whether a well-established bank is best placed for such a new offering, but with intelligent collaboration, delivering a valuable service to a new market where such a service has not previously been possible, doesn’t everybody win?

Perhaps with Blockchain, truly open and collaborative cooperation is possible, both from a business and technology viewpoint.  For example, why wouldn’t one of the new Fortune 500 companies, such as a Social Media company with billions of users, look to a traditional Fortune 500 company deploying an IBM System z Mainframe, to expand its revenue portfolio from being advertising driven to include service provision, whatever that might be?  Rightly or wrongly, if such a Social Media company is a user’s preferred portal for accessing a plethora of other company resources (E.g. Facebook Login), why wouldn’t this user want to fully process some other business transaction (E.g. Financial) via said platform?  However unlikely, maybe Blockchain can truly simplify and expedite Globalisation, for the benefit of users and businesses alike…

System z: Optimizing DASD I/O Subsystem Performance

Historically there was a very simple synergy between the IBM S/370 Mainframe and its supporting disk I/O (DASD) subsystem, allowing for Mainframe host connectivity to physical and logical disk devices (I.E. 3390). The analysis and tuning of this I/O subsystem has always been and continues to be supported by the SMF Type 7n records via IBM RMF and the BMC CMF alternative. However, over the years, major advances in DASD subsystems and the System z Mainframe server have delivered many layers of technology resources (E.g. Cache, Memory, FICON Channels, RAID Storage, Proprietary Microcode, et al) and this has introduced complexities into highlighting DASD I/O subsystem performance problems.

The focus on technology based metrics (E.g. I/O Rate Response Time, I/O MB/S Bandwidth, et al) has also been complemented with more meaningful business focussed Service Level Agreements (SLA). Therefore today’s System z I/O Performance Analyst must gather and act upon proactive and meaningful information from the ever-increasing amounts of performance data available. Put another way, too much data can deliver not enough information! As previously stated, it was forever thus; RMF and CMF have always collected the requisite performance data and arguably no other data source is required (E.g. OMEGAMON/TMON/SYSVIEW Performance Monitor, SAS/MXG/MICS/WPS Performance Database). RMF/CMF is the ideal data source for thorough and timely System z I/O performance management, where intelligent analytics and expert knowledge are required to present this “Golden Record”.

However, today’s System z Support Teams need simple and timely presentation of the data, highlighting potential challenges, graphically presented for their Management, allowing for simple tracking of SLAs and technology changes (I.E. Software/Hardware Upgrades).

Additionally, Workload Manager (WLM) can control non-paging queued DASD I/O requests, based upon device busy conditional processing. Therefore the z/OS system can manage I/O priorities in a Sysplex, based on WLM service class goals. WLM dynamically adjusts the I/O priority based on service class goal performance and whether a DASD device can influence the overall performance objectives. For obvious reasons, this WLM function does not micro-manage I/O priorities, only changing a service class period’s I/O priority infrequently. WLM is deployed by many System z users to assist in the automated management of system resources (E.g. CPU, Memory, I/O, et al), based upon Service Level goals.
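As a toy illustration of this goal-based principle, the Python sketch below nudges an I/O priority according to a service class period’s goal attainment.  This is not the actual WLM algorithm, and the priority values and thresholds are invented; it simply shows the direction of the adjustment logic described above.

  # Toy illustration only, not the actual WLM algorithm: a service class period
  # missing its goal is a candidate for a higher I/O priority, while one
  # comfortably beating its goal can donate priority. Values are illustrative.
  def adjust_io_priority(current_priority, performance_index):
      """Nudge I/O priority, based on service class goal attainment."""
      if performance_index > 1.0:      # goal being missed
          return current_priority + 1
      if performance_index < 0.8:      # goal comfortably achieved
          return current_priority - 1
      return current_priority          # leave well alone

  print(adjust_io_priority(current_priority=5, performance_index=1.3))  # 6: raise priority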

From a DASD subsystem technology viewpoint, there is no longer an obvious one-to-one direct connection between the Mainframe host and the DASD device. An increasing number of technological advances, both microcode and hardware (E.g. Memory, Fibre Channel, Function Assist Processing, et al) have diminished the requirement for data access directly from the physical device. Put another way, in today’s world of System z servers with multiple cache level CPU chips (I.E. Relative Nest Intensity), massive and multiple processor memory resources (I.E. z13 @ 10 TB Memory), high bandwidth Fibre Channel (I.E. FICON, zHPF) subsystems and a hierarchy of DASD memory (I.E. SSD/Flash, Cache), it’s not uncommon to consider an I/O that requires physical device access as a problem! Finally and most importantly, from a DASD subsystem viewpoint, each of the recognized System z DASD providers, EMC (Symmetrix VMAX), HDS (VSP G1000) and IBM (DS8870), has a highly proprietary DASD subsystem that provides z/OS plug compatibility, but delivers overall I/O performance using its own unique architecture and internal algorithms.

Of course, an over configured hardware environment will deliver a poor TCO, while an under configured environment will manifest in SLA issues and bad user experiences, where the middle-ground always delivers the optimal environment. Resource optimization always demands proactive day-to-day management, from an internal and indeed external communication viewpoint. With the highly proprietary design features of the IHV DASD subsystems, whether EMC, HDS or IBM, having the right information and identifying the precise problem, simplifies the communication process with the IHV. Such communication might highlight a resource under provision (E.g. Memory Capacity), a subsystem setting tweak requirement, either host or subsystem based, or indeed a hardware failure. In today’s world, these issues need to be fixed in minutes or hours, not days or weeks.

Therefore, where does today’s System z I/O Performance Analyst start to collect the required information to safeguard that their DASD subsystem is optimized, both from a capacity and performance viewpoint?

A simplistic viewpoint of an I/O health-check should consider the following:

  • Service Level Agreements (SLA): Are overall objectives being delivered or missed?
  • User Experience: Are users (customers) complaining of poor service or response times?
  • I/O Metric Performance: Are there obvious signs of abnormal performance statistics?

Several decades ago, an overall I/O health check might have been a periodic (E.g. Weekly or longer) activity, whereas today it’s undoubtedly a Business As Usual (BAU) and 24*7 activity. Therefore a fully automated solution is required, built upon the tried and tested System z performance fundamentals, namely RMF or CMF. The ideal solution will perform analytics based data reduction, presenting the right information, at the right time, allowing for intelligent business based communication, both internally, to customers and end users from an SLA viewpoint, and externally, with IHV DASD suppliers, safeguarding optimal performance and TCO.
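For illustration, a minimal Python sketch of this kind of analytics-based data reduction follows. It assumes the RMF/CMF device activity data has already been extracted to a CSV file; the file name, column names and SLA threshold are hypothetical assumptions, not actual RMF field names, and a real solution would clearly do far more.

  # Minimal data reduction sketch: average I/O response time per volume,
  # reporting only the volumes that breach a hypothetical SLA threshold.
  import csv
  from collections import defaultdict

  SLA_RESPONSE_MS = 5.0   # hypothetical SLA target for average I/O response time

  def flag_sla_exceptions(csv_path):
      """Average response time per volume; return only volumes breaching the SLA."""
      totals, counts = defaultdict(float), defaultdict(int)
      with open(csv_path, newline="") as handle:
          for row in csv.DictReader(handle):
              totals[row["volume"]] += float(row["response_ms"])
              counts[row["volume"]] += 1
      return {volume: totals[volume] / counts[volume]
              for volume in totals
              if totals[volume] / counts[volume] > SLA_RESPONSE_MS}

  # Example usage, assuming the extracted file exists:
  # print(flag_sla_exceptions("rmf_device_activity.csv"))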

EADM (Easy Analyze DASD Mainframe) is a solution from Technical Storage that performs automated performance analysis of the z/OS I/O subsystem, delivering predictive analytics for better storage capacity planning and performance measurement. The Technical Storage EADM architects have in excess of 40 years IBM Mainframe experience, specializing in the I/O subsystem, and so it’s no surprise that EADM delivers expert and timely knowledge via an easy-to-use solution.

EADM is an easy-to-install and easy-to-use plug-and-play solution that has no proprietary considerations, requiring no additional System z resources (E.g. CPU, Memory, DASD, et al). Installed on Microsoft server platforms, EADM is easily virtualized via VMware, Hyper-V, et al, requiring no target database for performance data storage. EADM performs a daily health check of the entire System z disk subsystem. EADM works around the clock, delivering customized and automatic user friendly GUI type reports. For today’s System z technician, the open and IP architecture base of EADM allows for secure remote access via Mobile, Tablet or Laptop devices, as and when required.

Operations and performance teams are alerted as soon as performance variances occur, typically in minutes, assisting in the identification of the underlying root problems causing changes in system behaviour. Incorporating intelligent and meaningful I/O performance indicators, with drill-down and zoom-in ability, storage technicians can determine if the problem is temporary, permanent, local or global. By simplifying the data reduction process (E.g. RMF/CMF data from numerous LPAR/Sysplex environments), EADM safeguards that the internal technical team can efficiently manage their increasingly complex and large DASD environment, for intelligent and timely communications with internal business teams and external suppliers alike.

EADM simplifies the System z I/O subsystem capacity and performance management process, delivering expert reports and timely historical analysis, for example:

  • Automatic daily (24 Hour) analysis of Sysplex wide workload (On-Line TP & Batch) I/O response times
  • Systematic intelligent alerts of early performance variances with exact occurrence time indicators
  • Identification of I/O performance hot-spots with DASD volume and data set level granularity
  • Performance trending at DFSMS Storage Group, Subsystem LCU and DASD volume level
  • DR (E.g. PPRC) simulations to prevent data loss and forecast Data Centre failover scenarios
  • I/O subsystem WLM indicators to determine exactly what impacts performance objectives
  • Full FICON channels and zHPF analysis, incorporating typical I/O throughput indicators
  • HyperPAV and associated LCU indicators to easily balance volumes, optimizing PAV alias allocation
  • Performance monitoring and balancing via intelligent LCU, SSID and I/O analytics
  • DASD capacity usage via DCOLLECT data, comparing assigned vs. allocated vs. actual disk utilization
  • EADM supports System z configurations from entry-level (several LPARs) through to complex multiple CPC/LPAR environments

A well provisioned and performing System z I/O subsystem is of vital importance for safeguarding today’s ever increasing storage requirements of mission critical business applications. A poorly performing I/O subsystem will generate unnecessary and extra CPU overhead, with potential and tangible TCO impact, in conjunction with potential business impact. Although the advances of the System z server and underlying DASD I/O subsystem can compensate for many application code or data placement issues, the fundamental concepts of analysing and tuning the I/O subsystem remain.

Therefore the savvy and proactive System z customer will safeguard that they find a solution to deliver optimal DASD I/O performance. Without doubt, such an analysis could be performed by a highly-skilled individual, but today’s 21st Century world demands a hybrid of technical and commercial skills. Therefore a solution that incorporates the diagnostic knowledge of the most highly trained technician, performs intelligent analytics on a plethora of Sysplex wide performance data sources and presents the information required, is one that will deliver benefit each and every day. EADM is an example of such a solution, delivering demonstrable System z TCO optimization benefits, while safeguarding a short-term ROI, with simple deployment and resource utilization attributes.