System z DevOps & Application Lifecycle Management (ALM) Integration: Evolution or Revolution?

From an IT viewpoint, the 2010’s will seemingly be dominated by the digital data explosion, primarily fuelled by Cloud, Mobile and Social Media data sources, while intelligent and timely, if not real-time, Analytics are required to process this vast and ever-growing data source.  Who could have imagined just a decade ago that the mobile phone, specifically the Smartphone, would become the de facto computing device, although some might say, only for a certain age demographic?  I’m not so sure; I encounter real-life and day-to-day evidence that a Smartphone or tablet can also empower the older generation, simplifying their computer usage and access.  From a business perspective, Smartphones have allowed geographically dispersed citizens to gain access to Banking facilities for the first time, while Cloud provides countless opportunities for data sharing and number crunching in collaborative scientific, health, education and any other activities a human being might conceive.  The realm of opportunity exists…

When thinking of the bigger picture, we somehow have to find a workable and seamless balance that will integrate the dawn of business computing from the 1960’s with these rapidly moving 21st Century requirements.  When considering which came first, the data or the application, I always think the answer is really simple; the data came first, but I have been wrong before!  What is without doubt is that the initial requirement for a business application was to automate data processing; the associated medium-term waterfall (E.g. n-nn Months) application development process is now outdated.  As of 2017, today’s application needs to leverage this vast and rich digital data source to identify new business opportunities, which are increasingly unplanned, and therefore rapid application delivery is required.  For example, I previously wrote about this subject matter in the zAPI: System z Deployment Into The API Economy blog entry.

From an IT perspective, one of the greatest achievements of the 21st Century is collaboration, whether technology based, leveraging a truly interconnected (E.g. Internet Protocol/IP) heterogeneous computing environment, or personnel based, with IT teams collaborating in a more open and timely manner, primarily via DevOps.  This might be a better chicken and egg analogy; which came first, the data explosion or the IT ecosystem that allowed such a digital data explosion?

There are a plethora of modern-day application development tools that separate the underlying target deployment server from the actual application developer.  Put another way, today’s application developer ideally works from a GUI display via an Eclipse-based Integrated Development Environment (IDE) interface, creating code using rapid and agile development techniques.  From an IBM System z perspective, these platforms include Compuware Topaz Workbench, IBM Developer for z Systems (IDz AKA RDz) and Micro Focus Enterprise Developer, naming but a few.  Therefore, when considering the DevOps framework, these excellent Eclipse-based IDE products provide solutions for the Dev part of the equation; but what about the Ops part?

In a collaborative world, where we all work together, from an Application Lifecycle Management (ALM) perspective, IT Operations are a key part of application delivery and management.  Put simply, once code has been created, it needs to be packaged (E.g. Compile, Link-Edit, et al), tested (E.g. Unit, Integration, System, Acceptance, Regression, et al) and implemented in a Production environment.  We must now consider the very important discipline of Source Code Management (SCM), where from a System z Mainframe perspective, common solutions are CA Endevor SCM, Compuware ISPW, IBM SCLM, Micro Focus ChangeMan ZMF, et al.  Once again, from a DevOps perspective, we somehow have to find a workable and seamless balance that will integrate the dawn of business computing from the 1960’s with these rapidly moving 21st Century requirements.  As previously discussed, the Dev part of the DevOps framework is well-covered and straightforward, but perhaps the Ops part requires some more considered thought…

Recently, Compuware acquired ISPW (January 2016) to supplement their Topaz Workbench and Micro Focus acquired ChangeMan ZMF (May 2016) to complement their Micro Focus Enterprise Developer solution.  IBM IDz offers out-of-the-box integration for the IBM Rational Team Concert, CA Endevor SCM and IBM SCLM tools.  Clearly there is a significant difference between Source Code Management (SCM) for Distributed Systems when compared with the System z Mainframe, but today’s 21st Century business application will inevitably involve interconnected platforms and so a consistent and seamless SCM process is required for accurate and timely application delivery.  In all likelihood a System z Mainframe user has been using their SCM solution for several decades, evolving processes around this solution, which was never designed for Distributed Systems SCM.  Hence the major System z Application Development ISVs have acquired SCM products to supplement their core capability, but is it really that simple?  The simple answer is no!

Traditionally, for Application Development activities we deployed the Software Development Life Cycle (SDLC), limited to software development phases, including requirements, design, coding, testing, configuration, project management and change management.  Modern software development processes require real-time collaboration, access to a centralized data repository, cross-tool and cross-project visibility, and proactive project monitoring and reporting, to rapidly develop and deliver quality software.  This requirement is typically classified as Application Lifecycle Management (ALM).

The first iteration of ALM, namely ALM 1.0, was wholly unsuccessful.  Application Development teams were encouraged to consider the value of point solutions for task management, planning, testing, requirements, release management and other functions.  Therefore ALM 1.0 became just a set of tools, where the all too common question for the Application Development team was “what other tool can we use”!

ALM 2.0 or ALM 2.0+ can be considered as Integrated Application Lifecycle Management or Integrated ALM, where all the tools and their users are synchronized with each other throughout the application development stages.  This integration ensures that every team member knows the Who, What, When and Why of any changes made during the development process, eradicating arduous, repetitive, manual and error prone activities.  The most important lesson for the DevOps team in a customer environment is to concentrate on the human perspective.  They should ask “how do we want our teams to work together and collaborate”, as opposed to asking an Application Development ISV team, “what ALM tools do you have”.  Inevitably the focus will be ISV based, as opposed to customer based.  As the recent Compuware and Micro Focus SCM acquisition history demonstrates, these tools, by definition, were never fully integrated from their original inception…

If the customer DevOps teams collaborate and formulate how they want to work together, an ALM evolution can take place in a timely manner, maintaining investment in previous technologies, as and if required.  Conversely, a revolutionary approach is the most likely outcome for the System z Mainframe user looking to the ISV for a “turn-key” ALM solution.  By definition, an end-to-end and turn-key ALM solution from one ISV is not possible and in fact, not desirable!  Put another way, as a System z user, do you really want to write off several decades’ investment in an SCM solution for another competitive solution, which will still require many other tools to provide the Integrated ALM capability you require?  As always, balance and compromise are the way forward…

If the ubiquitous System z Application Development ISV were to develop their first software product today, it would inevitably be a DevOps and ALM 2.0+ compatible product, allowing for full integration with all other Application Development tools, whether System z Mainframe or Distributed Systems orientated.  Of course, that is not the reality.  It seems somewhat disingenuous that the System z Application Development ISV would ask a potential customer to write off their several decades’ investment in an SCM technology, when said ISV has just acquired such a technology!  Once again, this is why the customer based Application Development teams must decide how they want to collaborate and what ALM and DevOps tools they want to use.

Seemingly a pragmatic solution is required, hence the ALM 2.0+ initiative.  If an ISV could develop an all-encompassing DevOps and ALM 2.0+ end-to-end Application Development solution for all IT platforms, they would probably become one of the most popular and biggest ISVs in a short time period.  However, this still overlooks the existing tools that customer IT organizations have used for many years.  Hence, the pragmatic way forward is to build an open DevOps and ALM 2.0+ solution that will integrate with all other Application Development lifecycle tools, whether marketplace available or not!  HPE Application Lifecycle Management (ALM) and Quality Center (QC) is one such approach for Distributed Systems, but what about the System z Mainframe?

IKAN ALM is an ALM 2.0+ and DevOps architected solution that is vendor and platform agnostic.  Put another way, IKAN ALM can operate in single or multiple-vendor mode.  In practice, single-vendor mode is unlikely, as there are many efficient Application Development tools in the marketplace.  However, the single most compelling feature of IKAN ALM is its open framework and interoperability with other ALM technologies.  As previously stated, we might consider source code development as the Dev side of the DevOps framework.  IKAN ALM will interface with these technologies, while its core functionality concentrates on the Ops side of the DevOps framework.  Therefore, from an Application Lifecycle Management (ALM) viewpoint, the IKAN ALM solution starts where versioning systems end, with an objective of optimizing the entire software engineering process.

IKAN ALM offers a uniquely integrated web-based Application Lifecycle Management platform for both Agile and traditional software development teams.  It combines Continuous Integration and Lifecycle Management, offering a single point of control, delivering support for build and deploy processes, approval processes, release management and software lifecycles.  From a pragmatic and common-sense viewpoint, organizations typically want to continue working with their preferred tools in their preferred environment.  Being ALM 2.0+ compliant, IKAN ALM fully integrates with any versioning tool and any issue tracking tool, providing ALM reports across repositories.  Therefore IKAN ALM offers an evolutionary approach, allowing an organization to realize timely ALM benefits, without risk and without the need to displace any existing technologies.  Over time, should the organization wish to displace older legacy ALM software products, they could do so, leveraging the stand-alone or multiple-vendor flexibility of the IKAN ALM solution.

IKAN ALM incorporates ready-to-use solutions and processes for multiple environments.  These solutions include ALM 2.0+ compliant processes and the necessary scripts to automate the integration with a specific environment, including but not limited to CA Endevor (SCM), CollabNet, HPE ALM/Quality Center (QC), Oracle Warehouse Builder (OWB), SAP, et al.

The IKAN ALM central server is an open framework web application responsible for User Authentication and Authorization, User Interface Processing, Distributed Version Repository Management and Scheduling Code Builds.  The IKAN ALM agents perform the application build and install functions.

The data repository is an open central database where all administrative data and the audit trail history are stored.  IKAN ALM communicates with the repository using standard JDBC interfaces.  The required JDBC drivers are installed along with the product.  The repository can reside in any RDBMS system, including IBM DB2/UDB, Informix, Microsoft SQL Server, MySQL, Oracle, et al.
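
As a minimal sketch of that repository access model, the Java snippet below opens a plain JDBC connection and runs a trivial query; the URL, port, credentials and the DB2 for z/OS style back-end are assumptions for illustration only, as the real connection details are defined when the site-specific IKAN ALM repository is created.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RepositoryCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; the appropriate JDBC driver
            // (here IBM's DB2 Type 4 driver) must be on the classpath.
            String url = "jdbc:db2://zserver.example.com:5021/ALMDB";
            try (Connection con = DriverManager.getConnection(url, "almuser", "secret");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")) {
                if (rs.next()) {
                    System.out.println("Repository reachable, server time: " + rs.getString(1));
                }
            }
        }
    }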

Source code is always stored in a Version Control Repository.  IKAN ALM integrates with all the typical versioning systems including Apache Subversion, CVS, Git, Microsoft Visual SourceSafe (VSS), IBM Rational ClearCase (UCM & LT), Serena PVCS Version Manager, et al.  The choice of IDE often drives the choice of the Version Control System (VCS), where organizations can have more than one operational VCS.  IKAN ALM is a solution that provides a unique process control over all versioning systems present in the organization.  IKAN ALM stores each build result within its central server filesystem, labelling the source accordingly in the associated versioning system, guaranteeing a correct source-build relationship.
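
To illustrate that source-build relationship, the hypothetical Java helper below tags the built source level in a Git repository once a build completes; the repository path and build number are invented for the example, and a real installation would drive the equivalent labelling through its versioning system integration rather than a hand-rolled script.

    import java.io.File;

    public class LabelBuild {
        // Tag the source level that produced a given build, so the build result
        // stored on the central server can always be traced back to its source.
        static void tagSource(File repoDir, String buildNumber) throws Exception {
            int rc = new ProcessBuilder("git", "tag", "-a", "build-" + buildNumber,
                                        "-m", "ALM build " + buildNumber)
                    .directory(repoDir)
                    .inheritIO()
                    .start()
                    .waitFor();
            if (rc != 0) {
                throw new IllegalStateException("git tag failed with exit code " + rc);
            }
        }

        public static void main(String[] args) throws Exception {
            tagSource(new File("/var/builds/payments-app"), "2017.04.0123");
        }
    }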

IKAN ALM safeguards Authentication & Authorization by interacting with the organization’s security deployment (E.g. Active Directory, LDAP, Kerberos, et al) via the Java Authentication and Authorization Service (JAAS) interface.
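
For reference, a minimal JAAS login sketch is shown below; the configuration entry name “IkanAlm” and the environment variables are assumptions for illustration, with the actual LoginModule (Active Directory, LDAP, Kerberos, et al) being defined in the site’s JAAS login configuration file.

    import javax.security.auth.Subject;
    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.PasswordCallback;
    import javax.security.auth.login.LoginContext;
    import javax.security.auth.login.LoginException;

    public class AlmLogin {
        public static void main(String[] args) throws LoginException {
            // "IkanAlm" is a hypothetical entry name in the JAAS login configuration;
            // the configured LoginModule(s) would point at the site security service.
            LoginContext lc = new LoginContext("IkanAlm", callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        ((NameCallback) cb).setName(System.getenv("ALM_USER"));
                    } else if (cb instanceof PasswordCallback) {
                        ((PasswordCallback) cb).setPassword(System.getenv("ALM_PASSWORD").toCharArray());
                    }
                }
            });
            lc.login();                        // authenticate via the configured LoginModule(s)
            Subject subject = lc.getSubject(); // authenticated principals and credentials
            System.out.println("Authenticated principals: " + subject.getPrincipals());
            lc.logout();
        }
    }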

IKAN ALM audits any changes (E.g. Who, What, Why, When, Approver, et al), orchestrating the various components and phases of Application Lifecycle Management, delivering an automated workflow to drive a continuous flow of activity throughout the development lifecycle, efficiently coordinating and optimizing application development changes.
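
A simple data structure capturing that audit information might look like the following Java record; the field names merely mirror the Who, What, Why, When and Approver wording above and are illustrative only, not the product’s actual schema.

    import java.time.Instant;

    // Illustrative audit trail entry, written once per lifecycle action.
    public record AuditEntry(
            String who,        // user that made the change
            String what,       // artefact, build or lifecycle phase affected
            String why,        // linked issue or change request reference
            Instant when,      // timestamp of the action
            String approver) { // user that approved promotion to the next level
    }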

In an environment with ever increasing mandatory regulatory compliance requirements, IKAN ALM simplifies the processes for delivering such compliance.  As per the IKAN ALM Build, Deploy, Lifecycle and Approval Management framework, compliance with industry standards and regulations (E.g. CMM, ITIL, Sarbanes-Oxley, Six Sigma, et al) is delivered via a reliable, repeatable and auditable process throughout the development life cycle.

Clearly any IT organization can benefit from a fully integrated ALM 2.0+ solution, by enforcing and safeguarding that the ALM process is repeatable, reliable and documented.  Regardless of the development team headcount, ALM releases key people from repetitive and less interesting tasks, allowing them to focus on delivering today’s Analytics based, Cloud, Mobile and Social applications.  A fully integrated ALM 2.0+ solution such as IKAN ALM allows for simplified legacy environment modernization, while simultaneously allowing for experimentation and improvement of all environments alike, both legacy and new.

In conclusion, savvy organizations will safeguard that their Application Development and Operations teams collaborate as per the DevOps framework and decide how they want to implement processes for their environment and, more importantly, their business.  This focus should avoid any notion of asking the ubiquitous Application Development ISV, “which tools should we use”!  Similarly, recognizing the integration foundation of ALM 2.0+ will eliminate any notion of displacing existing technologies and processes, unless absolutely required.  The need for agile, rapid and quality source code development and delivery is obvious, as is the related solution, which is inevitably pragmatic, evolutionary and multiple vendor tool based.

System z Meets Open Source Linux

Recently IBM launched their LinuxONE offering, packaged in the most powerful and secure enterprise server, namely System z, designed for the new application economy and hybrid cloud era. Although IBM has provided Linux support for the Mainframe server since 2000, this LinuxONE packaging promises a unified portfolio of hardware, software and services solutions for mission-critical Linux applications.

To supplement the existing SUSE and Red Hat support, Ubuntu is included, along with Open Source enablement, including Apache Spark, Chef, Docker, MariaDB, MongoDB, Node.js and PostgreSQL, endeavouring to provide clients with choice and flexibility for hybrid cloud deployments.

From a big picture viewpoint, LinuxONE can be summarised as:

  • Linux Your Way: Choose the Linux environment and tools for your organization
  • Linux Without Limits: Benefit from Enterprise Class Linux support
  • Linux Without Risk: Safeguard business applications with the secure and resilient System z Server

The LinuxONE Systems are classified as Emperor and Rockhopper, loosely representing High-End and Entry-Level System z servers. LinuxONE Emperor delivers ultimate flexibility, scalability, performance and trusted security for mission-critical applications. Scalability is as per the latest z13 server, allowing growth to handle the most demanding workloads. LinuxONE Rockhopper delivers the entry point into the LinuxONE family, offering all the same great capabilities and value, with the flexibility of a smaller package.

LinuxONE includes a choice of hypervisors and management tools, namely KVM for LinuxONE and/or IBM z/VM. This virtualization capability claims support for up to 8000 virtual servers (several thousand containers) in a single System z server footprint, allowing for parallel processing of Test, Development and Production environments. Additionally, new servers and containers can be initialized and running in minutes, with automated resource provisioning and reallocation in seconds.

From a performance viewpoint, System z metrics apply; fast CPU processors, significant I/O capability and 10 TB of Memory, all delivering consistent and predictable sub-second response times for thousands of users. A reported capability is 30 Billion RESTful web transactions per day, with ~500,000 database read/write operations per second.
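
As a rough sanity check on that headline figure, 30 Billion transactions averaged over a 24-hour day equates to 30,000,000,000 ÷ 86,400 ≈ 347,000 transactions per second, a sustained rate rather than a peak.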

The LinuxONE offering is also a key component of the IBM Cloud, Analytics, Mobile & Security (CAMS) framework:

  • Cloud: An agile and trusted cloud infrastructure to meet new business demands with greater efficiency and lower costs for IT service delivery. Example cloud usage includes Database, Enterprise Systems of Record and Hybrid cloud platforms.
  • Analytics: Flexible, resilient, high performance analytics for Business Intelligence, Big Data Insights and Operational Analytics, delivering intelligent and continuous business availability.
  • Mobile: Build a premier mobile solution for your business to deliver the best possible experience for your clients, employees and partners alike. Facilitate agile development and deployment of mobile applications, with secure end-to-end mobile transactions, personalized via integrated data analytics.
  • Security: System z has been associated with the highest EAL5+ Common Criteria certification for many years, safeguarding mission-critical data from cradle-to-grave. Security functions such as full data encryption, cryptographic processors and end-to-end security, combined with the unmatched reliability and availability of the System z server, safeguard that mission-critical data and services are fully protected and available.

Finally, and a key point, LinuxONE promises TCO optimization with pricing your way. A straightforward menu of pricing options includes:

  • A fixed monthly cost usage model for hardware and software resources
  • A per core software pricing model, with 30 days’ notice for cancellation or resource change
  • A 36 month rental option, with buy/replace/return options at contract end

In theory, LinuxONE could be perceived as just a tweak of existing System z Linux options, including the most recent z13 server, Ubuntu and Open Source support. What has changed are the user requirements, namely the need for flexible and agile computing, where Cloud, Analytics, Mobile and Security dominate many CIO agendas.

It is my hope that each and every CIO, System z literate or not, at least considers the LinuxONE platform for their mission-critical enterprise workload, as from a simplistic viewpoint, LinuxONE is just another ubiquitous black server box; or is it…

z13: A Digital Business Ready Solution?

As per the usual next generation zSeries Server release, IBM announced their latest evolution on 13 January 2015, namely the z13. IBM describe this platform as the most powerful and secure system ever built:

  • First system able to process 2.5 billion transactions per day, built for the mobile economy
  • Makes possible real-time encryption on all mobile transactions at scale
  • First mainframe system with embedded analytics providing real time transaction insights 17X faster than comparable competitive systems, at a fraction of the cost

At first glance, feeds and speeds generally don’t enthuse the audience, but if we dig deeper and acknowledge other recent IBM developments incorporating Apple, Twitter and Data Analytics announcements, we perhaps can draw some better business-facing conclusions. IBM have a clearly defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative, seemingly based upon the IDC 3rd platform defined as Social, Mobile, Analytics & Cloud (SMAC).

Industry analysts predict that in the next 3 years, by 2017, SMAC (CAMSS) expenditure will account for 25%+ of total enterprise software market revenue, doubling from ~12% in 2012. In simple terms, this new expenditure opportunity represents $100+ Billion in revenue. We can imagine that all major ISVs will be wanting their share of this market…
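
Working backwards from those figures, a 25% share worth $100+ Billion implies a total enterprise software market of roughly $100 Billion ÷ 0.25 ≈ $400 Billion.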

Whichever classification you choose, IBM CAMSS or IDC SMAC, IT infrastructures and associated investment currently are, and certainly will be, heavily influenced by this new world computing paradigm. Like it or not, an ability to perform a transaction anywhere (Mobile), keeping everything simple and networked (Social Media), real time prediction of future customer requirements (Analytics), all available anywhere and for an alleged fraction of the cost (Cloud), makes sense for the 21st Century business. Ignore this new technology evolution at your peril, as it will impact each and every area of the IT enterprise and associated resources, primarily software and supporting hardware.

Did you notice the difference between the IBM classification and IDC? IDC have not considered Security to be a factor worthy of acronym (SMAC) inclusion. In today’s world of cybersecurity, that might be somewhat of an oversight, but we must assume that IDC consider cybersecurity to be implicit in all of the Analytics, Cloud, Mobile & Social aspects, which of course it is!

If we consider the relative merits of technology platforms from a security viewpoint, the IBM z13 delivers EAL5+ security certification, whereas other non-Mainframe platforms can only currently claim EAL4+ certification.

It is estimated that 55%+ of enterprise (mission critical) transactions are processed by the IBM Mainframe, but this is based on pre-mobile workloads. It therefore makes commercial sense for IBM to safeguard that their flagship platform not only maintains the existing IBM Mainframe customer base, but also captures new and mobile centric workloads.

Having considered the business requirements for today’s IT business, let’s now classify the new features of the z13 platform:

  • Up to 40% more total system capacity compared to the zEC12.
  • Up to 10 terabytes (TB) of available Redundant Array of Independent Memory (RAIM) real memory per server.
  • Cryptographic performance improvements with new Crypto Express5S.
  • Economies of scale with simultaneous multithreading delivering more throughput for Linux and zIIP-eligible workloads.
  • Improved performance of complex mathematical models, perfect for analytics processing, with Single Instruction Multiple Data (SIMD).
  • IBM zAware cutting-edge pattern recognition analytics for fast insight into system health extended to Linux on z Systems.
  • A reduction in elapsed time for I/O-bound batch jobs with new FICON Express16S versus FICON Express8S.
  • Planned support for larger memory configurations on z/OS systems, which can be used to improve transaction response times, lower CPU costs, simplify capacity planning and ease the deployment of memory-intensive workloads. (The IBM z13 offers up to 10 TB memory.)
  • I/O service time improvement when writing data remotely using the new zHPF Extended Distance II.
  • Support for up to 256 coupling CHPIDs, which provides enhanced connectivity and scalability for a growing number of coupling channel types.
  • IBM Integrated Coupling Adapter (ICA SR), which offers greater short reach coupling connectivity than existing link technologies and enables greater overall coupling connectivity per IBM z13 than prior server generations.
  • Capability to extend z/OS workload management policies into the SAN fabric.
  • New rack-mounted Hardware Management Console (HMC), helping to save space in the data center.
  • Non-raised floor option, offering flexible possibilities for the data center.
  • Optional water cooling, providing the ability to cool systems with user-chilled water.
  • Optional high-voltage dc power, which can help IBM z Systems clients save on their power bills.
  • Optional top exit power and I/O cabling designed to provide increased flexibility.
  • New IBM z BladeCenter Extension (zBX) Model 004 in support of heterogeneous resources managed by IBM z Unified Resource Manager.

As we all know, Moore’s Law had to end sometime soon and this is true for System z CPU chips. The zEC12 CPU was often claimed to be the fastest commercial processor, with a 32 nm core and a 5.5 GHz rating. The z13 chip runs a 22 nm core at 5 GHz, at first glance ~10% slower than the zEC12. Nevertheless, the new z13 chip delivers a ~10% performance increase, due to advances in core design, with better branch prediction and pipelining in the core. Noteworthy is the slightly slower clock speed of the z13 chip, reducing heat output, probably signifying that ~5 GHz is the ceiling for CPU chips in the near future.
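
For reference, the raw clock speed reduction is (5.5 GHz − 5.0 GHz) ÷ 5.5 GHz ≈ 9%, so the ~10% performance increase quoted for the z13 is delivered entirely by the core design improvements, which more than offset the slower clock.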

However, for the z13, the doubling of performance still applies for many other resources:

  • Cryptographic coprocessor performance (~2x)
  • Channel speed (~2x)
  • I/O bandwidth (~2x)
  • Memory/Cache performance (~2x)
  • Memory capacity (~3x)

Once again, classifying these technological advances in terms of mobile business, the z13 delivers real-time encryption of mobile transactions, protecting transaction data, delivering consistent response times for a quality customer experience. Overall, IBM claims the z13 delivers a potential for ~36% better response time, ~61% better throughput and ~17% lower cost per mobile transaction.

A major and subtle change introduced with the z13 is Simultaneous MultiThreading (SMT). SMT allows 2 active instruction streams per core, each dynamically sharing the core’s execution resources. SMT will be available in IBM z13 for workloads running on the Integrated Facility for Linux (IFL) and the IBM z Integrated Information Processor (zIIP).

Each software Operating System/Hypervisor has the ability to intelligently drive SMT in a way that is best for its unique requirements. z/OS SMT management consistently drives the cores to high thread density, in an effort to reduce SMT variability and deliver repeatable performance across varying CPU utilization, thus providing more predictable SMT capacity. z/VM SMT management optimizes throughput by spreading a workload over the available cores until it demands the additional SMT capacity.

From a capacity planning and performance measurement viewpoint, just a slight note of caution. Although the z13 CPU chip delivers increased CPU capacity, the raw speed is slower and there are considerations for SMT. A former IBM staffer, Bob Rogers, has written a great article on this SMT subject matter, which should be on your reading list!

In conclusion, the z13 announcement is another step forward for zSeries Mainframes. If you consider this announcement as just another next generation zSeries Mainframe announcement, you’re not treating your business or yourself with the respect they deserve. Instead, please consider this z13 announcement as an evolution from an enterprise solution delivery viewpoint. Primarily, consider the 21st century business keywords, in no particular order, of Analytics, Cloud, Mobile, Social & Security.