CICS: The Best Enterprise Transaction Server & So Much More…

In one form or another, CICS has been available since the late 1960s, almost as long as the IBM Mainframe server itself, which was released in 1964 and recently celebrated its 50th anniversary.  From a deployment viewpoint, 90%+ of Fortune 500 companies deploy CICS, primarily for its robust and often unbeatable ability to deliver sub-second response times for numerous application transactions.  Whether the IBM Mainframe user is large or small, CICS delivers an enterprise class solution for a myriad of business types and, arguably, at one time or another nearly every committed IBM Mainframe installation has implemented CICS.

In the past few decades I have encountered many failed IBM Mainframe migration projects and, more often than not, the primary reason for platform migration failure was the inability of the target platform to deliver consistent sub-second transaction response times for a plethora of mission critical business applications.  Similarly, even where a non-Mainframe environment has been heavily configured to handle the CICS transaction workload, it often struggles with the subsequent batch processing, suffering from significantly elongated elapsed times.

CICS has its foundations as an enterprise class transaction server, capturing data input for subsequent storage and retrieval in Database Management Subsystems, but let’s not forget, CICS can do so much more…

Back in 2001, CICS Transaction Server (TS) 2.1 for z/OS introduced the foundation for web services support, and a capability for CICS transactions to be invoked via HTTP requests.  There have been numerous enhancements since, too many to mention, which have evolved CICS into a fully-rounded family of solutions, allowing for cradle to grave application design and delivery.  However, let’s just take some time to review what CICS TS Version 5 has delivered, and how this might benefit the 21st Century business.
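
As a simple, hedged illustration of that HTTP capability, any HTTP-capable client can drive a web-enabled CICS transaction.  The host, port and URI path below are hypothetical; in practice they would be exposed via CICS TCPIPSERVICE and URIMAP definitions:

  // Minimal sketch: invoking a web-enabled CICS transaction over HTTP from Java.
  // The host, port, URI path and query parameter are illustrative assumptions.
  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.HttpURLConnection;
  import java.net.URL;

  public class CicsHttpClient {
      public static void main(String[] args) throws Exception {
          URL url = new URL("http://mainframe.example.com:3080/orders/inquiry?orderId=12345");
          HttpURLConnection conn = (HttpURLConnection) url.openConnection();
          conn.setRequestMethod("GET");
          try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
              String line;
              while ((line = in.readLine()) != null) {
                  System.out.println(line);   // response content built by the CICS application program
              }
          }
      }
  }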

Recognizing the IBM defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative, CICS is integral to such a business facing strategy, primarily from a Cloud interoperability viewpoint.

The CICS V5.2 Application Server has a capability to host multiple applications, and multiple versions of the same application, simultaneously, primarily due to the substantial increase in platform scalability.  Similarly, a heterogeneous code environment offers application development personnel a single environment in which Java works seamlessly alongside legacy languages such as COBOL, C/C++, PL/I, et al.
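
By way of a hedged sketch of that heterogeneous environment, a Java class running in a CICS JVM server can link to an existing COBOL program in the same region using the JCICS channel and container API; the program, channel and container names below (ORDUPDT, ORDCHAN, ORDERIN, ORDEROUT) are hypothetical:

  // Hedged sketch: Java in a CICS JVM server linking to an existing COBOL program.
  // Program, channel and container names are illustrative assumptions.
  import com.ibm.cics.server.Channel;
  import com.ibm.cics.server.Container;
  import com.ibm.cics.server.Program;
  import com.ibm.cics.server.Task;

  public class OrderBridge {
      public static byte[] callCobolOrderUpdate(byte[] orderRecord) throws Exception {
          Task task = Task.getTask();
          Channel channel = task.createChannel("ORDCHAN");      // channel passed to the COBOL program
          Container input = channel.createContainer("ORDERIN");
          input.put(orderRecord);                               // request data for the COBOL business logic

          Program cobolProgram = new Program();
          cobolProgram.setName("ORDUPDT");                      // existing COBOL program in the same region
          cobolProgram.link(channel);                           // equivalent of EXEC CICS LINK

          Container output = channel.getContainer("ORDEROUT");  // response container created by COBOL
          return output.get();
      }
  }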

Combining this heterogeneous code environment with Cloud enablement allows for new application version deployment, without a requirement to disable or remove the previous version from Production processing.  Regardless of the underlying application source code base, end users can access an application without service interruption as they transition to the new application version.  Similarly, user workloads can be seamlessly redirected to a previous working version of application code, should the new version exhibit errors.

The CICS V5.2 application server delivers a powerful hosting environment for all business applications, new or old.  Application Development teams can provision applications for design and testing within a “real life working” infrastructure, removing the application when testing is complete, or promoting said application ASAP, for mission critical business usage.  This delivers a significant business improvement in application availability, minimizing service downtime.  Therefore, applications stay as current and relevant as possible, reducing the risk of business service outages, delivering better and consistent end user experiences.

As per the IBM zSeries Mainframe heritage, a standard resilience feature of the CICS V5.2 Application Server is an inherent capability to keep performing, even in the event of a problem scenario.  Cloud enablement delivers a clustering capability, which handles both system and application level failure scenarios.  Seamless and timely problem resolution dictates less downtime, delivering more business availability, instilling a high sense of confidence in end users and consumers alike.

Noting the Security aspect of the IBM CAMSS initiative and the ever present cybersecurity risk to us all, the CICS Application Server also delivers on this front.  Ensuring that application enhancements can be brought online ASAP and securely, CICS V5.2 Application Server seamlessly integrates with various security software and frameworks, including the latest WebSphere Application Server (WAS) Liberty Profile, allowing for the portability of Java Enterprise Edition Web applications.  Enhanced Java capabilities also include Java Naming and Directory Interface (JNDI), Bean Validation, JDBC Type 2 Data Sources and the Java Transaction API (JTA).  SSL support incorporated within the Liberty JVM server HTTP listener is extended to support key certificates stored in System Authorization Facility (SAF) key rings, delivering SSL server authentication.
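
To illustrate those Java capabilities, a standard Java EE servlet deployed to the CICS Liberty JVM server can obtain a database connection through a JNDI-defined data source.  This is a hedged sketch only; the JNDI name jdbc/cicsDataSource and the ACCOUNTS table are assumptions that would be defined in the Liberty server.xml and the database respectively:

  // Hedged sketch: a Java EE servlet in the CICS Liberty JVM server using a
  // JNDI-defined data source.  JNDI name and table are illustrative assumptions.
  import java.io.IOException;
  import java.sql.Connection;
  import java.sql.PreparedStatement;
  import java.sql.ResultSet;
  import javax.naming.InitialContext;
  import javax.servlet.ServletException;
  import javax.servlet.annotation.WebServlet;
  import javax.servlet.http.HttpServlet;
  import javax.servlet.http.HttpServletRequest;
  import javax.servlet.http.HttpServletResponse;
  import javax.sql.DataSource;

  @WebServlet("/balance")
  public class BalanceServlet extends HttpServlet {
      protected void doGet(HttpServletRequest req, HttpServletResponse resp)
              throws ServletException, IOException {
          try {
              DataSource ds = (DataSource) new InitialContext().lookup("jdbc/cicsDataSource");
              try (Connection con = ds.getConnection();
                   PreparedStatement ps = con.prepareStatement(
                           "SELECT BALANCE FROM ACCOUNTS WHERE ACCT_ID = ?")) {
                  ps.setString(1, req.getParameter("acct"));
                  try (ResultSet rs = ps.executeQuery()) {
                      resp.getWriter().println(rs.next() ? rs.getString(1) : "not found");
                  }
              }
          } catch (Exception e) {
              throw new ServletException(e);
          }
      }
  }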

Like it or not, Cloud computing is a rapidly evolving technology, where increasingly more applications and associated services are being integrated into the Cloud, delivering cost savings and scalability accordingly.  Of course, for true enterprise class scalability and cost efficiency, arguably the IBM zSeries Mainframe is an ideal platform for Cloud technologies.  Therefore, with some slightly modified thinking, organizations can deploy Cloud based solutions and gain application promotion benefits, especially with a technology such as CICS.

There is a great presentation named Five Compelling Reasons for Creating a CICS Cloud that provides robust working examples of how to increase application availability, with real-life application development scenarios.

In conclusion, CICS continues to evolve and not only is it the best Enterprise Class Transaction Server, but the family of CICS products, including its Application Server, delivers a 21st Century Cloud computing compatible platform for the most demanding of business requirements.  Whether considering the IBM defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative or the more traditional Reliability, Availability and Serviceability (RAS) attributes of the zSeries Mainframe server, the latest version of CICS facilitates:

  • Rapid Application Development: Agile methodologies for rapid development, irrespective of programming language (E.g. Java, COBOL, C/C++, PL/I, et al).
  • Seamless & Timely Application Deployment: More frequent application updates, minimizing downtime and associated cost, while leveraging Cloud functionality to deploy new applications, application enhancements or bug fixes, side-by-side with existing real-life Production workloads.

Data Entry – Is Windows XP & Office 2003 End Of Support An Issue?

Recently somebody called me to say “do you realize your Assembler (ASM) programs are still running, some 25 years after you implemented them”?  Ouch, the problem with leaving comments and an audit trail, even in 1989!  It was a blast-from-the-past and a welcome acknowledgement, even though secretly, I can’t really remember the code.  We then got talking about how Mainframe programs can stand the test of time, through umpteen iterations of Operating System.  This article will consider whether you need a Mainframe to write application code that will stand the test of time.

Spoiler alert: No you don’t; nowadays a good application development environment, a competent software coder and most importantly of all, common sense, can achieve this, for Mainframe and Distributed Systems alike.  However, you might need to recompile the source code from time-to-time…

After several years of uncertainty, Microsoft have officially announced that support for Windows XP (SP3) & Office 2003 ends as of 8 April 2014.  Specifically, there will be no new security updates, non-security hotfixes, free or paid assisted support options or online technical content updates.  Furthermore, Microsoft state:

Running Windows XP SP3 and Office 2003 in your environment after their end of support date may expose your company to potential risks, such as:

  • Security & Compliance Risks: Unsupported and unpatched environments are vulnerable to security risks. This may result in an officially recognized control failure by an internal or external audit body, leading to suspension of certifications, and/or public notification of the organization’s inability to maintain its systems and customer information.
  • Lack of Independent Software Vendor (ISV) & Hardware Manufacturers Support:  A recent industry report from Gartner Research suggests “many independent software vendors (ISVs) are unlikely to support new versions of applications on Windows XP in 2011; in 2012, it will become common.” And it may stifle access to hardware innovation: Gartner Research further notes that in 2012, most PC hardware manufacturers will stop supporting Windows XP on the majority of their new PC models.

Looking at the big picture, anybody currently deploying Windows XP might want to consider the lifecycle of other Microsoft Operating System versions, for example, Windows Vista, Windows 7 & Windows 8.  As the Microsoft Windows Lifecycle Fact Sheet states, mainstream support for Windows 7 ends in January 2015, less than one year from now, and so arguably the only viable option is Windows 8.  The jump from Windows XP to Windows 8 is massive, not necessarily in terms of usability, but certainly and undoubtedly in terms of compatibility.

Those of us who experienced the Windows Vista, Windows 7 and more latterly Windows 8 upgrades know from experience that each of these upgrades had interoperability challenges, whether hardware (E.g. Printers, Scanners, Removable Storage, et al), software (E.g. Bespoke, COTS, Utilities, et al) or even web browser (E.g. Internet Explorer, Firefox, Chrome, Safari, et al) related.  Although many of these IT resources might be considered standalone or technology commodities, where a technology refresh is straightforward and an operational benefit, the impact on the business for user facing applications might be considerable.  Of course, the most pervasive business application for capturing and processing customer information is typically classified as data entry related…

So, why might a business still be deploying Windows XP or Office 2003 today?  One typical reason relates to data entry systems, either in-house written or packaged in a Commercial Off the Shelf (COTS) software product.  In all likelihood, one way or another, these deployments have become unsupported from a 3rd party viewpoint, either because of the Microsoft software support ethos or the COTS ISV support policy.

Looking back to when Microsoft Windows XP was first released, it offered an environment that allowed customers to think outside of the box for alternatives to traditional development methods, or put another way, Rapid Application Development (RAD) techniques.  Such a capability meant that businesses could deploy their own bespoke or packaged systems for capturing data, and thus automate the entirety of their business processes from cradle to grave with IT systems.  For a Small to Medium sized Enterprise (SME), this was a significant benefit, allowing them to compete, or at least enter their marketplace, without deploying a significant IT support infrastructure.

Therefore RAD and Microsoft Software Development Kit (SDK) techniques for GUI presentation (E.g. .NET, Visual, et al), sometimes and more latterly browser based, were supplemented with structured data processing routines, namely spreadsheet (CSV), database (SQL) and latterly more formalized data structure layouts (E.g. XML, XHTML).  Let’s not forget Excel 2003 and Access 2003, which offered powerful spreadsheet and database solutions respectively, capturing data, however crude that implementation might have been, while processing this data and delivering reports with a modicum of in-built high-level code.
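
Purely as a hedged illustration of the kind of routine described above (the file name, table layout and embedded SQLite database are assumptions chosen to keep the example self-contained, not any specific product’s design), a few lines of Java can move captured CSV records into a SQL table:

  // Illustrative only: a small RAD-era style data entry routine, reading captured
  // records from a CSV file and storing them in a SQL table via JDBC.
  // Requires a JDBC driver on the classpath (sqlite-jdbc used here for a self-contained example).
  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.PreparedStatement;

  public class CsvToSql {
      public static void main(String[] args) throws Exception {
          try (Connection con = DriverManager.getConnection("jdbc:sqlite:entries.db");
               BufferedReader csv = new BufferedReader(new FileReader("entries.csv"))) {
              con.createStatement().execute(
                  "CREATE TABLE IF NOT EXISTS ENTRIES (CUSTOMER TEXT, PRODUCT TEXT, QUANTITY INTEGER)");
              try (PreparedStatement insert =
                       con.prepareStatement("INSERT INTO ENTRIES VALUES (?, ?, ?)")) {
                  String line;
                  while ((line = csv.readLine()) != null) {
                      String[] fields = line.split(",");        // expected layout: CUSTOMER,PRODUCT,QUANTITY
                      insert.setString(1, fields[0].trim());
                      insert.setString(2, fields[1].trim());
                      insert.setInt(3, Integer.parseInt(fields[2].trim()));
                      insert.executeUpdate();
                  }
              }
          }
      }
  }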

However, as technology evolves, sometimes applications need to be revisited to support the latest and greatest techniques, and perhaps the SMEs that embraced this brave new world of RAD techniques were left somewhat isolated, for whatever reason; maybe business related, whether economic (E.g. dot com or financial markets) or not.

Let’s not judge those business folks still running Windows XP or Microsoft Office 2003 today; there are probably many good reasons as to why.  When they developed their business systems using a Windows XP or Office 2003 software base, I don’t think they envisaged that the next Microsoft Operating System release might eradicate their original application development investments, requiring a significant investment to upgrade their infrastructure for subsequent Windows versions, but more notably, for interoperability resources (E.g. Web Browsers, .NET, Excel, Access, ODBC, et al).

So if you’re a business running Windows XP and maybe Office 2003 today, potentially PC (E.g. Desktop, Laptop) upgrade challenges can be separated into two distinct entities: firstly, the hardware platform and operating system itself, where the “standard image” approach can simplify matters; and secondly, the business application, typically data entry and processing related.  Let’s not forget, those supported COTS software products, whether system utility (E.g. Security, Backup/Recovery/Archive, File Management, et al) or business function (E.g. Accounting, ERP, SCM, et al) related, can be easily upgraded.  It’s just those bespoke in-house systems or unsupported systems that require a modicum of thought and effort…

We all know from our life experiences, if we only have lemons, let’s make lemonade!  It’s not that long ago that we faced the so-called Millennium Bug (Year 2000/Y2K) challenge, which could be viewed either as a problem or as an opportunity.  The enlightened business faced up to the Year 2000 challenge, arguably overblown by media scare stories, upgraded their IT infrastructures and systems, and perhaps for the first time, at least made an accurate inventory of their IT equipment.  So can a similar approach be applied to this Windows XP and Office 2003 challenge?

The first lesson is acceptance; so yes, we have a challenge and we need to do something about it.  Even if your business has been running Windows XP or Office 2003 in an extended support mode for many years, in all likelihood the associated business systems are no longer fit-for-purpose, or could benefit from a significant face-lift to incorporate new logic and function that the business requires!

The second lesson is technology evolution; just as RAD and SDK were the application development buzzwords of the Windows XP launch timeframe, today the term studio or application studio applies.  An application studio provides a complete, integrated and end-to-end package for creating business applications, encompassing the design, test, debug and publishing activities.  Furthermore, in the last decade or so, there has been a proliferation of modern language (E.g. XHTML, Java, C, C++, et al) programmers, whether formalized as IT professionals or not (E.g. home coders).

The third lesson is, as always, cost versus benefit; the option of paying for Windows XP or Office 2003 extended support ends as of April 2014.  So what is the cost of doing nothing?  As always, cost is never the issue, benefit is.  Investing in new systems that are fit-for-purpose will of course deliver business benefit, and if the investment doesn’t pay for itself in Year 1, hopefully your business can build a several year business case to deliver the requisite ROI.

Finally, is remote data entry possible with a Windows XP based system?  Perhaps, but certainly not for each and every modern day device (E.g. Smartphone, Tablet, et al).  Therefore enhancing your data entry systems, with the latest presentation techniques, might deliver significant benefit, both for the business and its employees.  Remote working, whether field or home based, delivers productivity benefits, where such benefits can be measured in both business administration cost reduction and increased employee job satisfaction and associated working conditions.

So how easy can it be to replace an aging Windows XP and/or Office 2003 application?

Entrypoint is a complete application development package for creating high-performance data entry applications.  It is built around a scalable, client-server architecture that interfaces with SQL databases for data storage, and it also interfaces with standard communications products and commercial networks.

Entrypoint is a web based data entry system that includes Application Studio, a local development tool that allows the user to easily create any data entry system, based upon their specific and typically unique business requirements.  The Entrypoint thin and thick clients let the user enter their data either directly via web resources or via a local workstation (E.g. PC), as per their requirements, while being connected to the same database.

Entrypoint Benefits: Today’s 21st Century business is focussed on delivering tangible business benefit and cost efficient customer facing solutions that can be rapidly deployed, while being secure and compliant:

  • Flexible Data Entry: Whether via Intelligent Data Capture (IDC) and/or Electronic Data Capture (EDC), Entrypoint can accommodate any business requirement, either from scratch, or perhaps via conversion from a legacy platform (E.g. DOS).
  • Rapid Application Deployment: Entrypoint applications can be deployed in hours, typically by non-application development personnel, addressing long-term management and associated TCO concerns.
  • Audit: The Entrypoint Audit Trail Facility (ATF) tracks all changes made to records from the time they are first entered into the case report form throughout all editing activity, regardless of the number of users working on them.  The audit facility can be enabled on an application-by-application basis for all users, groups of users or individual users.
  • Security: Entrypoint includes a variety of features that yield the highest levels of critical security required for Clinical Trials.  Its inbuilt security features let you create a customized and granular security policy specific to your needs.  Entrypoint uses ODBC to connect to SQL databases for data storage, which provides an additional level of security via database logins, passwords and even built-in encryption.  Optional 128-bit encryption protects all messages sent to or from the server, delivering significantly greater protection; capabilities not always available from other data entry solutions.

Entrypoint is one of the simplest but most comprehensive data entry solutions that I have encountered and provides a cost-efficient solution for both the smallest and largest of businesses.  Furthermore, in all likelihood, an entry-level employee or graduate with programming skills could rapidly develop a Data Entry system with Entrypoint to replace any existing Windows XP (or any other Windows OS) based solution.  This observation alone demonstrates that somebody who actually works for the business, not a 3rd party IT professional, can not only perform the technical work required, but more importantly, be a company employee who can easily relate to and sometimes learn about the end-to-end business.

In the IT world, change is inevitable, and sometimes change is forced upon us.  Whatever your thoughts regarding end of support for Windows XP and Office 2003, there are options for you and your business to embrace this change, move forward, and improve your processes.  You no longer have the option to pay Microsoft for extended support, so why not use this money and invest in a system that can be easily supported, and easily adapted in the future, to provide long-term benefit for your business!

Application Performance Tuning – Why Bother?

With older generations of Mainframe Operating Systems, certainly MVS/XA and perhaps MVS/ESA, application performance tuning was a necessity, not an afterthought.  Quite simply, the cost of Mainframe resources, namely CPU, memory and disk, dictated that your mission critical business application might not perform to business requirements, unless you tuned your programming code.  Programmers, both of the system and application variety, understood the bits and bytes of the available programming languages (E.g. ASM, COBOL, PL/I) and Operating System (I.E. MVS), collaborating either via proactive process, or reactive problem solving.  With the continuing reduction of IT hardware component costs, the improvement in Operating Systems (E.g. 64-bit architecture) and newer programming languages (E.g. C, C++), it seems that application performance tuning is somewhat of an afterthought, but at what cost?

We all know that the cost of a Mainframe MIPS is significant, and although it might have reduced dramatically from a hardware viewpoint, from a software viewpoint the cost remains largely static at ~£1,500-£3,500 per MIPS, per year, depending on your configuration.  So if your applications are burning several hundred if not several thousand extra MIPS unnecessarily, that’s very expensive indeed!  Additionally and just as importantly, a badly tuned system will manifest itself in slower transaction response times and longer batch jobs, if applicable, which could impact service availability.  So why is there a seeming reluctance to tune business applications, Mainframe resident or not?
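
To put an illustrative number on it (purely a hedged example, not a quote for any specific configuration): eliminating 200 MIPS of unnecessary CPU consumption at a mid-range software cost of £2,500 per MIPS, per year, equates to 200 × £2,500 = £500,000 per annum, before any benefit from improved transaction response times or shorter batch elapsed times is even considered.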

If ever there was a functional IT area where the skills gap has never been wider, application performance tuning is that skill, when comparing the salty old sea dog Mainframe dinosaur with the newer Mainframe technician!

From an application development process viewpoint, where does the application performance tuning task live; before or after implementation?  The cynical amongst us will know: if it’s after implementation, there’s a strong likelihood said activity will never be performed!  If it’s before implementation, how many projects incorporate a meaningful stress test, or measure transaction response times versus an SLA or KPI metric?  Additionally, if the project is high-priority and/or running behind schedule, then performance testing is an activity that is easily removed…
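
For the projects that do measure transaction response times before implementation, the check itself need not be elaborate.  The following is a minimal, hedged sketch in Java of timing a transaction call against an assumed sub-second SLA; the invoked method and the 1000ms threshold are illustrative assumptions only:

  // Hedged sketch: timing a transaction call and comparing it against an assumed SLA.
  public class SlaCheck {
      private static final long SLA_MILLIS = 1000;     // assumed sub-second SLA target

      public static void main(String[] args) {
          long start = System.nanoTime();
          invokeTransaction();                          // placeholder for the real transaction call
          long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
          System.out.printf("Elapsed: %d ms - %s SLA of %d ms%n",
                  elapsedMillis, elapsedMillis <= SLA_MILLIS ? "within" : "BREACHES", SLA_MILLIS);
      }

      private static void invokeTransaction() {
          // Simulate work; in a real stress test this would drive the business transaction.
          try { Thread.sleep(250); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
      }
  }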

Back in the good old days, the late 1980’s to early 1990’s, some application performance tuning tools did start to emerge, most notably Strobe.  Strobe was useful to even the most accomplished of system and application programmer personnel, and invaluable to less experienced personnel, and so arguably Strobe became the de facto software tool for tuning Mainframe applications.  However, later releases of MVS (E.g. OS/390 and z/OS) and the non-event that was the Year 2000 (Y2K) seemed to remove the focus on, and importance of, application tuning.

Arguably most important of all, that software MIPS cost item contributed to the demise of application performance tuning, because Strobe and its competitors (E.g. ASG/BMC TriTune, CA Application Tuner, IBM APA, Macro4 ExpeTune, et al) utilize even more CPU to capture diagnostic trace information.  However, those companies that have undertaken such application tuning activities in the last decade or so are sitting pretty, having reduced the CPU (MIPS) resource consumed, lowering TCO and optimizing performance accordingly.  In the 21st Century, these software solutions are classified as Application Performance Management (APM) solutions.

Is there a better and easier way to stimulate an interest in the application performance tuning discipline?  If the desire exists to tune an application, lowering CPU MIPS usage and optimizing service performance, then the traditional tools and methods mentioned previously are available, but perhaps there is also a newer (or not so new) CPU performance data source…

With the introduction of the z10 server, a new function, CPU MF (CPU Measurement Facility), was incorporated.  Let’s not forget, z10 is now an n-2 technology, having been superseded by the z196/z114 and the latest zBC12/zEC12 generation of servers.  So each and every committed Mainframe customer should be positioned to benefit from the CPU MF function.

CPU MF provides optional hardware-assisted collection of information about logical CPU activity executed over a specified interval in selected Logical Partitions (LPARs).  The CPU MF counters function is intended to be run on a constant basis to collect long-term performance data (I.E. SMF Record 113), in a similar manner to how you collect other performance data.  I have previously briefly discussed how CPU MF SMF data can be used to increase Mainframe Server Capacity Planning efficiencies.

The CPU MF sampling function is a short duration, precise function that identifies where CPU resources are being used, to help you improve application efficiency.  Put very simply, CPU MF sampling data has minimal CPU overhead (E.g. ~0.1-1.0%) when collecting data (I.E. z/OS Hardware Instrumentation Services – HIS), but this data can then be used to identify CPU “hot spots”, which can then be further analysed to identify the “areas of code” generating the high CPU usage.  However, it was forever thus, whether an APM tool, or CPU MF sampling data, high CPU usage can be identified, but the application programmer must undertake the task of optimizing the application code!
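
By way of a hedged illustration, collections are started and stopped with z/OS operator commands against the Hardware Instrumentation Services started task; the started task name (HIS) and title text below are assumptions, so check your installation’s procedure name and z/OS level for the exact syntax:

  S HIS                                Start the Hardware Instrumentation Services address space
  F HIS,B,TT='CPUMF',CTRONLY           Begin a counters-only collection, feeding SMF 113 records
  F HIS,E                              End the current collection

Omitting CTRONLY also requests sampling data, which is the raw material for the hot spot analysis discussed below.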

IBM have done a great job in providing CPU MF counters data, optimizing the Capacity Planning process with the SMF 113 record, and the sample data opens up a realm of possibility, but a software solution is required to analyse and summarize this data.

Currently there are very few, if not only one, software solutions that analyse CPU MF sample data, namely zHISR from Phoenix Software International.  zHISR interfaces directly with z/OS Hardware Instrumentation Services to collect data for hotspot analysis of customer, vendor, or operating system program execution.  zHISR features include:

  • Support for up to 128 simultaneous data collection events.  zHISR collections do not interfere with any HIS functions, including sample or counter collection.
  • System console commands for many zHISR functions.
  • An Application Programming Interface to COBOL and Assembler for starting and stopping data collections. Collection lengths for API generated collections have a time range of one second or more.
  • Ability to schedule a collection with JCL so that collection starts when a given job or step begins.
  • Ability to store data collections as z/OS data sets or UNIX files.
  • Support for collections against CICS/TS transactions.
  • Analysis based on a time range within the collected data for a narrower spotlight on problem code.

An intuitive ISPF dialog allows the user to easily produce a CPU hot spots analysis, which can then be used to identify the offending code sections.  The user can then drill down and highlight the high CPU CSECT and program offset (instruction), comparing with the Associated Data (ADATA), and thus the source programming instruction.  Therefore the skill required to perform analysis is minimal, as is the CPU overhead in collecting analysis data, eradicating the potential barriers when embarking on an application tuning initiative.  Furthermore, the actual cost of deploying the zHISR software is not onerous, and so perhaps each and every committed Mainframe user can easily include application performance tuning in their application development lifecycle processes.

zHISR has a UNIX file system interface that lets you navigate the system and browse or delete files.  With zHISR, users can start and stop hardware event data collections and view the status of the current or prior HIS run.  zHISR also includes a memory display/alter utility that lets you view main storage in the CPU you are logged on to.  If zIIPs are present and zHISR is defined as an authorized subsystem, nearly all of the CPU processing used by zHISR is redirected to a zIIP.

There are also instances, however few and far between, where Mainframe customers have written their own proprietary in-house OLTP (On-Line Transaction Processor) and Relational Database Management Subsystem (RDBMS), where traditional APM software tools can’t provide a solution, only interfacing with underlying subsystems (E.g. Adabas, CICS, DB2, IDMS, WebSphere, et al).  In these instances, CPU MF and zHISR offer a solution to help such customers, who probably face challenges when they upgrade their Mainframe servers, ensuring software and application code is compatible with the new hardware and, ideally, exploits the latest functionality.

In conclusion, application performance tuning has to be a very important if not mandatory activity for the Mainframe Data Centre.  Whether via CPU MF or traditional APM software solutions, the cost reduction and performance improvement benefits of tuning should be compelling reasons to proactively engage in application tuning activities.  From a skills viewpoint, maybe the KISS (Keep It Simple Stupid) principle can apply, where CPU MF collects the data very simply and efficiently, complemented by zHISR, analysing the data in an intuitive and cost optimized manner.

So turning the subject matter on its head, Application Performance Tuning – Why Bother?  Why not!

Further information can be found in my z/OS Application Performance Tuning presentation, delivered at UK GSE in November 2012.