Pervasive Encryption & Compression: Why z15 Upgrade Activities Are Optimal & Strategic

A recent IBM Security sponsored Cost of a Data Breach report by the Ponemon Institute highlighted an average data breach cost of $3.86 Million.  Personally Identifiable Information (PII) accounted for ~80% of data breaches, with typical breaches exposing between 3,400 & 99,730 compromised records.  The term Mega Breach, classified as the exposure of 1+ Million records, is becoming more commonplace & the average cost for such events increases exponentially: ~$50 Million for exposures of up to 10 Million records, rising to ~$392 Million for exposures of 50+ Million records.  From an incident containment viewpoint, organizations typically require 207 days to identify & 73 days to contain a breach, an average lifecycle of 280 days.  Seemingly the majority (i.e. 95%+) of data records breached were not encrypted.  I think we can all agree prevention is better than cure & the costs of these data breaches are arguably immeasurable to an organization in terms of customer trust & revenue downturn…

With the launch of the IBM z14 in 2017, IBM announced that the Central Processor Assist for Cryptographic Function (CPACF) encryption feature was embedded in every processor chip.  The ability to encrypt data, both at rest & in flight, at low cost, was good news for IBM Z customers concerned about data security.  Classified as Pervasive Encryption (PE), the capability was designed to universally simplify data encryption processes, eliminating potential sources of data loss from data breach scenarios.

Encrypted data is effectively random & therefore does not compress, so we must consider the interaction of data compression & encryption accordingly.  An obvious downside of z14 data encryption is that it can render storage-level compression ineffective, because once the data is encrypted, it cannot be meaningfully compressed.  A zEnterprise Data Compression (zEDC) card could be deployed to compress the data before encryption, but at added expense!  Wouldn’t it be good if both data compression & encryption were performed on the CPU core?

With the IBM z15, the industry standard compression used by zEDC is now built into the z15 core as the Integrated Accelerator for zEnterprise Data Compression (zEDC), sitting alongside encryption via CPACF.  IBM z15 customers can now have the best of both worlds, compression followed by encryption, delivered by the processor cores.  Encryption therefore becomes even less expensive, because after data compression, there is significantly less data to encrypt!

zEDC can perform compression for the following data classification types:

  • z/OS data (SMF logstreams, BSAM & QSAM data sets, DFSMShsm & DFSMSdss processing)
  • z/OS Distributed File Service (zFS) data
  • z/OS applications, using standard Java packages or zlib APIs
  • z/OS databases (Db2 Large Objects, Db2 Archive Logs, Content Manager OnDemand)
  • Network transmission (Sterling Connect:Direct)
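
The compress-then-encrypt ordering enabled by the z15 can be sketched with standard Java APIs.  This is a minimal illustration: `java.util.zip` Deflater calls are the style of zlib workload the Integrated Accelerator for zEDC targets & `javax.crypto` AES operations the style CPACF accelerates, although whether hardware acceleration actually engages depends on the JVM & platform:

```java
import java.util.zip.Deflater;
import java.util.zip.Inflater;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class CompressThenEncrypt {
    // Compress first: encrypted output is effectively random & will not compress
    static byte[] deflate(byte[] data) {
        Deflater d = new Deflater();
        d.setInput(data);
        d.finish();
        byte[] buf = new byte[data.length + 64];
        int n = d.deflate(buf);
        d.end();
        return Arrays.copyOf(buf, n);
    }

    static byte[] inflate(byte[] data, int originalLen) throws Exception {
        Inflater i = new Inflater();
        i.setInput(data);
        byte[] out = new byte[originalLen];
        int n = i.inflate(out);
        i.end();
        return Arrays.copyOf(out, n);
    }

    public static void main(String[] args) throws Exception {
        // Repetitive 2,000 byte sample record, standing in for typical SMF/QSAM data
        byte[] record = new String(new char[2000]).replace('\0', 'A').getBytes("UTF-8");
        byte[] compressed = deflate(record);

        // Encrypt only the compressed payload (AES-GCM via the standard JCE)
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] cipherText = c.doFinal(compressed);

        System.out.println("original=" + record.length
            + " compressed=" + compressed.length
            + " encrypted=" + cipherText.length);

        // Round trip: decrypt, then decompress
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] roundTrip = inflate(c.doFinal(cipherText), record.length);
        System.out.println("roundTrip=" + Arrays.equals(roundTrip, record));
    }
}
```

Compressing the repetitive sample record first leaves only a few tens of bytes to encrypt, rather than 2,000, which is precisely the economy described above.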

Arguably the increase in remote working due to COVID-19 will increase the likelihood & therefore the cost of data breaches & although encryption isn’t the silver bullet for hacking remediation, it goes a long way.  The IBM Z Mainframe might be the most securable platform, but it’s as vulnerable to security breaches as any other platform, primarily due to human beings, whether the obvious external hacker, or other factors such as the insider threat, social engineering, software bugs & poor security processes.  If it isn’t already obvious, organizations must periodically & proactively perform Security Audit, Penetration Test & Vulnerability Assessment activities, to name but a few, to combat the aforementioned costs of a security breach.

Over the decades, IBM Z Mainframe upgrade opportunities have manifested themselves every few years & of course, high end organizations are likely to upgrade each & every time; with a demonstrable TCO & ROI proposition, why not?  For many organizations, though, such an approach is not practicable or financially justifiable.  Occasionally the “stars align” & an IBM Z Mainframe upgrade activity becomes significantly strategic for all users.

The IBM z15 represents such a timeframe.  Very rarely do significant storage & security functions coincide, in this instance on-board CPU core data compression & encryption, eliminating host resource (i.e. software, hardware) usage concerns & safeguarding CPU (i.e. MSU, MIPS) usage optimization.  External factors such as global data privacy standards (e.g. EU GDPR, US PII) & associated data breach penalties increase the need for strategic proactive security processes, with data encryption high on the list of requirements.  Add in the IBM Z Tailored Fit Pricing (TFP) option, simplifying software costs, & the need to compress & encrypt data without adding to the host CPU baseline, & the IBM z15 platform is ideally suited for these converging requirements.  Pervasive Encryption (PE) was introduced on the IBM z14 platform, but on-board CPU core compression was not; GDPR implementation was required by 25 May 2018, with associated significant financial penalties & disclosure requirements; IBM Z Tailored Fit Pricing (TFP) was announced on 14 May 2019, typically based upon an MSU consumption baseline.

Incidentally, the IBM z15 platform can transform your application & data portfolio with enterprise class data privacy, security & cyber resiliency capabilities, delivered via a hybrid cloud.  Mainframe organizations can now get products & ideas to market faster, avoiding cloud security risks & complex migration challenges.  With the Application Discovery & Delivery Intelligence (ADDI) analytical platform, cognitive technologies are deployed to quickly analyse Mainframe applications, discovering & understanding interdependencies & minimizing change risk, for rapid application modernization.

In conclusion, a year after the IBM z15 platform was announced in September 2019, field deployments are high, with the majority of promised function delivered & field tested.  With the ever-increasing cybersecurity threat of data breaches & an opportunity to simplify IBM Z software Monthly License Charges (MLC), a z15 upgrade is both strategic & savvy.  Even, & maybe especially, if you’re using older IBM Z server hardware (e.g. z13, zEC12/zBC12, z114/z196, z10, z9, et al), your organization can easily produce a cost justified business case, based upon reduced software costs, Tailored Fit Pricing or not, plus optimized compression & encryption, delivering the most securable platform for your organization & its customers.  Combined with proactive security processes to eliminate a myriad of risk register items, maybe that’s a proposition your business leaders might just willingly subscribe to…

Smartphone Security Dependency: Applying Mainframe Common Sense To Real Life…

I’m by no means a security expert; for that discipline we must acknowledge RSM Partners in the IBM Mainframe space & I congratulate Mark Wilson, their Management Team & personnel on their recent acquisition by BMC.

One way or another, for 25 years since 1995 I have been a carer for my parents, who both died of brain cancer & dementia, my Father in 2003 & my Mother in the last few months.  Other than to pick up mail & perform minimal house maintenance duties, I haven’t lived at my house since October 2018.  Of all my achievements in life, keeping both of my parents out of a specialized care setting is without doubt my greatest, achieved on my own, being a widow & having outlived my only sibling when I was 9 years old.  Indeed, when I look back on things, how I have managed to balance this family activity with any type of career development seems incredible to me.  Perhaps I can now concentrate on my alleged Mainframer day job…

It’s amazing what skills you can learn away from your day job; even in recent bereavement, dealing with the bureaucracy of probate can teach you something, especially at this current juncture, where we finally seem to be in the midst of a widespread password to Multi-Factor Authentication (MFA) security evolution!

Having to deal with a probate estate, including property, there are some recurring costs you have to pay, primarily power, water, telecommunications & local authority charges, while you await grant of probate & eventually sell the house.  Of course, you need a bank account to do this & for want of a better term, I decided to make lemonade out of lemons for this seemingly mundane activity.  Currently, in the UK, many of the major current account providers want your business & offer switching inducements of ~£100-£175.  I have switched current accounts 3 times in the last few months, accumulating ~£500 that I have donated to a homeless charity.  As somebody much wiser than I once noted, there’s always somebody in a worse situation than you & having to face my first Christmas without a blood relative, this year I volunteered for said homeless charity, which once again, was a real eye opener.

What became obvious while I was subscribing to & switching from these largely UK clearing bank current accounts was the changeover from a password & memorable information account authentication system to a password & One Time Passcode (OTP) via Mobile Phone SMS (Text Message) protocol.  Each of these clearing banks deploys the latest IBM Z Mainframe for their System Of Record (SOR) data & security management, but technology doesn’t make for a bulletproof system, because as always, there is the human user of these systems.  My experiences of dealing with my elderly & frail Mother in her last few years then became pertinent; in her heyday, Mum had the most amazing memory & used & commissioned mini computers herself in the early 1980’s, but the degeneration of her motor & neurological abilities rendered her largely helpless trying to use a smartphone.  Of course, this will apply to many people of all ages with various health challenges; do technology advances exclude them from 21st century technology & services?

In theory, hopefully most organizations are now realizing that passwords are a major vulnerability, at least from a human viewpoint, & I guess we IT folks all know the statistics of how long it takes to crack a password of various lengths & character compositions.  Even from my own viewpoint, for many years I have been using a Password Manager, where my password to access this system exceeds 50 characters in length.  I have tens of passwords in this system, I don’t know any of them & they’re all automatically generated & encrypted.  However, if this Password Manager is compromised, I don’t expose one resource, I expose tens!  Once again, even for this system, Multi-Factor Authentication via a password & One Time Passcode (OTP) via Mobile Phone SMS (Text Message) is the access protocol.  It then occurred to me, from a generic viewpoint, that most security access systems ask you to register several pieces of memorable information: what’s your favourite book; mother’s maiden name; favourite sports team; pet’s name, et al.  Maybe some of this information is duplicated & although not as vulnerable as having the same password for all of your account access, there’s a lot of duplicated personal information that could compromise many accounts…
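
For context, the SMS One Time Passcodes referenced throughout are one delivery mechanism; authenticator apps instead generate codes locally using the HOTP/TOTP standards (RFC 4226/RFC 6238).  A minimal HOTP sketch in Java, verifiable against the published RFC 4226 test vectors:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

public class Hotp {
    // RFC 4226 HOTP; TOTP (RFC 6238) simply sets counter = unixTime / timeStep
    static int hotp(byte[] secret, long counter, int digits) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
        int offset = hash[hash.length - 1] & 0x0F;           // dynamic truncation
        int bin = ((hash[offset] & 0x7F) << 24)
                | ((hash[offset + 1] & 0xFF) << 16)
                | ((hash[offset + 2] & 0xFF) << 8)
                |  (hash[offset + 3] & 0xFF);
        return bin % (int) Math.pow(10, digits);             // keep low-order digits
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "12345678901234567890".getBytes("US-ASCII");
        // RFC 4226 Appendix D test vectors for this shared secret
        System.out.println(hotp(secret, 0, 6)); // 755224
        System.out.println(hotp(secret, 1, 6)); // 287082
    }
}
```

Because the code is derived from a shared secret & a moving factor rather than delivered over the mobile network, this approach also sidesteps SMS interception risks.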

Additionally, in the last several years, the evolution towards a cashless society has accelerated.  I myself use a mobile wallet, a mobile payment app with NFC (Near Field Communication), for contactless payment convenience.  The convenience factor of these systems is significant, but once again, for those people with health challenges, can they easily use these systems?  We must then consider: how much information is accessed, or even stored, on a smartphone to operate these financial accounts?

To recap, knowing the major UK banking institutions, I know my financial account password is stored in a secure Mainframe Server repository (i.e. ACF2, RACF, Top Secret) & associated account data is most likely protected at rest & in-flight via Pervasive Encryption (PE) or other highly secure encryption techniques.  However, to access these highly secure Mainframe systems, the client I’m using is a smartphone, with a hopefully highly secure Operating System, Mobile Banking App & Password Manager.  If I’m a bad actor, historically I would try to hack the most pervasive Operating System of the time, Microsoft Windows, via desktop & laptop PC’s.  Today, perhaps I’m going to focus on the most pervasive client, namely mobile devices, typically operating via iOS & Android.  Of course, it’s no surprise that there are increasing reports of security exposures in these mobile operating systems & associated web resources, servers & browsers.

Additionally, in recent times, a well-known financial institution was compromised, revealing the key personal information of ~145 Million US citizens, due to the notorious “Apache Struts” vulnerability.  This financial institution does deploy an IBM Mainframe, which historically would have afforded a tightly controlled Mainframe centric environment with no public Internet links, evolving to a decentralized environment, maybe globally outsourced, with a myriad of global Internet connected devices.  If only we could all apply the lessons & due diligence measures learned over the many decades of our IBM Mainframe experience.  However, this notable data breach happened at an organization that had been deploying a Mainframe for decades, proving that it’s human beings that typically make the most costly high profile mistakes!

Being a baby boomer & a proud Mainframer, I know what can go wrong & have planned accordingly.  I have separate accounts for mobile contactless payments, credit as opposed to debit based & more than one bank current account.  Whether by account isolation or the Consumer Credit Act, I’m limiting or eliminating any financial loss risk should my smartphone or financial account information be compromised.  For belt & braces protection, I always carry a modicum of cash, because how many times, even recently, have Mainframe based banks had card processing or cash machine access outages?  I’m just applying life experience & business continuity to my own daily life requirements, but how many people in the general public apply these due diligence measures?  Once again, please consider these members of the general public might be your family member, an inexperienced child or young adult, or more likely, perhaps a vulnerable aging parent.

Once again, applying my Mainframe Disaster Recovery & Business Continuity experience, how can I safeguard 99.999%+ availability for my day-to-day life if my smartphone is lost or my Password Manager is compromised?  It’s not easy; a standby phone, sure, but what is the cost of the latest premium smartphone & how easy is it to synchronize two Password Manager solutions from different software providers?  From my viewpoint, this is somewhat analogous to the IBM Mainframe hot versus warm or cold start DR process.  If you want high availability, you have to duplicate your expensive hardware, in the unlikely event you suffer a hardware outage.  Unlike the IBM Mainframe System Of Record (SOR) data, where of course you must have the same software & data on both system images, if somebody compromises your Password Manager, was that a human or software error?  I don’t have the answers, I just try to apply due diligence, but I’m not sure how many members of the general public possess the life & vocational experience a Mainframe baby boomer has.

Without doubt, eliminating passwords is a great step forward, but is Multi-Factor Authentication (MFA) the “silver bullet”?  I don’t think so.  Human beings are just that, human, born to make mistakes.  Software is just that, prone to bugs & exposures, inadvertent or otherwise.  Centralizing your whole life in a smartphone has many advantages, but is it as vulnerable as keeping your life savings under the mattress?

Finally, thank you Mum & Dad for giving me this life opportunity & showing me dignity & strength in your dying days.  Thank you to the Mainframe community for providing me with so many opportunities to learn.  Maybe you can all give something back to the wider world for the causes that mean something to you.  The local charity I discovered & supported was the Northampton Hope Centre, which tackles poverty & homelessness.  There but for the grace of god certainly applies to all of us, at one time or another, so let’s try & support as many people as we can, those close to home & those in need.  It only occurred to me when I lost my Mother that eventually, if we live long enough, we all become orphans & a few weeks before I became an orphan, Coldplay released a song, Orphans.  There’s a line in that song, “I want to know when I can go, back & feel home again”.  For me, hopefully after about 18 months, the end of March 2020 might be that day!

Enabling IBM Z Security For The Cloud: Meltdown & Spectre Observations

The New Year period of 2018 delivered unpleasant news for the majority of IT users deploying Intel chips for their Mission Critical workloads.  Intel chips manufactured since 1995 were identified as having a security flaw or bug.  This kernel-level vulnerability leaks memory, allowing hackers to read sensitive data, including passwords, login keys, et al, from protected memory.  It therefore follows that this vulnerability facilitates malware insertion.  Let’s not overlook that x86 chips don’t just reside in PCs; their use is ubiquitous, including servers, the cloud and even mobile devices, and the bug impacts all associated operating systems, Windows, Linux, macOS, et al.  Obviously, kernel access simply bypasses everything security related…

From a classification viewpoint, Meltdown is a hardware vulnerability affecting a plethora of Intel x86 microprocessors, ten or so IBM POWER processors, and some ARM and Apple based microprocessors, allowing a rogue process to read all memory, even when not authorized.  Spectre breaks the isolation between different applications, allowing attackers to trick error free programs, which actually follow best practices, into leaking sensitive data and is more pervasive encompassing nearly all chip manufacturers.

A number of software patches have been issued, firstly in late January 2018, which inevitably caused other usability issues, although patch reliability has become more stable during the last three months.  Intel now claims to have redesigned its upcoming 8th Generation Xeon and Core processors to further reduce the risks of attack via the Spectre and Meltdown vulnerabilities.  Of course, these patches, whether at the software or firmware level, impact chip performance and, as always, the figures vary greatly, but anything from 10-25% seems in the ball-park, with obvious consequences!

From a big picture viewpoint, if a technology is pervasive, it’s a prime target for the hacker community.  Windows has been the traditional easy target, but an even better target is the CPU chip itself, encompassing all associated Operating Systems.  If you never had any security concerns from a public cloud viewpoint, arguably that was a questionable attitude, but now these rapidly growing public cloud providers really need to up their game from an infrastructure (IaaS) provision viewpoint.  What other chip technologies exist that haven’t been impacted (to date) by these Meltdown and Spectre vulnerabilities; IBM Z, perhaps not?

On 20 March 2018 at Think 2018 IBM announced the first cloud services with Mainframe class data protection:

  • IBM Cloud Hyper Protect Crypto Services: deliver FIPS 140-2 Level 4 security, the highest security level attainable for cryptographic hardware. This level of security is required by the most demanding of industries, for example Financial Services, for data protection.  Physical security mechanisms provide a complete envelope of protection around the cryptographic module, with the intent of detecting and responding to all unauthorized attempts at physical access.  Hyper Protect Crypto Services deliver these highest levels of data protection from IBM Z to IBM Cloud.  Hyper Protect Crypto Services secures your data in a Secure Service Container (SSC), providing the enterprise-level security and impregnability that enterprise customers have come to expect from IBM Z technology.  Hardware virtualisation protects data in an isolated environment.  The SSC ensures no external data access, including from privileged users such as cloud administrators.  Data is encrypted at rest, in process and in flight.  The available support for Hardware Security Modules (zHSM) allows for digital keys to be protected in accordance with industry regulations.  The zHSM provides safe and secure PKCS#11 APIs, which make Hyper Protect Crypto Services accessible to popular programming languages (e.g. Java, JavaScript, Swift, et al).
  • IBM Cloud Hyper Protect Containers: enable enterprises to deploy container-based applications and microservices, supported through the IBM Cloud Container service, managing sensitive data within a security-rich Service Container Systems environment via the IBM LinuxONE platform. This environment is built with IBM LinuxONE Systems, designed for EAL5+ isolation, and Secure Service Container technology designed to prevent privileged access from malicious users and Cloud Admins.

From an IBM and indeed industry viewpoint, security concerns should not be a barrier for enterprises looking to leverage cloud native architecture to transform their business and drive new revenue from data, using higher-value services including Artificial Intelligence (AI), Internet of Things (IoT) and blockchain.  Hyper Protect Crypto Services is the cryptography function used by the IBM Blockchain Platform.  The Hyper Protect Crypto Services Lite Plan offers free experimental usage of up to 10 crypto slots, which are deleted only after 30 days of inactivity.

In a rapidly changing landscape, where AI, Blockchain and IoT are driving rapid cloud adoption, the ever-increasing cybersecurity threat is a clear and present danger.  The manifestation of security vulnerabilities in the processor chip, whether Apple, AMD, Arm, IBM, Intel, Qualcomm, et al, has been yet another wake-up alert and call for action for all.  Even from an IBM Z ecosystem viewpoint, there were Meltdown and Spectre patches required, and one must draw one’s own conclusions as to the pervasive nature of these exposures.

By enabling FIPS 140-2 Level 4 security via Cloud Hyper Protect Crypto Services and EAL5+ isolation via Cloud Hyper Protect Containers on IBM LinuxONE, if only on the IBM Cloud platform, IBM is offering the highest levels of security accreditation to the wider IT community.  Noting that it was the Google Project Zero team that identified the Meltdown and Spectre vulnerability threats, hopefully Google might consider integrating these IBM Z Enterprise Class security features in its Public Cloud offering?  It therefore follows that all major Public Cloud providers, including Amazon, Microsoft, Alibaba, Rackspace, et al, might follow suit?

In conclusion, perhaps the greatest lesson learned from the Meltdown and Spectre issue is that all major CPU chips were impacted and in a rapidly moving landscape of ever increasing public cloud adoption, the need for Enterprise Class security has never been more evident.  A dispassionate viewpoint might agree that IBM Z delivers Enterprise Class security and for the benefit of all evolving businesses, considering wider and arguably ground-breaking collaboration with technologies such as blockchain, wouldn’t it be beneficial if the generic Public Cloud offerings incorporated IBM Z security technology…

DB2: Internal Subsystem Security vs. External Security Manager (ESM)?

With the ever increasing requirement for regulatory compliance and the clear and present danger associated with cybersecurity attacks, isn’t now the best time to safeguard your organization’s most important asset, namely business data?  Various industry analyst quotes state that ~80%+ of Mainframe data resides in databases and associated data sources and ~80%+ of global corporate data originates or resides on IBM Mainframes.  Rightly or wrongly, DB2 is the most pervasive of database subsystems, offering two mechanisms for security, either internal subsystem based or External Security Manager (ESM) based via ACF2, RACF or Top Secret.  When DB2 was first released in 1983, Mainframe security was in its infancy, perhaps even an afterthought, and so implementing internal DB2 security was the typical approach for many years.  Several decades later, we can ask that age old question: what is the best security solution for my mission critical and priceless data?  I’m not sure it’s rhetorical; the answer is patently obvious, external security!

RACF and DB2 security integration was introduced in 1997 with OS/390 2.4 and DB2 Version 6, so there was a ~14 year period where DB2 internal security was the only option!  Personally, ~20 years ago I was involved with an internal DB2 to RACF security migration project, part of a larger Operating System, DB2 and indeed CICS upgrade.  Basically, the DB2 DBA team stated “we would have never implemented internal DB2 security if the RACF option was available; can you migrate to RACF for us?”  The simple reality is that Security Management is not a core DBA skill and such a process is ideally delivered by a Subject Matter Expert (SME).  Of course, DB2 was somewhat straightforward ~20 years ago, as were its security features, but in the last ~20 years DB2 has become more complex and enterprise wide, while I’m often surprised by the number of organizations I encounter, both small and large, still deploying internal DB2 security…

Recognizing a ~20 year longevity period of RACF security support for DB2, maybe even the most conservative of organizations might concede that the technology is proven and works?  From a business viewpoint, such a migration from DB2 internal to an External Security Manager (ESM) is the proverbial “no brainer”, because:

  • Subject Matter Expert (SME): Clearly all IBM Mainframe organizations now have dedicated security professionals who are ideally placed to implement enterprise wide security policies. A DB2 DBA is a highly skilled SME in their own discipline, most likely welcoming the migration of security from DB2 to ACF2, RACF or Top Secret.
  • Compliance: A plethora of industry regulations, including but not limited to GLB, SOX, PCI-DSS, et al, dictate that a hybrid of technical skills and business policy knowledge is required. This has generated a requirement for the executive level CISO role and associated security certifications (e.g. CISA, CISM, CISSP) for SME resources.
  • Auditability: From a board level CxO viewpoint, which technical resource would you want responsible for your security policy, the CISO/CIO and their security engineers or a DB2 DBA?
  • Hacking-Penetration Testing: Rightly or wrongly, rightly in my opinion, Penetration (Pen) Testing is a methodology to try and hack a system in order to highlight security vulnerabilities, supplementing the traditional periodic audit processes. Once again, high levels of security expertise are required for such activities.

From a technical viewpoint, what is the complexity of performing a DB2 internal to RACF external security migration?

From a DB2 viewpoint, internal security rules are stored in DB2 catalog tables with the SYSIBM.SYSxxxAUTH naming convention.  These data repositories can therefore be processed with a straightforward DB2 to RACF security migration tool (RACFDB2).  As per any migration activity, Garbage In, Garbage Out (GIGO) applies, and this golden rule dictates the requirement for a collaborative team effort to execute a DB2 to RACF security migration process.  Of course, the most important resources will be the DB2 DBA(s) responsible for maintaining DB2 security and a RACF SME.  Between them, these two resources have all of the skills required to perform this migration process, if not the experience.

From a documentation viewpoint, there are several IBM and vendor resources that can be referenced to simplify this process.

The purpose of this blog post is a “call to action”, for those sites still deploying DB2 internal security, to migrate to their External Security Manager (ESM), whether ACF2, RACF or Top Secret.  There are also options for the migration of internal DB2 security to CA ACF2 and Top Secret respectively.

As previously stated, the DB2 DBA will be ideally placed to review the existing internal DB2 security environment, performing any clean-up and rationalization before the actual migration process.  The initial pass of the migration process will inevitably produce a one-to-one (1:1) mapping of rules, generating numerous security definitions extraneous to requirements.  This is where the ACF2, RACF or Top Secret SME can collaborate with their DB2 DBA, applying grouping, masking and generic filters to vastly reduce and simplify the number of security definitions required.  As with any migration, perform on the lowest level non-Production environment first, apply the lessons learned and use common sense, issuing warning messages for inadvertent security policy errors, as opposed to denying security access, for Production migrations, allowing for a smooth transition from DB2 internal to ESM based security.
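
The grouping and masking step can be illustrated with a small, hypothetical Java sketch; the resource names and the simplistic single-qualifier grouping rule are invented for illustration only, as real ACF2, RACF or Top Secret generic profiles support far richer masking:

```java
import java.util.*;
import java.util.stream.*;

public class ProfileGrouping {
    // Hypothetical illustration: collapse discrete per-object rules that share a
    // high-level qualifier into one generic profile, mirroring the manual
    // grouping/masking a security SME applies after the 1:1 migration pass
    static Set<String> generalize(List<String> resources) {
        Map<String, Long> byHlq = resources.stream()
            .collect(Collectors.groupingBy(r -> r.substring(0, r.indexOf('.')),
                                           Collectors.counting()));
        return byHlq.entrySet().stream()
            .map(e -> e.getValue() > 1
                    ? e.getKey() + ".*"   // several rules under one qualifier: go generic
                    : resources.stream()  // single rule: keep the discrete profile
                          .filter(r -> r.startsWith(e.getKey() + "."))
                          .findFirst().get())
            .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        List<String> oneToOne = Arrays.asList(
            "PAYROLL.EMPLOYEE", "PAYROLL.SALARY", "PAYROLL.TAXCODE", "AUDIT.TRAIL");
        // Contains PAYROLL.* and AUDIT.TRAIL (ordering may vary)
        System.out.println(generalize(oneToOne));
    }
}
```

Four discrete definitions collapse to two, and at real-world scale this is how thousands of 1:1 migrated rules reduce to a manageable, maintainable profile set.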

In my opinion, each and every IBM Mainframe organization has the ability to initiate this DB2 internal to external ACF2, RACF or Top Secret migration project.  Leveraging from 3rd party organizations also makes sense and in no particular order, other than alphabetical, I would suggest IBM Global Services, millennia, RSM Partners or Vanguard.

In conclusion, the IBM System z External Security Manager (ESM), whether ACF2, RACF or Top Secret is an ever-evolving solution with highly advanced security functionality and the de facto central repository for IBM Mainframe security policy management.  From a Security Information & Event Management (SIEM) integration viewpoint, any IBM Mainframe security policy violations will be reported upon in near real-time, while being managed by IBM Mainframe security experts.  Without doubt, if DB2 was implemented before 1997, internal security would have applied, but there has been a ~20 year period where migration to the ACF2, RACF or Top Secret ESM could have happened.  If such a migration activity applies to your organization, I would hope it’s a high priority item, given the potential security risk and priceless value of your business data!

The IBM Mainframe: Just Another Node On The IP Network!

With the introduction of MVS/ESA Version 4.3 in 1993, the IBM Mainframe included the major foundations for meaningful Distributed Systems connectivity, including the first steps of POSIX compliance via OpenEdition functionality.  However, even before this timeframe, the TCP/IP protocol was available in the first release of MVS/ESA Version 4 (4.1), albeit in a very limited fashion.  In this instance, MVS was benefitting from the path already trodden by the VM Operating System and the TCP for VM software product.  Put another way, even while TCP/IP was in its early stages, being deployed and evolved in universities and scientific laboratories (e.g. CERN), its foundation was being embedded into the IBM Mainframe.

Early IBM Mainframe TCP/IP usage allowed for RS/6000 (AIX) connectivity, LAN integration via Novell NetWare, typically via the 3172 Interconnect Controller, Sockets Interface (e.g. CICS) support, et al.  In 1994, IBM introduced the Open Systems Adapter (OSA) processor feature for S/390 Parallel Enterprise Servers.  The OSA provided native Open Systems connectivity to the Local Area Network (LAN), directly via the Mainframe processor.  The OSA feature supported Fiber Distributed Data Interface (FDDI), Token-Ring & Ethernet LANs, arguably making the 3172 controller obsolete.

So, since the early-to-mid 1990s, even before pervasive usage of the Internet, the Mainframe was already a fully functioning and efficient user of IP networking.

How is the TCP/IP function being utilized by the IBM Mainframe today?

TCP/IP on z/OS supports all of the well-known server and client applications.  The TCP/IP started task is the engine that drives all IP-based activity on z/OS.  Even though z/OS is an EBCDIC host, communication with ASCII-based IP applications is seamless.
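As a simple illustration of that seamless translation, converting between an EBCDIC code page and ASCII is a lossless code-page conversion.  The following Python sketch uses cp037 (one common US EBCDIC code page, chosen purely for illustration) to show the principle:

```python
# EBCDIC <-> ASCII translation, of the kind performed transparently by z/OS
# IP applications; cp037 is one common US EBCDIC code page (illustrative only).
text = "HELLO MAINFRAME"

ebcdic_bytes = text.encode("cp037")   # the EBCDIC byte representation
ascii_bytes = text.encode("ascii")    # the ASCII byte representation

assert ebcdic_bytes != ascii_bytes            # the byte encodings differ...
assert ebcdic_bytes.decode("cp037") == text   # ...but translation is lossless
```

The round-trip property is what allows an EBCDIC host to interoperate with ASCII peers without the application being aware of it.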

IP applications running on z/OS use a resolver configuration file for environmental values.  Locating a resolver configuration file is somewhat complicated by the dual operating system nature of z/OS (UNIX and MVS).  Nearly every z/OS customer deploys the following core TCP/IP services:

TCP/IP Daemon: The single entity that handles, and is required for, all IP-based communications in a z/OS environment is the TCP/IP daemon itself.  The TCP/IP daemon implements the IP protocol stack and runs a huge number of IP applications to the same specifications as any other operating system.

TCP/IP Profile: Loaded by TCP/IP at startup.  If a change needs to be made to the TCP/IP configuration after it has been started, TCP/IP can be made to reload the profile dynamically (or read a new profile altogether).

FTP Server: Like some other IP applications, FTP is actually a z/OS UNIX System Services (USS) application.  It can be started within an MVS environment, but it does not remain there; it immediately forks itself into the z/OS UNIX environment and the parent task then terminates.

Telnet Daemon: There are two telnet servers available in the z/OS operating environment.  One is the TN3270 server, which supports line mode telnet, but it is seldom used for just that.  Instead, it is primarily used to support the TN3270 Enhanced protocol. The other telnet server is a line mode server only, referred to as the z/OS UNIX Telnet server (otelnetd).

Many IBM and ISV software products exploit IP and USS functionality, most typically WebSphere (MQ).

Whether via UNIX System Services (USS) or TCP/IP usage, the convergence of IBM Mainframe and UNIX technologies arguably became mandatory with the deployment of TCP/IP on the IBM Mainframe.  Obviously the technical personnel that support these different platforms have their own viewpoints as to which platform might be the best, but that is somewhat of an arbitrary point.  However, what is absolutely certain is the need to recognize how data is stored and secured in a UNIX environment and indeed in the z/OS (MVS) specific environment, originally named MVS OpenEdition but now commonly referred to as OMVS.

There are fundamental differences, too numerous to mention, between the user and file management policies and processes of z/OS and UNIX, and likewise between their security and data access lifecycle intricacies.  So what, you might say!  That might be a cursory and lax attitude, as business critical data is probably being stored in OMVS file systems, if only for FTP purposes, but more than likely for other more pervasive and user-based access (E.g. Database, Messaging, Data Mining, Data Exchange, et al).

So, which technical party is managing the security of UNIX System Services (USS) file systems for the OMVS Mainframe deployment?  Is it the Mainframe Systems Programmer, the UNIX System Administrator, the Mainframe Security Team, or somebody else?  To date, some people might have thought it didn’t matter, but of course, seasoned security professionals knew that this was never the case.  However, the migration to z/OS 2.1 is a tangible juncture for each and every IBM Mainframe installation to review their USS and thus OMVS security deployment.  Why?

The BPX.DEFAULT.USER facility was introduced with OS/390 2.4 and was a commonly used process for implementing USS (OMVS) security.  However, with z/OS 2.1, the BPX.DEFAULT.USER facility is withdrawn, meaning that the Mainframe user must perform some migration actions.  IBM provides some generic assistance with this challenge via APARs OA42554 and OA37164.  However, maybe this is an ideal juncture to perform a thorough review of USS (OMVS) security, vis-à-vis a comprehensive and dispassionate audit, highlighting issues, implementing standards and securing exposures.  For example, use of UID(0) must be eradicated and certainly no human user should be allocated such privileges.

There are some useful guidelines available from security specialists such as Vanguard, where the process can be simplified using their Identity & Access Management (IAM) toolset.  Similarly, recent user conferences have included presentations on this subject matter.

In conclusion, the IBM Mainframe can be classified as just another node on the IP (TCP/IP) network.  However, as always, no matter how secure the Mainframe platform might be, the biggest threat is typically the human being, and for USS, the migration to z/OS 2.1 forces us to review OMVS security settings.  Therefore, let’s do a good job and eradicate any security exposures we might have inadvertently implemented over the years.  As we all know, passing an external security audit process doesn’t necessarily mean our IT systems and processes are secure, while sometimes the internal security people are better qualified or more knowledgeable than external auditors.  Arguably most external auditors will do a good job of auditing UNIX platforms, yet their Mainframe knowledge and abilities are typically limited.  It is therefore somewhat of a paradox that in this particular area of z/OS USS, the typical UNIX exposures are not highlighted in the typical Mainframe security audit process…

One must draw one’s own conclusions as to the merits of engaging 3rd party Mainframe security specialists to perform such an audit, coinciding with this z/OS 2.1 migration activity, safeguarding that OMVS security and processes are as good and secure as they can be.  Put another way, why wouldn’t a Mainframe organization go that extra mile to safeguard their most valuable of assets, namely business critical data, engaging a 3rd party specialist to review and provide guidance on this subject matter?

Is The Mainframe A Good Repository For Enterprise Wide User Passwords?

The subject matter of creating and maintaining passwords is arguably infinite and for the purposes of this article, we will provide a concise review…

In an ideal world, strong multiple factor authentication techniques would be deployed for every user authentication access attempt, including:

  • Biometrics – Unique measurable attribute (E.g. Voice, Fingerprint, Retina, et al)
  • Tokens – A physical device (E.g. Smart Card, One Time Password, et al)
  • User Secret – Something you know (E.g. Password, Phrase, PIN, et al)

Obviously the more authentication techniques used in combination, the stronger the authentication process becomes!

Primarily due to cost and complexity, passwords remain the most pervasive form of user authentication.  This simple fact in itself exposes the human being as the primary vulnerability in safeguarding access to business systems.

However, passwords are simply words, phrases or strings of characters that can be easily remembered by the user.  As such, passwords can be compromised in numerous scenarios, for example:

  • Hardcopy – The written word; users write them down and/or share them with others.
  • Cracking – Passwords can be guessed; typically a simple program designed to try many possibilities in rapid succession.  Simple passwords might be guessed by another human being.
  • Unsecure Transmission – Passwords, no matter how complex, are transmitted over an unsecure network in a simplistic (E.g. text) form, or with basic encoding, which can be easily converted to text.
  • Inappropriate Storage – Passwords are stored on a server, fixed or removable media storage, in a simplistic (E.g. text) form, or with basic encoding, which can be easily converted to text.

These potential vulnerabilities generate possibilities for somebody to obtain a password and subsequently access a business system as the user associated with their password.  The potential consequences are obvious, depending on the importance of the user…

However, if password systems are implemented to deny malicious attacks, preventing inspection or decryption of passwords being transmitted over the network or at rest on fixed or removable storage media, passwords can be very secure.  Therefore a combination of technology and good practice is required, safeguarding that compliant and up-to-date systems are deployed, while educating users not to become the point of vulnerability by allowing others to easily access their passwords.

There might be some urban myths as to whether the IBM Mainframe is a good platform for enterprise wide password management, for example:

  • Sniffing For Mainframe Passwords (This scenario depends on the lack of an SSL infrastructure)
  • CRACF (This Mainframe password cracking utility identifies simple user/password/group vulnerabilities)

Both of these scenarios are examples of where “reverse engineering” thinking is good practice.  So let’s pose as a potential hacker and see if we can obtain a user and associated password.  These scenarios highlight the combined requirement of deploying a secure environment and safeguarding that users don’t, and indeed are not allowed to, create simplistic (low strength) passwords.

Ultimately password strength is governed by password length and the associated combination of characters, including alphanumeric, upper/lower case, special characters, et al.  There are also some other urban myths regarding the IBM Mainframe, concerning the maximum password length (E.g. 8 Characters) and the type of character supported (E.g. only alphanumeric uppercase).  For many years, RACF has supported the password phrase extension to the password rules, increasing password length to 100 characters:

  • Maximum length: 100 characters
  • Minimum length: 9 characters, when ICHPWX11 is present and allows the new value or 14 characters, when ICHPWX11 is not present
  • The user ID (as sequential upper case characters or sequential lower case characters) is not part of the password phrase
  • At least 2 alphabetic characters are specified (A – Z, a – z)
  • At least 2 non-alphabetic characters are specified (I.E. numeric, punctuation, special characters, blanks)
  • No more than 2 consecutive characters are identical
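As an illustrative sketch only, the rules above can be expressed in a few lines of Python.  The function name and parameters are invented, and this approximates rather than reproduces RACF’s actual validation logic:

```python
import re

def valid_password_phrase(phrase: str, userid: str, ichpwx11: bool = True) -> bool:
    """Approximation of the password phrase rules listed above.
    Illustrative only; not RACF's actual validation logic."""
    min_len = 9 if ichpwx11 else 14   # the ICHPWX11 exit permits shorter phrases
    if not (min_len <= len(phrase) <= 100):
        return False
    # The user ID (as upper case or lower case characters) must not appear
    if userid.upper() in phrase or userid.lower() in phrase:
        return False
    # At least 2 alphabetic and at least 2 non-alphabetic characters
    if sum(c.isalpha() for c in phrase) < 2:
        return False
    if sum(not c.isalpha() for c in phrase) < 2:
        return False
    # No more than 2 consecutive identical characters
    if re.search(r"(.)\1\1", phrase):
        return False
    return True
```

For example, a 23-character mixed phrase passes, while a phrase containing the user ID or a run of three identical characters fails.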

The use of high strength passwords is required because although human beings might give up after trying tens or maybe hundreds of password guesses, automated programs can achieve millions of password access attempts in a second.
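As a rough back-of-the-envelope illustration, under the deliberately simplified assumption of an exhaustive keyspace search at one billion guesses per second:

```python
# Worst-case exhaustive-search time for a password, under the simplified
# assumption of one billion guesses per second against the full keyspace.
GUESSES_PER_SECOND = 1_000_000_000

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    return alphabet_size ** length / GUESSES_PER_SECOND

# Legacy 8-character upper-case alphanumeric password (36 symbols):
legacy = worst_case_seconds(36, 8)    # under an hour at this guess rate

# 14-character password phrase drawing on ~70 symbols
# (mixed case, digits, punctuation, blanks):
phrase = worst_case_seconds(70, 14)   # on the order of billions of years
```

The comparison is crude, but it shows why the password phrase extension changes the economics of brute-force attack so dramatically.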

There will always be a debate as to whether Single Sign On (SSO) or password synchronization is the best solution for maintaining password integrity and both solutions have their merits.  Once again, a multiple authentication factor solution increases the security strength of either solution.

Passwords are most vulnerable when they’re forgotten and intervention is required to reinstate the password.  Traditionally password resets were performed by an IT Support resource (human being) and this human interaction process generates what are termed “social engineering” challenges.  Let’s explore a typical scenario, while considering any exposure and circumvention techniques:

Password Reset: IT Support Process

  • User has forgotten or mistyped their password (log-in denial/intruder alert)
  • User contacts IT support function (might encounter a no response or queue waiting scenario)
  • IT support asks user for credentials (E.g. name, department, et al)
  • IT Support authenticates this information with some on-line resource/authenticates user
  • IT support resets password or not, depending on whether user is “manually” authenticated
  • User might be prompted to immediately change their password on first successful log-in attempt

The security weaknesses associated with this process are numerous and prone to human error, for example:

Obvious Security Weaknesses: Business Exposure

  • IT Support forgets to authenticate the user
  • On-line resources for authenticating the user are not available
  • User credentials are widely available and so “social engineering” exposes the system
  • Password reset authority is granted to many non-IT personnel, for work productivity reasons
  • Password reset activity is not tracked and so is not auditable, accountable or traceable
  • IT support now knows the user password

Having identified the potential simplistic vulnerabilities, we implement processes to eradicate them, for example:

Implementing Controls

  • IT support training to safeguard user authentication occurs for each and every password reset request
  • Safeguard sufficient and secure user authentication information is available to IT support personnel
  • Implement a password reset solution/process (E.g. software) to eliminate password reset activity by non-IT personnel (I.E. for non-standard scenarios)
  • Implement a self-service solution (E.g. software) that allows the user to change their passwords, based on previously supplied “security challenge” questions and answers
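The self-service approach in the final bullet can be sketched as follows.  This is a hypothetical illustration (all names are invented, and a production solution is far more involved), storing only salted hashes of the challenge answers so that support staff never see them:

```python
import hashlib
import hmac
import secrets

def hash_answer(answer: str, salt: bytes) -> bytes:
    # Normalize (trim, lower-case) then hash, so answers are never stored in clear
    return hashlib.pbkdf2_hmac("sha256", answer.strip().lower().encode(), salt, 100_000)

class ChallengeStore:
    """Toy store of per-user security questions with salted, hashed answers."""

    def __init__(self):
        self._store = {}  # userid -> list of (question, salt, answer_hash)

    def register(self, userid, qa_pairs):
        entries = []
        for question, answer in qa_pairs:
            salt = secrets.token_bytes(16)   # fresh salt per answer
            entries.append((question, salt, hash_answer(answer, salt)))
        self._store[userid] = entries

    def verify(self, userid, answers) -> bool:
        entries = self._store.get(userid, [])
        if not entries or len(entries) != len(answers):
            return False
        # Constant-time comparison of every supplied answer against its hash
        return all(
            hmac.compare_digest(stored, hash_answer(answer, salt))
            for (_, salt, stored), answer in zip(entries, answers)
        )
```

Because only salted hashes are retained, a compromised store does not directly reveal the answers, and no human intervenes in the reset path.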

Where user authentication depends on a password, eliminating “human” intervention touch points wherever possible is mandatory, minimizing the opportunity for “social engineering” techniques to compromise security.  We have also identified that the IBM Mainframe does offer a secure environment for retaining passwords with ultra-high-strength security and that as always, the IBM Mainframe remains difficult to hack…

There are many software products to assist password reset scenarios, some that are platform specific and some that don’t support the IBM Mainframe.  For those customers with an IBM Mainframe, Vanguard PasswordReset is an enterprise wide self-help password reset solution.

Vanguard PasswordReset addresses the common problem of forgotten or expired passwords, allowing authorized users to quickly and securely change their passwords at any time without help desk intervention.

Easy to install and use, Vanguard PasswordReset does not require any software on user workstations or any additional hardware, with a rigorous set of checks and balances to ensure that only authorized users can initiate password reset requests.

Users register with the Vanguard PasswordReset website by typing a series of questions and answers or answering a set of predefined questions.  When users want to change their passwords, they log on to the Vanguard PasswordReset website, type the answers to the questions and reset their passwords.  For increased security, Vanguard PasswordReset allows system administrators to set the number of questions that must be answered and other characteristics of the answers.

A self-service password solution such as Vanguard PasswordReset delivers the following benefits:

  • Eliminates lost productivity when users are unable to access computer applications.
  • Provides improved help-desk productivity by allowing support staff to concentrate on solving other issues rather than time-consuming password resets.
  • Enhances enterprise security by standardizing password reset activities and eliminating human error.
  • Reduces IT support costs by automating costly password resetting activities.
  • Helps retain customers by making it easier for them to access extranet and e-business environments.
  • Virtually eliminates actual or hidden costs associated with installing, administering, maintaining and retiring thin-client software on user work stations.

In conclusion, maintaining passwords for user authentication purposes is a complex, costly and all-encompassing activity.  Eradicating human intervention and touch points wherever possible, minimizes the impact of “social engineering” attacks, while deploying highly secure software solutions further increases the integrity of the primary access method to mission-critical business data, namely user access via authentication.

Data Destruction: Is Your IBM Mainframe Mission Critical Data Always Safe?

Data loss incidents expose businesses and their partners, customers and employees to a plethora of risks and associated problems.  Typically, opportunistic, unauthorized or rogue access to sensitive, personal, confidential and Mission Critical data all too often results in identity theft and competitive business challenges, naming but a few, adversely impacting many areas, including but not limited to:

  • Business reputation and perception
  • Monetary loss via noncompliance penalties and associated litigation
  • Media coverage
  • Personal consumer credit ratings

Unless businesses implement proactive processes to secure data from creation to destruction, vis-à-vis cradle to grave, data loss challenges might ensue.  In fact, millions of individuals are impacted by data loss every year, as criminals increase their sophistication for gaining unauthorized information access.  The increasing dependence on technology and the potential associated collateral damage risk will continue to grow exponentially.  Thus today there is no such thing as the low-risk organization or low-risk personal information and so it follows that business trustworthiness and data security should be a primary concern.

The full and complete destruction and thus secure erasure of data is a mandatory requirement of both Business and Government regulations, in addition to those policies deployed by each and every business.  Regulatory compliance examples include the EU Data Protection Directive, Payment Card Industry Data Security Standard, Sarbanes-Oxley Act, supplemented by many other compliance mandates, encompassing The UK, Europe, The USA and indeed globally.  There are many occasions when data destruction is required, for example:

  • When disks move to another location for reuse or interim storage
  • When a lease agreement matures and disks are returned to the vendor or sold onto the 2nd user market
  • Following a Disaster Recovery (DR) test, where 3rd party disk and tape devices are used for testing purposes
  • The reuse of disk or tape by a different company group
  • Before discarding and thus scrapping disks and tapes that are to leave the Data Centre

Specifically the Payment Card Industry Data Security Standard states:

9.10.2 Render cardholder data on electronic media unrecoverable so that cardholder data cannot be reconstructed; Verify that cardholder data on electronic media is rendered unrecoverable via a secure wipe program in accordance with industry-accepted standards for secure deletion, or otherwise physically destroying the media (for example, degaussing).

Similarly, NIST Special Publication 800-88 (Guidelines for Media Sanitization) states:

Clear; One method to sanitize media is to use software or hardware products to overwrite storage space on the media with non-sensitive data.  This process may include overwriting not only the logical storage location of a file(s) (e.g., file allocation table) but also may include all addressable locations.  The security goal of the overwriting process is to replace written data with random data.  Overwriting cannot be used for media that are damaged or not rewriteable.  The media type and size may also influence whether overwriting is a suitable sanitization method [SP 800-36: Media Sanitizing].

These data erasure (destruction, cleaning, clearing, wiping) methods would be a pre-cursor to supplementary actions such as purging, destroying and disposal of storage media and devices.

Clearly the IBM Mainframe environment is no different to any other; the requirement to safeguard that data is always secure is of paramount importance, while being a mandatory requirement.  Each and every Mainframe data centre will from time-to-time complete some or all of the following activities:

  • Replace tape media (E.g. upgrade, damage, replacement activity, et al)
  • Replace disk subsystems (E.g. upgrade, end of lease, replacement activity, et al)
  • Disaster Recovery test (E.g. utilize 3rd party Data Centre for data restoration)

The major consideration to safeguard data security in these instances is when the data and or related storage media is moved off-site, outside of the primary Data Centre infrastructure.  So maybe we should ask ourselves, why is it when we send data electronically, we safeguard the data with encryption, but when the data or related storage media is physically moved outside of the Data Centre, we don’t necessarily apply the same high levels of data security?

There are z/OS guidelines provided for erasing disk data, primarily relating to use of the ICKDSF TRKFMT function with the ERASEDATA and CYCLES parameters.  It is generally accepted that this process is slow and does not erase data to the exacting standard required by regulatory compliance requirements.  Similarly, DFSMSrmm users can use the EDGINERS ERASE function to erase data and even shred encryption keys for cartridge volumes created by high-function cartridge subsystem drives (I.E. TS1120, TS1130), but once again, this process might be considered slow and limited to those users deploying the DFSMSrmm subsystem, when other Tape Management Subsystems are widely deployed (E.g. AutoMedia/ZARA, CA-1, CA-Dynam/TLMS, CONTROL-T, et al).

There are other options available from the ISV market that have been specifically developed to erase data securely, including FDRERASE or SAEerase for disk data and FATS/FATAR for tape data, but wouldn’t it be useful if there was one software product that could erase both disk and tape data for IBM Mainframe environments?

Unlike other competitive solutions that are specialized for one particular storage media, either disk or tape, XTINCT performs a secure data erase for both disk and tape data.  XTINCT meets all the requirements of US Department of Defense 5220.22-M (Clearing and Sanitization Matrix for Clearing Magnetic Disk) by overwriting all addressable locations with a single character.  XTINCT also meets the sanitization requirement by overwriting all addressable locations with a character, its complement, then a random character, followed by final verification.  XTINCT meets the requirements of most users by overwriting the tape using the high-speed data security erase patterns.  It should be noted that, for tapes, the DoD only considers degaussing or pulverizing the tape to be a valid erase!
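In the spirit of the overwrite pattern just described (a character, its complement, then random data, followed by verification), here is a file-level Python sketch.  It is illustrative only: a real secure-erase product must operate at the device level, beneath any file system or storage-controller remapping, for the overwrite to be meaningful:

```python
import os
import secrets

def overwrite_file(path: str) -> None:
    """File-level sketch of a multi-pass overwrite: a fixed character,
    its complement, then random data, with a final verification read.
    Illustrative only; not a substitute for a device-level erase tool."""
    size = os.path.getsize(path)
    passes = [b"\x55" * size, b"\xaa" * size, secrets.token_bytes(size)]
    with open(path, "r+b") as f:
        for pattern in passes:
            f.seek(0)
            f.write(pattern)
            f.flush()
            os.fsync(f.fileno())   # push each pass through to the device
        f.seek(0)
        if f.read() != passes[-1]:  # final verification pass
            raise RuntimeError("verification failed: data not overwritten")
```

After running, the original content is no longer recoverable from the file's addressable locations, which is the "Clear" goal described in the NIST guidance quoted above.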

In addition to providing a complete audit trail and comprehensive reports to satisfy regulators, XTINCT surpasses NIST guidelines for cleaning and purging data.  XTINCT also satisfies all federal and international requirements including Sarbanes-Oxley Act, HIPAA, HSPD-12, Basel II, Gramm-Leach-Bliley and other data security and privacy laws.

From a resource efficiency viewpoint, XTINCT is re-entrant and fully supports sub-tasking.  Multiple volumes can be processed asynchronously, whereas other tools, like ICKDSF, run serially.  XTINCT makes extensive use of channel programs, so many functions operate at peak efficiency, using only enough CPU time to generate the channel programs, with the rest of the operation being carried out by the channel subsystem.  Consequently, XTINCT does not consume excessive amounts of valuable CPU time.

The method chosen to safeguard that Mainframe disk and tape data is securely erased, destroyed, cleaned, purged, and so on, is somewhat arbitrary; what matters is that the process is actually deployed, primarily from a business viewpoint, protecting valuable consumer data in all circumstances, regardless of the mandatory regulatory security requirements.  Ultimately the Mainframe Data Centre just needs to make a minor enhancement to the data management lifecycle model, guaranteeing data security in all circumstances, in this instance, when physical data media (I.E. disks, tapes) move outside of the primary Data Centre location.

What is my RACF technical policy? Could it be NIST DISA STIG based?

In the late 1980’s I was lucky enough to work at a Mainframe site that was performing early testing for MVS/ESA and DFP Version 3, namely DFSMS. So what, you say! The main ethos of DFSMS was in fact System Managed Storage, the ability to define policies to manage data, with the system (DFSMS) implementing those policies. Up until this point, there was no easy way of controlling data set allocation and managing storage space.

Conversely, the three mainstream Mainframe security subsystems, in no particular and so alphabetical order, ACF2, RACF (Security Server) and Top Secret have always had the ability to define a security policy and for said policy to be processed as and when the associated resource was accessed. So why is it that so many security risk registers are full of “things to do” from a security viewpoint? Where did it all go wrong?

In the late 1980’s, Guide and SHARE, in Europe anyway, were separate entities, and these organizations had some influence with IBM regarding direction for various IBM Mainframe technologies. Not much has changed as of today, but the organizations have merged and for Europe, now we have GSE. From a DFSMS viewpoint, there was a significant amount of user input regarding how DFSMS might be shaped, and the “System Managed Storage” ethos. I wonder whether such user input, or indeed focussed collaboration from Mainframe security gurus might help with the RACF or Mainframe security technical policy challenge?

Having worked with multiple IT security focussed organizations (E.g. NIST, DoD) over the last few decades, Vanguard Integrity Professionals has been actively involved in creating and evolving NIST DISA STIG Checklists. These checklists (currently 300+ checks and growing steadily) provide a comprehensive grounding for z/OS RACF policy checking, and seemingly are gaining momentum in being accepted as a good starting point to assist organizations define and monitor their z/OS RACF (ACF2 & Top Secret in the near future) policy.

These DISA STIG checklists contain step-by-step instructions for customer usage to ensure secure, efficient, and cost-effective information security that is fully-compliant with recognized security and therefore Mainframe security standards. Being fully compliant with DoD DISA STIG for IBM z/OS Mainframes, these checklists provide organizations with the necessary procedures for conducting a Security Readiness Review (SRR) prior to, or as part of, a formal security audit.

To increase automation and thus reduce cost, Vanguard has optimized the DISA STIG checking process with their Configuration Manager solution. Configuration Manager can perform Intrusion Detection DISA STIG checks and report findings in just a few hours instead of the hundreds or thousands of hours it may take using standard methods. Potentially, Configuration Manager enables organizations to easily evolve from continuous monitoring to periodic compliance reporting.
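The general shape of such automated policy checking can be sketched as follows.  The check identifiers and setting names are invented for illustration and bear no relation to the actual DISA STIG checks or any vendor's implementation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class StigCheck:
    check_id: str        # hypothetical identifiers, not real STIG numbers
    description: str
    test: Callable[[Dict], bool]   # evaluates a snapshot of extracted settings

def run_checks(settings: Dict, checks: List[StigCheck]) -> List[Tuple[str, str]]:
    """Evaluate each check against a snapshot of extracted security settings."""
    return [(c.check_id, "PASS" if c.test(settings) else "FAIL") for c in checks]

# Two illustrative checks, modelled loosely on the kind of items a STIG covers:
CHECKS = [
    StigCheck("ZUSS-UID0", "No interactive user holds UID(0)",
              lambda s: not s.get("uid0_users")),
    StigCheck("RACF-PWINT", "Password change interval is 90 days or fewer",
              lambda s: s.get("password_interval", 9999) <= 90),
]

findings = run_checks({"uid0_users": ["BADADMIN"], "password_interval": 60}, CHECKS)
# findings -> [("ZUSS-UID0", "FAIL"), ("RACF-PWINT", "PASS")]
```

The value of a product like Configuration Manager is precisely that the extraction of settings and the library of checks are maintained for you; the harness above merely shows why automated, repeatable evaluation scales where manual review does not.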

Maintaining tight control over the security audit and compliance process is a critical imperative for today’s enterprises. To comply, enterprises must show that they have implemented procedures to prevent unauthorized users from accessing corporate and personal data. Even if enterprises have the means to efficiently conduct audits, they often lack the tools necessary to prevent policy and compliance violations from reoccurring. As a result, security vulnerabilities remain a constant threat, exposing companies to potential sanctions and eroding the confidence of investors and customers.

As a result, the process of meeting compliance standards such as those found in the Combined Code issued by the London Stock Exchange (LSE) and the Turnbull Guidance (the Sarbanes-Oxley equivalent for publicly traded companies in the UK), the Data Protection Act 1998 (and, for the public sector, the Freedom of Information Act 2000), the regulations promulgated by the Financial Services Authority (FSA) (the FSA has oversight over the various entities that make up the financial services industry), standards set by Basel II, the Privacy and Electronic Communications Regulations of 2003, the HMG (UK Government) Security Policy Framework, the Payment Card Industry Data Security Standard (PCI DSS) and various UK criminal and civil laws, represents one of IT’s most critical investments.

As a consequence, managing security in the Mainframe environment is becoming an increasingly difficult task as the list of challenges grows longer every day. Even the most experienced Security Administrators can labour under the workload as security systems increase in size and networks grow in density.

So where does the Mainframe Security Administrator start to make sense of how to achieve security compliance for their particular business?

Although there are many security and compliance regulatory requirements with supporting policy frameworks, none of these high-level mandates actually drill-down to the technical level and thus provide RACF or equivalent Mainframe (ACF2, Top Secret) policy guidelines!

There are synergies between various global organizations that define security standards. This is certainly true for NIST and ISO/IEC. Page viii of the NIST SP-800-53 policy states:

NIST is also working with public and private sector entities to establish specific mappings and relationships between the security standards and guidelines developed by NIST and the International Organization for Standardization and International Electrotechnical Commission (ISO/IEC) 27001, Information Security Management System (ISMS).

Furthermore, the seemingly ubiquitous Annex A of ISO 27001 is also cross-referenced by this NIST SP-800-53 policy, and the various controls and monitoring points are cross-referenced, where largely, the requirements of the NIST standard are mapped in the ISO/IEC standard, and vice versa. Please refer to Appendix H of the NIST SP-800-53 policy, which cross-references NIST SP-800-53 with ISO/IEC 27001 (Annex A) controls.

Put simply, one must draw one’s own conclusions as to the robustness of NIST SP-800-53 vs. ISO/IEC 27001, but both security standards seem to have commonality as robust and acceptable security standards. So maybe Mainframe users all over the world can define and deploy a generic and robust baseline Mainframe technical security policy, vis-à-vis, the NIST DISA STIG checklists…

We must also recognize the IBM Health Checker for z/OS, which also has the ability to perform automated RACF policy checks. This facility includes some ready-to-go checks for standard system services, in conjunction with a facility that allows the user to define their own policy checking rules. Without doubt, the IBM Health Checker for z/OS is a worthy resource that should be leveraged from, but for RACF, if each Mainframe user defines their own policy checking rules, maybe there is the possibility for a significant duplication of effort. For the avoidance of doubt, although RACF resource naming standards might be unique to each and every Mainframe user, there is a commonality of ISV (E.g. ASG, BMC, CA, Compuware, IBM, SAS, et al) software subsystems and products they deploy. If only we could all benefit from previous lessons learned by “standing on the shoulders of giants”!

Perhaps the realm of opportunity exists. There are many prominent Mainframe security giants actively involved today, including the authors of ACF2, Vanguard Software, Consul (Tivoli zSecure), naming but a few. Is it possible that there could be one common standard that might be used as a technical policy template, based upon the ubiquitous 80/20 rule? So deploying this baseline would deliver 80% of the work required for 20% of the effort, where the unique Mainframe customer just customizes this policy as per the resource naming standards in their Mainframe Data Centre? Equally the user has the ability to contribute to this template, perhaps using a niche software product not commonly used that requires security policy checks, where said software product is deployed in maybe tens of Mainframe customers globally.

Vanguard has clearly put a lot of effort into evolving the DISA STIG resource for the Mainframe, and IBM has its RACF Health Checker, but what about one overseeing independent organization, which could benefit from the experience of Mainframe security specialists and, moreover, real-life field experience from Mainframe users globally, implementing and refining these standards? Wasn’t that the essence and spirit of Guide and SHARE several decades ago, listening to Mainframe users and evolving Mainframe technology accordingly? Of course, SHARE in the USA, Guide Share in Europe, and IUGC and APUGC in the APAC region still perform this function admirably, but with the NIST DISA STIG resource, we already have a great baseline to leverage.

What is the size and shape of this potential task? Ideally, we would identify each z/OS software product that has specific interaction with the security subsystem (I.E. ACF2, RACF, Top Secret), typically via resource profiles. For a z/OS software product to be developed, the ISV will have interacted with IBM, initially via the PartnerWorld resource for product development, and eventually via the IBM Global Solutions Directory from a Marketing viewpoint. As of Q4 2012, the IBM Global Solutions Directory contained ~1,800 ISVs with z/OS based software products.

However, recognizing there are already good security resource checklist templates in existence, vis-à-vis the solid foundation primarily provided by Vanguard via the DISA STIG checklists, the best organizations to add to these DISA STIG checklists are the ISVs themselves. The ISV has the most knowledge about its product, having written the code and supporting documentation for security-related controls; so a modicum of effort from each ISV whose products perform security resource checking seems the best way forward.

In 2003, IBM launched its Mainframe Charter initiative, demonstrating commitment to the Mainframe platform and adopting nine principles organized under the pillars of innovation, value, and community. Although this was an IBM initiative, wouldn’t it be great for Mainframe ISVs to proactively be part of this global Mainframe community, assisting their Mainframe customers in simplifying the implementation and monitoring of their Mainframe technical security policy? Not every ISV will have software products with specific security resource interaction, and therefore not every product in an ISV software portfolio will require a security checklist. The work per software product to create a template might only be several hours, so could the ISV produce these checklists as part of its day-to-day customer support activities?

Is it possible that globally, we can all participate and collaborate in a focussed, Mainframe security centric group, to define a technical policy template that will assist all Mainframe customers in satisfying regulatory compliance mandates? No one of us is as good as all of us…