Noting that ~80% of global corporate data is still managed by IBM Mainframes, doesn’t it make sense that processing this mission critical data should remain local, whenever practicable and pragmatic?
Industry analysts estimate that 90%+ of existing IT budget expenditure is consumed by the maintenance of existing applications and their supporting infrastructure. A significant factor is the siloed, duplicated and complex nature of these existing IT environments. Repeating this often unnecessary data duplication and processing for big data implementations will only exacerbate this significant TCO expenditure. It is therefore of primary importance to consider big data from a strategic rather than a purely expedient tactical perspective. Put another way, if big data can be accessed and processed by the incumbent IBM Mainframe environment, why create yet another silo, requiring more servers, storage, software and associated maintenance expenditure?
It is estimated that another ~2.5 exabytes (2.5 quintillion bytes) of data is created each and every day, meaning that ~90% of the electronic data stored today has been created in the last two years alone. This data comes from numerous sources, largely Internet and mobile telephony based, including social media posts, digital pictures and videos, financial transaction records and cell phone data, to name but a few.
Industry analysts also estimate that only ~1% of global data is currently analysed, leaving massive scope for growth in this functional area, namely big data analytics. Such scope invites exponential and arguably uncontrolled growth in the deployment of big data analytics solutions, generating a significant risk that big data projects will lack management oversight and spiral out of control from a cost viewpoint.
It therefore follows that big data initiatives require careful and strategic planning, not only for immediate short-term requirements, but also for future big data projects that can already be anticipated and forecast. Moreover, there needs to be a strategic, scalable, cost efficient and secure infrastructure in place, managing the interrelationships and interdependencies between mission critical data stored on the IBM Mainframe and big data being created by Internet and mobile technologies.
Without such a diligent and structured management framework, IT infrastructure expenditure (TCO) will increase and efficiency will decrease, with the inevitable consequence of siloed environments and duplicated resources, namely servers, software, storage, et al. As always, we must apply the lessons learned from past experiences to avoid these inefficiencies.
Hadoop is seemingly the big data buzzword, being an open source software framework for storing and processing big data in a distributed manner on large clusters of commodity hardware. Ultimately Hadoop delivers two primary functions: massive data storage, via the Hadoop Distributed File System (HDFS), and distributed parallel processing, via MapReduce.
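To give a flavour of the programming model, the sketch below is the canonical MapReduce “word count” example, in the style of the Apache Hadoop documentation: the map phase runs in parallel against HDFS data blocks, and the reduce phase aggregates the results across the cluster. The input and output paths are illustrative only.

```java
// Canonical Hadoop MapReduce "word count" job, illustrating the two primary
// Hadoop functions: HDFS supplies the distributed storage, MapReduce the
// distributed processing. Input/output paths are hypothetical arguments.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: runs in parallel on each HDFS block, emitting (word, 1) pairs.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sums the counts for each word across the cluster.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```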
Ultimately the underlying question remains: can mission critical IBM Mainframe data be “coupled” with big data, typically originating from Internet and mobile platforms, to deliver an integrated single image view of customer and/or product data, for business benefit?
IBM offers an integrated solution, namely the zEnterprise Analytics System (i.e. 9700, 9710), comprising hardware (e.g. z196/zEC12 or z114/zBC12 Server plus DS8870 Disk) and software (e.g. an optimized z/OS software stack), combined with optional services. Primarily, data analytics is delivered by the IBM DB2 Analytics Accelerator solution, incorporating Netezza 1000 product function, allowing for intelligent and rapid data analytics via the DB2 RDBMS. Therefore existing zSeries Mainframe customers can supplement their current IBM Mainframe infrastructure with the IBM DB2 Analytics Accelerator solution, while the realm of possibility exists for a zSeries Mainframe to be deployed for new workloads, via the zEnterprise Analytics System.
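From an application viewpoint, acceleration is largely transparent: DB2 for z/OS routes eligible queries to the Accelerator according to the CURRENT QUERY ACCELERATION special register. The sketch below is a minimal illustration via JDBC; the host name, credentials and SALES table are hypothetical.

```java
// Minimal sketch: routing an analytic DB2 for z/OS query to the IBM DB2
// Analytics Accelerator via JDBC. Connection details, credentials and the
// SALES table are hypothetical; the CURRENT QUERY ACCELERATION special
// register governs whether eligible queries are offloaded.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AcceleratedQuery {
  public static void main(String[] args) throws Exception {
    // IBM Data Server Driver for JDBC (db2jcc4.jar) must be on the classpath.
    String url = "jdbc:db2://zos.example.com:446/DB2LOC"; // hypothetical
    try (Connection con = DriverManager.getConnection(url, "user", "password");
         Statement stmt = con.createStatement()) {

      // Ask DB2 to route eligible queries to the Accelerator, falling back
      // to native DB2 processing if acceleration is not possible.
      stmt.execute("SET CURRENT QUERY ACCELERATION = ENABLE WITH FAILBACK");

      // A typical long-running analytic aggregation over operational data.
      try (ResultSet rs = stmt.executeQuery(
          "SELECT REGION, SUM(AMOUNT) FROM SALES GROUP BY REGION")) {
        while (rs.next()) {
          System.out.println(rs.getString(1) + " : " + rs.getBigDecimal(2));
        }
      }
    }
  }
}
```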
Resource and cost efficiencies are delivered by combining z/OS and Linux on zEnterprise solutions. Data transfer is reduced by keeping data analytics in the same environment as the mission critical source data (i.e. z/OS), using HiperSockets to pass data between the IBM z/OS and Linux on zEnterprise systems. Overall TCO efficiencies are delivered by utilizing lower cost Linux on zEnterprise system resources, where for Sub-Capacity z/OS customers, no software charges will be incurred for the associated CPU processing. The business can therefore leverage existing zEnterprise infrastructure resources, including people and processes, to deploy and support expanding data analytics requirements.
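Because HiperSockets presents itself to both z/OS and Linux on zEnterprise as an ordinary TCP/IP network interface (memory-to-memory data movement that never leaves the machine), no special programming is required; standard socket code suffices, as the minimal sketch below illustrates. The address and port are hypothetical.

```java
// Minimal sketch: sending data from z/OS to a Linux on zEnterprise analytics
// node over HiperSockets. Since HiperSockets appears as a normal TCP/IP
// interface, plain socket code applies; 10.1.1.2:9000 is hypothetical.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class HiperSocketsClient {
  public static void main(String[] args) throws Exception {
    // Hypothetical HiperSockets address of the Linux on zEnterprise guest.
    try (Socket socket = new Socket("10.1.1.2", 9000);
         PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
         BufferedReader in = new BufferedReader(
             new InputStreamReader(socket.getInputStream()))) {
      out.println("record batch from z/OS"); // data never leaves the machine
      System.out.println("analytics node replied: " + in.readLine());
    }
  }
}
```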
zSeries Mainframe big data analytics solutions, whether via the packaged zEnterprise Analytics System or via the IBM DB2 Analytics Accelerator solution, deliver benefits including:
- Optimized I/O Processing: Reducing the complexity and cost of data storage and associated processing by bringing data transformation and analytic processes to the data origin (i.e. the zSeries Mainframe)
- Enterprise Wide Data Availability: Making operational data accessible to many users in a timely and cost efficient manner, without impacting core business processes
- Near Real Time Data Processing: Delivering near real time operational analytics with minimal latency and superior Quality of Service (QoS) attributes (i.e. RAS: Reliability, Availability, Serviceability)
Syncsort also provide their DMX-h ETL solution to integrate IBM Mainframe data with Hadoop technologies. Syncsort DMX-h ETL incorporates a library of Use Case Accelerators to implement common ETL tasks, including Mainframe data access, change data capture (CDC), joins, web log aggregations, et al. It implements a more traditional ETL approach, offloading big data batch workload from the Mainframe to Hadoop platforms, reducing Mainframe MIPS accordingly. Obviously ETL solutions have a long history, typically associated with Business Intelligence, Data Warehouse, et al. One must draw one’s own conclusions as to whether ETL solutions contribute to the complexity and cost of managing mission critical business data…
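To illustrate the kind of work such ETL tooling performs, the sketch below shows one of the most common Mainframe data access tasks: re-encoding fixed-width EBCDIC records as UTF-8 text ready for loading into Hadoop. This is a generic illustration, not Syncsort DMX-h itself; IBM037 is the standard EBCDIC codepage shipped with most JVMs, while the record length and file names are hypothetical.

```java
// Minimal sketch of a common Mainframe ETL step: converting fixed-width
// EBCDIC records into UTF-8 text. The 80-byte record length and the file
// names are hypothetical; IBM037 is a standard EBCDIC charset in most JVMs.
import java.io.BufferedWriter;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EbcdicToUtf8 {
  private static final Charset EBCDIC = Charset.forName("IBM037");
  private static final int RECORD_LENGTH = 80; // hypothetical fixed-width size

  public static void main(String[] args) throws Exception {
    byte[] record = new byte[RECORD_LENGTH];
    try (FileInputStream in = new FileInputStream("mainframe.dat"); // hypothetical
         Writer out = new BufferedWriter(new OutputStreamWriter(
             new FileOutputStream("hadoop_input.txt"), StandardCharsets.UTF_8))) {
      // Read one fixed-width EBCDIC record at a time, re-encode as UTF-8 text.
      while (in.read(record) == RECORD_LENGTH) {
        out.write(new String(record, EBCDIC).trim());
        out.write(System.lineSeparator());
      }
    }
  }
}
```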
From a business viewpoint, big data analytics delivers benefits, including but not limited to:
- Optimized & Faster Decision Making: Performing real time analysis of customer transaction and activity data, feedback (E.g. survey and experience) data, et al, can dramatically reduce customer attrition, maintaining existing customer loyalty, applying these lessons learned for attracting new customers.
- New Products & Services: Customer and associated market research have always provided valuable insight for driving innovation, but these traditional processes are time consuming and error prone. Rapidly analysing real life customer data from Internet and mobile sources delivers an opportunity to offer a new product and/or service, seemingly tailored to each customer’s individual requirements.
- Cost Reduction: Performed well, big data analytics can clearly deliver significant cost reduction for the business, reducing product/service development time, while retaining existing customers and attracting new customers. However, done badly, data analytics could be a significant drain on the IT expenditure budget.
As always, the zSeries Mainframe delivers an integrated, scalable, secure and cost efficient solution for big data initiatives, even Hadoop, typically perceived as a Distributed Systems solution. Without doubt, big data solutions will be implemented by each and every major global company in the short-term, while pragmatic and careful planning will reduce the associated IT implementation and administration cost. With a legacy of several decades or more delivering enterprise wide solutions, arguably seasoned IBM Mainframe personnel are ideally placed to participate in the design and delivery of big data analytics projects!