It wasn’t too long ago that the maximum size of a 3390 DASD volume was ~54 GB (65,520 Cylinders) via the 3390-54. Then with the release of z/OS 1.10, Extended Address Volumes (EAV) were introduced, roughly quadrupling single device capacity to ~223 GB (262,668 Cylinders)! Surely enough storage capacity for anybody?
Of course, we all know that 21st Century data requirements are significant, and so the release of z/OS 1.13 (or z/OS 1.12 with PTFs) has delivered roughly another fourfold increase, with a single device capacity of 1 TB (~1.182 Million Cylinders). However, let’s not forget that data storage requirements can grow by ~20%+ per annum, so I guess it won’t be too long before we see yet another fourfold increase in size, to ~4 TB+…
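For those who like to check the arithmetic, these capacity figures follow directly from the standard 3390 geometry of 15 tracks per cylinder and 56,664 bytes per full track; a minimal sketch only, using the cylinder counts quoted above:

```python
# Rough 3390 capacity arithmetic, assuming the commonly quoted geometry:
# 15 tracks per cylinder and 56,664 bytes per full track.
BYTES_PER_TRACK = 56_664
TRACKS_PER_CYLINDER = 15
BYTES_PER_CYLINDER = BYTES_PER_TRACK * TRACKS_PER_CYLINDER  # 849,960 bytes

def cylinders_to_gb(cylinders: int) -> float:
    """Convert a 3390 cylinder count to decimal gigabytes."""
    return cylinders * BYTES_PER_CYLINDER / 1_000_000_000

print(f"3390-54   : ~{cylinders_to_gb(65_520):6.1f} GB")     # ~55.7 GB (nominally "54 GB")
print(f"EAV (1.10): ~{cylinders_to_gb(262_668):6.1f} GB")    # ~223 GB
print(f"EAV (1.13): ~{cylinders_to_gb(1_182_000):6.1f} GB")  # ~1 TB
```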
EAV implementation relieves disk capacity constraints and allows storage growth without adding more devices. In today’s world of TCO optimization and the pursuit of very short-term ROI, EAV usage can reduce TCO, primarily personnel and environmental (E.g. Power, Cooling, Floor Space) costs. The ability to manage more data with fewer DASD volumes potentially simplifies the Storage Administration process, increasing the number of TB managed by each technician. Typically, additional capacity (EAV) can be added dynamically, increasing DASD volume capacity online via the Dynamic Volume Expansion (DVE) function.
Theoretically (as per current architectural constraints) a 3390 EAV can grow to 225 TB; the realm of possibility exists!
The pros of EAV implementation seem obvious: a significant capacity increase in a single footprint, easy implementation and demonstrable TCO benefits; but is all that glisters always gold?
Learning from history is always a good thing, and if we consider the challenges of adopting the 3390-9/27/54 devices, did we encounter any capacity optimization issues? As a single device increases in size, device occupancy might become a challenge. For example, 90% occupancy of a 54 GB 3390-54 is ~48.6 GB; put another way, ~5.4 GB of installed capacity is never used. Apply the same metric to a 1 TB device and, you guessed it, ~100 GB is installed but never used…
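The scaling is trivial to demonstrate; a throwaway calculation, with the 90% occupancy figure purely illustrative:

```python
# Installed-but-unused capacity implied by a fixed occupancy level as device
# sizes grow. The 90% occupancy figure is purely illustrative.
def unused_gb(device_gb: float, occupancy: float = 0.90) -> float:
    """Capacity (GB) installed but never occupied at the given occupancy."""
    return device_gb * (1.0 - occupancy)

for label, size_gb in [("3390-54", 54), ("EAV 223 GB", 223), ("EAV 1 TB", 1000)]:
    print(f"{label:10}: ~{unused_gb(size_gb):5.1f} GB never used at 90% occupancy")
# 3390-54: ~5.4 GB, EAV 223 GB: ~22.3 GB, EAV 1 TB: ~100.0 GB
```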
“So what”, some might say. Indeed, the separation of the physical and logical device eliminates any physical space utilization considerations, but what about the number of data sets and, more importantly, extents on that EAV or even 3390-54 DASD volume? An issue that has plagued many Mainframe installations is disk fragmentation; no matter how big a DASD volume, successful data set allocation sometimes depends upon sufficient contiguous free extents to satisfy primary allocation or secondary extension.
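The fragmentation problem is easily modelled: total free space can comfortably exceed a requested allocation while the available free extents still cannot satisfy it. A hypothetical sketch only, using a simplified “at most five extents for a primary allocation” rule as an assumption; real DADSM/VTOC processing is considerably more involved:

```python
# Hypothetical model of a fragmented volume. Simplifying assumption: a primary
# allocation may be satisfied by at most five free extents (real z/OS DADSM
# behaviour is more involved - this is illustrative only).
free_extents = [120, 85, 30, 60, 45, 95, 25, 40]  # free extent sizes, in cylinders
primary_request = 450                              # primary allocation, in cylinders

total_free = sum(free_extents)
best_five = sum(sorted(free_extents, reverse=True)[:5])

print(f"Total free space           : {total_free} cylinders")
print(f"Best five extents combined : {best_five} cylinders")

if best_five >= primary_request:
    print("Primary allocation can be satisfied.")
else:
    print(f"Allocation of {primary_request} cylinders fails despite "
          f"{total_free} cylinders being free - the volume is too fragmented.")
```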
At first glance, the process of defragmentation is very simple, via DFSMSdss DEFRAG, FDR/CPK COMPAKTOR, et al, but typically these processes require minimal concurrent data set allocation activity and are batch orientated. DASD enqueue time is also a consideration, as these traditional Mainframe defrag solutions can generate significant enqueue activity for the VTOC and data sets alike. Can the 21st Century business that requires near 24*7 data availability allocate sufficient time (E.g. a minimal processing window) to perform such manual defragmentation activities? If only defragmentation could be transparent, automated and dynamic…
RealTime Defrag (RTD) is one such option, deploying a multi-faceted approach to deliver “on-line defrag” (a toy model of these operations follows the list):
- Release – Release allocated but unused space for all data set types
- Combine – Combine extents, reducing the number of allocated extents for optimized performance and SE37 abend eradication
- Defrag – Reorganize data sets into contiguous groups, increasing size of free extents, optimizing performance and SB37 abend eradication
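Purely to picture what Release and Combine achieve for a single data set, here is a hypothetical toy model, with a data set reduced to a list of (allocated, used) extents; it makes no attempt to mirror RTD’s actual processing:

```python
# Toy model of the Release and Combine operations; purely illustrative, with a
# data set reduced to a list of (allocated_cyls, used_cyls) extents.
from typing import List, Tuple

Extent = Tuple[int, int]  # (allocated cylinders, used cylinders)

def release(extents: List[Extent]) -> List[Extent]:
    """Release: give back allocated but unused space in each extent."""
    return [(used, used) for _, used in extents if used > 0]

def combine(extents: List[Extent]) -> List[Extent]:
    """Combine: merge a data set's extents into a single extent."""
    total = sum(used for _, used in extents)
    return [(total, total)] if total else []

dataset = [(100, 60), (50, 50), (80, 0), (40, 10)]  # over-allocated and fragmented
print("Original :", dataset)                         # 270 cyls allocated, 120 used
print("Released :", release(dataset))                # unused space returned
print("Combined :", combine(release(dataset)))       # one extent of 120 cylinders
# "Defrag" would then place such single-extent data sets contiguously, leaving
# the volume's remaining free space in large contiguous extents.
```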
In conclusion, EAV deployment can only be a good thing, delivering demonstrable TCO benefits in the form of dramatic single-footprint (I.E. Disk Subsystem) capacity increases. RealTime Defrag can also increase service availability, eradicating the requirement for manual and batch orientated defrag activities, while ensuring that installed disk capacity is optimized, EAV or not.