Worldwide, many IT systems still run on mainframe platforms that, while highly efficient and fault tolerant, are very costly to operate.
Legacy infrastructure is burdened with ever-increasing workloads and corresponding pressure to
increase its capacity (*). Part of this growth stems from the continuously rising demand on many
mainframe applications as they are opened up to the web and new functionality is added.
Most hardware and software packages in the mainframe world are licensed based on the size of
the machine, measured in MIPS (millions of instructions per second). Analysts' reports put
the total cost per MIPS at between $3,000 and $5,000 per year (**),
including hardware and software costs. Consequently, IT departments everywhere must
squeeze greater operational efficiencies out of their systems or face spiraling costs.
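To make the scale of this arithmetic concrete, a minimal sketch using the per-MIPS figures cited above; the 2,000-MIPS machine size is a hypothetical example, not a figure from this document:

```python
# Rough annual-cost sketch based on the $3,000-$5,000 per-MIPS-per-year range
# cited above. The 2,000-MIPS machine size is illustrative only.

def annual_cost_range(mips, low_per_mips=3000, high_per_mips=5000):
    """Return (low, high) estimated annual cost in dollars for a machine size."""
    return mips * low_per_mips, mips * high_per_mips

low, high = annual_cost_range(2000)
print(f"2,000 MIPS: ${low:,} - ${high:,} per year")
# 2,000 MIPS: $6,000,000 - $10,000,000 per year
```

At this scale, even a few percent of recovered capacity translates into substantial annual savings, which is why ongoing efficiency work pays for itself.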
Performance improvement must be an ongoing process.
Changes to the application, the infrastructure and especially to the data may have unforeseen impacts on performance levels.
Performance may be acceptable most of the time, but because operational costs are often tied to peak-period consumption,
high resource utilization for even short periods can have a significant effect on actual costs. Additionally, even
if batch windows are missed only on rare occasions, or transactions run long only during certain periods,
this is usually unacceptable and forces the addition of resources.
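One common form of peak-tied charging in practice is billing on the maximum of a rolling average over utilization samples; a minimal sketch of that idea (the four-sample window and the utilization data are assumptions for illustration, not figures from this document):

```python
# Sketch: costs driven by the peak of a rolling average rather than by total
# consumption. Window length (4 hourly samples) and data are illustrative only.

def peak_rolling_average(samples, window=4):
    """Maximum average over any contiguous window of `samples`."""
    if len(samples) < window:
        return sum(samples) / len(samples)
    return max(sum(samples[i:i + window]) / window
               for i in range(len(samples) - window + 1))

flat  = [100] * 24                          # steady utilization all day
spiky = [100] * 20 + [400, 400, 400, 400]   # mostly idle, one short spike

print(peak_rolling_average(flat))   # 100.0
print(peak_rolling_average(spiky))  # 400.0 - the short spike sets the bill
```

The two days consume nearly the same total resources, yet the spiky day is billed at four times the peak, which is why short bursts matter so much for cost.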
A significant flaw in current solutions is monitoring's dependence on thresholds, with the status quo accepted as
the baseline. Sub-optimal index structures, poorly written SQL, or SQL mismatched with its indices may be absorbed
by hardware and processing-time allocations and will never appear on
monitoring screens as long as they do not violate an arbitrary threshold. The result is missed opportunities for significant savings.
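To make the point concrete, a toy sketch contrasting threshold alerting with comparison against an efficient baseline; the threshold, baseline, and CPU figures are invented for illustration:

```python
# Toy illustration: a fixed alert threshold vs. comparison to an efficient
# baseline. All numbers are invented; the point is that inefficiency sitting
# below the threshold never raises an alert, yet still wastes capacity.

THRESHOLD_CPU_SEC = 10.0   # arbitrary alerting threshold on query CPU time
EFFICIENT_CPU_SEC = 0.5    # cost of the same query with a proper index

observed_cpu_sec = 6.0     # poorly indexed query, run thousands of times a day

alert_raised = observed_cpu_sec > THRESHOLD_CPU_SEC
wasted_per_run = observed_cpu_sec - EFFICIENT_CPU_SEC

print(f"alert raised: {alert_raised}")          # False - invisible on screens
print(f"wasted CPU per run: {wasted_per_run}s")  # recoverable capacity
```

Threshold monitoring reports nothing here, while a baseline comparison would flag most of the consumption as recoverable waste.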
A final, equally important challenge is expertise. The pool of professionals
who deeply understand legacy systems and their technologies is shrinking rapidly,
while, given all the challenges mentioned above, professional performance optimization demands exactly this depth and experience.
Retaining this deep knowledge and reusing experience collected in other customers' optimization projects are two key remedies that can
help mainframe customers deal with the knowledge drain.