
The Little Mainframe That Could

Aug 23, 2013   //   by admin   //   Blog

RFG Perspective: The just-launched IBM Corp. zEnterprise BC12 servers are very competitive mainframes that should be attractive to organizations with revenues in excess of, or expanding toward, $100 million. The entry-level mainframes, which replace the last generation's z114 series, can consolidate up to 40 virtual servers per core, or up to 520 in a single footprint, for as little as $1.00 per day per virtual server. RFG projects that the zBC12 ecosystem could be up to 50 percent less expensive than comparable all-x86 distributed environments. IT executives running Java or Linux applications, or eager to eliminate duplicative shared-nothing databases, should evaluate the zBC12 ecosystem to see if the platform can best meet business and technology requirements.
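
To put those consolidation ratios in perspective, here is a back-of-envelope sketch of the arithmetic using the figures cited above (40 virtual servers per core, 520 per footprint, $1.00 per day per virtual server). The 300-VM workload count is a hypothetical example; real consolidation ratios depend on workload size and mix.

```python
# Back-of-envelope consolidation math using the ratios cited above
# (up to 40 virtual servers per core, up to 520 per zBC12 footprint,
# as low as $1.00 per day per virtual server). The 300-VM workload
# count below is purely hypothetical.
VMS_PER_CORE = 40
VMS_PER_FOOTPRINT = 520
COST_PER_VM_PER_DAY = 1.00  # USD, IBM's "as low as" figure

def consolidation_estimate(num_vms: int) -> dict:
    """Estimate cores, footprints, and yearly cost for a VM count."""
    cores = -(-num_vms // VMS_PER_CORE)            # ceiling division
    footprints = -(-num_vms // VMS_PER_FOOTPRINT)  # ceiling division
    yearly_cost = num_vms * COST_PER_VM_PER_DAY * 365
    return {"cores": cores, "footprints": footprints,
            "yearly_cost_usd": yearly_cost}

print(consolidation_estimate(300))
# {'cores': 8, 'footprints': 1, 'yearly_cost_usd': 109500.0}
```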

Contrary to public opinion (and that of competitive hardware vendors), the mainframe is not dead, nor is it dying. Over the last 12 months the zEnterprise mainframe servers extended their growth streak to a tenth straight year, according to IBM. The installed base of MIPS (millions of instructions per second) jumped 23 percent year over year and revenues rose 10 percent. There have been 210 new accounts since the zEnterprise launch, as well as 195 zBX units shipped. More than 25 percent of all installed MIPS run on IFLs, specialty engines that run Linux only, and three-fourths of the top 100 zEnterprise customers have IFLs installed. The ISV base continues to grow, with more than 7,400 applications available, and more than 1,000 schools in 67 countries participate in the IBM Academic Initiative for System z. This is not a dying platform but one gaining ground in an overall stagnant server market. The new zBC12 will enable the mainframe platform to grow further and expand into lower-end markets.

zBC12 Basics

The zBC12 is faster than the z114, using a 4.2 GHz 64-bit processor, and has twice the z114's maximum memory at 498 GB. The zBC12 can be leased starting at $1,965 a month, depending upon the enterprise's creditworthiness, or it can be purchased starting at $75,000. RFG has done multiple TCO studies on zEnterprise Enterprise Class server ecosystems and estimates the zBC12 ecosystem could be 50 percent less expensive than x86 distributed environments with equivalent computing power.
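
For readers weighing the entry pricing, a minimal sketch of the lease-versus-purchase arithmetic follows. It deliberately ignores maintenance, software, financing terms, and residual value, so it illustrates the break-even math rather than a full TCO comparison.

```python
# Rough lease-vs-purchase comparison using the entry prices cited above.
# Maintenance, financing terms, software, and residual value are ignored,
# so this is an illustration of the arithmetic, not a TCO model.
LEASE_PER_MONTH = 1_965      # USD, entry lease price
PURCHASE_PRICE = 75_000      # USD, entry purchase price

breakeven_months = PURCHASE_PRICE / LEASE_PER_MONTH
print(f"Lease equals purchase price after ~{breakeven_months:.0f} months")
# Lease equals purchase price after ~38 months
```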

On the analytics side, the zBC12 offers the IBM DB2 Analytics Accelerator that IBM says offers significantly faster performance for workloads such as Cognos and SPSS analytics. The zBC12 also attaches to Netezza and PureData for Analytics appliances for integrated, real-time operational analytics.

Cloud, Linux and Other Plays

On the cloud front, IBM is a key contributor to OpenStack, an open and scalable operating system for private and public clouds. OpenStack was initially developed by RackSpace Holdings and NASA, and currently has a community of more than 190 companies supporting it, including Dell Inc., Hewlett-Packard Co. (HP), IBM, and Red Hat Inc. IBM has also added APIs for its z/VM hypervisor and the z/VM operating system for use with OpenStack. By using this framework, public cloud service providers and organizations building out their own private clouds can benefit from zEnterprise advantages such as availability, reliability, scalability, security and cost.
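
As an illustration of what that OpenStack integration means in practice, the hedged sketch below uses the standard openstacksdk Python client to provision a Linux guest through generic OpenStack APIs. The cloud entry, image, flavor, and network names are hypothetical placeholders; whether a given z/VM region exposes them depends entirely on the site's OpenStack configuration.

```python
# Hypothetical sketch: provisioning a Linux guest through standard
# OpenStack APIs. The cloud entry ("zvm-cloud") and the image, flavor,
# and network names are placeholders, not values from the article.
import openstack

conn = openstack.connect(cloud="zvm-cloud")    # defined in clouds.yaml

image = conn.image.find_image("sles-s390x")    # Linux guest image
flavor = conn.compute.find_flavor("m1.small")  # CPU/memory shape
network = conn.network.find_network("private") # tenant network

server = conn.compute.create_server(
    name="linux-guest-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # expect "ACTIVE" once the guest is provisioned
```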

As stated above, Linux now accounts for more than 25 percent of all System z workloads, which can run on zEnterprise systems with IFLs or on a Linux-only system. The standalone Enterprise Linux Server (ELS) uses the z/VM virtualization hypervisor and has more than 3,000 tested Linux applications available. IBM provides a number of specially priced zEnterprise Solution Editions, including Cloud-Ready for Linux on System z, which turns the mainframe into an Infrastructure-as-a-Service (IaaS) platform. Additionally, the zBC12 comes with EAL5+ security certification, among the highest levels of protection available on a commercial server.
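
Because the same Linux distributions and applications run unchanged on IFLs, the only difference most code ever sees is the reported architecture. The small sketch below, offered purely as an illustration, shows a runtime check.

```python
# Minimal sketch: a portable Linux application can confirm at runtime
# whether it is running on Linux on System z (architecture "s390x")
# or on a distributed x86_64 server; the application code itself
# stays the same on either platform.
import platform

arch = platform.machine()
if arch == "s390x":
    print("Running on Linux on System z (e.g., an IFL or ELS guest)")
elif arch in ("x86_64", "AMD64"):
    print("Running on a distributed x86 server")
else:
    print(f"Running on another architecture: {arch}")
```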

The zBC12 is an ideal candidate to act as the primary data-serving platform for mid-market companies. RFG believes organizations can save up to 50 percent of their IT ecosystem costs if the mainframe handles all the data serving, since it provides a shared-everything data storage environment. Distributed computing platforms are designed for shared-nothing data storage, which means duplicate databases must be created for each application running in parallel. Thus, if there are a dozen applications using the customer database, then there are 12 copies of the customer file in use simultaneously, and these must be kept in sync as well as possible. The costs of the additional storage and administration can make the distributed solution more costly than the zBC12 for companies with revenues in excess of $100 million. IT executives can architect the systems as ELS only, or with a mainframe central processor, IFLs, and a zBX for Microsoft Corp. Windows applications, depending on configuration needs.
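
To make the duplication cost concrete, here is a minimal sketch of the storage arithmetic behind the shared-nothing example above. The database size and cost-per-terabyte figures are hypothetical placeholders, not RFG or IBM numbers, and real savings also depend on synchronization, backup, and administration overhead.

```python
# Illustration of the shared-nothing duplication described above: if a
# dozen applications each keep their own copy of a 2 TB customer
# database, storage (and the energy and administration that go with it)
# scales with the number of copies. The 2 TB size and $/TB figure are
# hypothetical placeholders.
NUM_APPS = 12
DB_SIZE_TB = 2.0              # hypothetical size of the customer database
COST_PER_TB_YEAR = 1_000.0    # hypothetical fully loaded $/TB/year

shared_nothing_tb = NUM_APPS * DB_SIZE_TB   # one copy per application
shared_everything_tb = DB_SIZE_TB           # single shared copy

print(f"Shared-nothing storage:    {shared_nothing_tb:.0f} TB, "
      f"${shared_nothing_tb * COST_PER_TB_YEAR:,.0f}/year")
print(f"Shared-everything storage: {shared_everything_tb:.0f} TB, "
      f"${shared_everything_tb * COST_PER_TB_YEAR:,.0f}/year")
```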

Summary

The mainframe myths have misled business and IT executives into believing mainframes are expensive and outdated, and led to higher data center costs and sub-optimization for mid-market and larger companies. With the new zEnterprise BC12 IBM has an effective server platform that can counter the myths and provide IT executives with a solution that will help companies contain costs, become more competitive, and assist with a transformation to a consumption-based usage model.

RFG POV: Each server platform is architected to execute certain types of application workloads well. The BC12 is an excellent server solution for applications requiring high availability, reliability, resiliency, scalability, and security. The mainframe handles mixed workloads well, is best of breed at data serving, and can excel in cross-platform management and performance using its IFLs and zBX processors. IT executives should consider the BC12 when evaluating platform choices for analytics, data serving, packaged enterprise applications such as CRM and ERP systems, and Web serving environments.

Mainframe Survey – Future is Bright

Jan 9, 2013   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

According to the 2012 BMC Software Inc. survey of mainframe users, the mainframe continues to be their platform of choice due to its superior availability, security, centralized data serving and performance capabilities. It will continue to be a critical business tool that will grow, driven by the velocity, volume, and variety of applications and data.

Focal Points:

  • According to 90 percent of the 1,243 survey respondents the mainframe is considered to be a long-term solution, and 50 percent of all respondents agreed it will attract new workloads. Asia-Pacific users reported the strongest outlook, as 57 percent expect to rely on the mainframe for new workloads. The top three IT priorities for respondents were keeping IT costs down, disaster recovery, and application modernization. The top priority, keeping costs down, was identified by 69 percent of those surveyed, up from 60 percent in 2011. Disaster recovery was unchanged at 34 percent, while application modernization was selected by 30 percent, virtually unchanged as well. Although availability is considered a top benefit of the mainframe, 39 percent of respondents reported an unplanned outage; however, only 10 percent of organizations stated they experienced any impact from an outage. The primary causes of outages were hardware failures (31 percent), system software failure (30 percent), in-house application failure (28 percent), and change process failure (22 percent).
  • 59 percent of respondents expect MIPS capacity to grow as they modernize and add applications to address business needs. The top four factors for continued investment in the mainframe were the platform's availability advantage (74 percent), security strengths (70 percent), superior centralized data serving (68 percent), and transaction throughput requirements best suited to a mainframe (65 percent). Only 29 percent felt that the costs of migration were too high or that alternative solutions did not offer a reasonable return on investment (ROI), up from 26 percent in each of the previous two years.
  • There remains continued concern about the shortage of skilled mainframe staff. Only about a third of respondents were very concerned about the skills issue, although at least 75 percent of those surveyed expressed some level of concern. The top methods being used to address the skills shortage are internal training (53 percent), hiring experienced staff (40 percent), outsourcing (37 percent) and automation (29 percent). Additionally, more than half of the respondents stated the mainframe must be incorporated into the enterprise management processes. Enterprises are recognizing the growing complexity of the hybrid data center and the need for simple, cross-platform solutions.

RFG POV: Some things never change – mainframes are still predominant in certain sectors and will continue to be so over the visible horizon, and yet the staffing challenges linger. Twenty years after mainframes were declared dinosaurs they remain valuable platforms and are still growing. In fact, mainframes can be the best choice for certain applications and data serving, as they effectively and efficiently deal with the variety, velocity, veracity, volume, and vulnerability of applications and data while reducing complexity and cost. RFG's latest study on System z as the lowest-cost database server (http://lnkd.in/ajiUrY) shows the use of the mainframe can cut the costs of IT operations by around 50 percent. However, with Baby Boomers becoming eligible for retirement, there is a greater need for IT executives to utilize more automated, self-learning software and to implement better recruitment, training and outsourcing programs. IT executives should evaluate mainframes as the target server platform for clouds, secure data serving, and other environments where zEnterprise's heterogeneous server ecosystem can be used to share data from a single source and optimize capacity and performance at low cost.

Blog: Green Data Centers an Oxymoron

Nov 30, 2012   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

The New York Times published "Power, Pollution and the Internet," an article on the dark side of data centers. The report, the result of a yearlong investigation, highlights the environmental waste and inefficiencies found in the vast majority of data centers around the world. RFG does not contest the facts as presented in the article, but the Times failed to fully recognize all the causes that led to today's environment and its poor processes and practices. Therefore, the problem can only be partially fixed – cloud computing notwithstanding – until there is a true transformation in culture and mindset.

New York Times Article

The New York Times enumerated the following energy-related facts about data centers:

  • Most data centers, by design, consume vast amounts of energy
  • Online companies run their facilities 24x7 at maximum capacity regardless of demand
  • Data centers waste 90 percent or more of the electricity they consume
  • Worldwide, digital warehouses use about 30 billion watts of power; the U.S. accounts for 25 to 33 percent of the load
  • McKinsey & Company found servers use, on average, only six to 12 percent of the electricity they consume on real work; the rest of the time the servers are idle or in standby mode
  • International Data Corp. (IDC) estimates there are now more than three million data centers of varying sizes worldwide
  • U.S. data centers used about 76 billion kWh in 2010, or roughly two percent of all electricity used in the country that year, according to a study by Jonathan G. Koomey.
  • A study by Viridity Software Inc. found in one case that, of 333 servers monitored, more than half were "comatose" – i.e., plugged in, using energy, but doing little if any work. Overall, the company found nearly 75 percent of all servers sampled had a utilization of less than 10 percent.
  • IT's low-utilization "original sin" was the result of relying on operating systems that crashed too much. Therefore, each system seldom ran more than one application and was always left on.
  • McKinsey's 2012 study finds servers run at six to 12 percent utilization, only slightly better than the 2008 results. Gartner Group also finds typical utilization rates to be in the seven to 12 percent range.
  • In a typical data center, when all power losses are included – infrastructure and IT systems – and combined with the low utilization rates, the energy wasted can be as much as 30 times the amount of electricity used for actual data processing (a sketch after this list works through the arithmetic)
  • In contrast the National Energy Research Scientific Computing Center (NERSCC), which uses server clusters and mainframes at the Lawrence Berkeley National Laboratory (LBNL), ran at 96.4 percent utilization in July.
  • Data centers must have spare capacity and backup so that they can handle traffic surges and provide high levels of availability. IT staff get bonuses for 99.999 percent availability, not for savings on the electric bill, according to an official at the Electric Power Research Institute.
  • In the Virginia area, data centers now consume 500 million watts of electricity, and projections are that this will grow to one billion watts over the next five years
  • Some believe the use of clouds and virtualization may be a solution to this problem; however, other experts disagree.
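
The "as much as 30 times" figure follows directly from the arithmetic of low utilization plus facility overhead, as the hedged sketch below shows. The overhead multipliers are hypothetical PUE-style assumptions, not figures from the article, while the utilization values echo the ranges cited in the list.

```python
# Rough illustration of how low utilization and facility overhead
# multiply the energy consumed per unit of useful work. The overhead
# factors below are hypothetical PUE-style multipliers; the utilization
# values echo the ranges cited in the list above.
def energy_multiplier(utilization: float, overhead_factor: float) -> float:
    """Total energy drawn per unit of energy spent on real work."""
    return overhead_factor / utilization

# Typical distributed server at 6% utilization with 1.8x facility overhead
print(round(energy_multiplier(0.06, 1.8), 1))   # 30.0 (~30x, as in the article)

# Highly utilized mainframe/cluster environment at ~96% utilization
print(round(energy_multiplier(0.96, 1.5), 1))   # 1.6
```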

Facts, Trends and Missed Opportunities

There are two headliners in the article that are buried deep within the text. The "original sin" was not reliance on buggy software, as stated; the issue is much deeper than that, and it marked a critical inflection point. To prove the point, the author states that the NERSCC obtained a utilization rate of 96.4 percent in July with mainframes and server clusters. Hence, the real story is that mainframes are a more energy-efficient solution, and the default option of putting workloads on distributed servers is not a best practice from a sustainability perspective.

In the 1990s the client/server providers and their supporters convinced business and IT executives that the mainframe was dead and that the better solution was the client/server generation of distributed processing. The theory was that hardware is cheap but people are expensive, and therefore the development productivity gains outweighed the operational flaws of the distributed environment. The mantra was unrelenting over the decade of the 1990s and the myth took hold. Over time the story evolved to include the current x86-architected server environment and its operating systems. But now it is turning out that the theory – never verified factually – is falling apart, and the quick reference to the 96.4 percent utilization achieved by using mainframes and clusters exposes the myth.

Let's take the key NY Times talking points individually.

  • Data centers do and will consume vast amounts of energy but the curve is bending downward
  • Companies are beginning to learn not to run their facilities at maximum capacity regardless of demand. This change is relatively new and there is a long way to go.
  • Newer technologies – hardware, software and cloud – will enable data centers to reduce waste to less than 20 percent. The average data center today spends more than half of its power consumption on non-IT infrastructure. This can be reduced drastically. Moreover, as the NERSCC shows, it is possible to drive utilization to greater than 90 percent.
  • The multiple data points that found the average server utilization to be in the six to 12 percent range demonstrated the poor utilization enterprises are getting from Unix and Intel servers. Where virtualization has been employed, the utilization rates are up but they still remain less than 30 percent on average. On the other hand, mainframes tend to operate at the 80 to 100 percent utilization level. Moreover, mainframes allow for shared data whereas distributed systems utilize a shared-nothing data model. This means more copies of data on more storage devices which means more energy consumption and inefficient processes.
  • Comatose servers are a distributed processing phenomenon, mostly with Intel servers. Asset management of the huge server farms created by the use of low-cost, single application, scale-out hardware is problematic. The complexity caused by the need for orchestration of the farms has hindered management from effectively managing the data center complex. New tools are constantly coming on board but change is occurring faster than the tools can be applied. As long as massive single-application server farms exist, the problem will remain.
  • Power losses can be reduced from as much as 30 times the electricity used for data processing to less than 1.5 times
  • The NERSCC utilization achievement would not be possible without mainframes.
  • Over the next five years enterprises will learn how to reduce the spare capacity and backup capabilities of their data centers and rely upon cloud services to handle traffic surges and some of their backup/disaster recovery needs.
  • Most data center staffs are not measured on power usage as most shops do not allocate those costs to the IT budget. Energy consumption is usually charged to facilities departments.
  • If many of the above steps occur, plus the use of other processes such as the lease-refresh-scale-up delivery model (vs. the buy-hold-scale-out model) and the standardized operations platform model (vs. the development-selected platform model), whose trade-offs are summarized below, then the energy growth curve will be greatly abated, and data centers could potentially end up using less power over time.

Model philosophies:

  • Buy-hold-scale-out with development selected platforms: most expensive
  • Buy-hold-scale-out with operations standard platforms (cloud): greater standardization and reduced platform sprawl, but more underutilized systems
  • Lease-refresh-scale-up with development selected platforms: greater technical currency, with platform islands and sprawl
  • Lease-refresh-scale-up with operations standard platforms (cloud): least cost

  •  Clouds and virtualization will be one solution to the problem but more is needed, as discussed above.

RFG POV: The mainframe myths have persisted too long and have led to greater complexity, higher data center costs, inefficiencies, and sub-optimization. RFG studies have found that had enterprises kept their data on the mainframe while applications were shifted to other platforms, companies would be far better off than they are today. Savings of up to 50 percent are possible. With future environments evolving to processing and storage nodes connected over multiple networks, it is logical to use zEnterprise solutions to simplify the data environment. IT executives should consider mainframe-architected solutions as one of their targeted environments as well as an approach to private clouds. Moreover, IT executives should discuss the shift to a lease-refresh-scale-up approach with their financial peers to see if and how it might work in their shops.

RBS Fiasco – A Harbinger of Things to Come?

Jul 14, 2012   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

 

The Royal Bank of Scotland (RBS) group, which includes NatWest and Ulster Bank, recently experienced a massive week-long outage caused by an IT failure. Retail customers were unable to make or receive payments, greatly impairing people's ability to process wages, mortgages, and other transactions and damaging the reputations of both the bank and the people affected. The bank's retail customer account system utilizes CA Inc.'s CA-7 batch scheduling software. What should have been a routine procedure and a straightforward upgrade fix by operations staff was unintentionally converted into a major catastrophe.

The story is that an operator running the end-of-day overnight batch cycle accidentally erased the entire scheduling queue. This error required the re-entry of the entire queue – a complex process requiring an in-depth understanding of the core system's processes and detailed knowledge of legacy software. All of this had to be completed within the overnight batch processing window, which for most firms is tight and leaves little room for error correction and reruns. That proved to be impossible, especially as pent-up demand and payment instructions built up in the queue, causing other RBS services, such as online banking, to be out of service. Eventually RBS had to rerun the previous day's transactions before new ones could be entered into the system. The delays and a backlog of up to 100 million transactions fed upon themselves, extending the outage over multiple days.

RFG notes that many observers pointed the finger at the bank's legacy mainframe systems – both the hardware and software. However, RFG believes this is not the real story. The vast majority of banks run their retail customer account systems using mainframes and legacy software every day and this is a rare event. RBS runs on System z servers, so one cannot claim it is using ancient iron that is outdated.

The real culprits are the bank's processes and personnel management. The multi-year banking crisis that RBS (and others) went through caused the firm to undertake cost-cutting measures over the past few years. IT organizations were not exempt from the staffing actions, and many of the IT jobs were outsourced to a team in India. Reports state that the person responsible for the error was part of this team, but an RBS executive claims otherwise. Outsourced or not, two things are evident: the staff was inexperienced and not adequately trained for the task, and processes and procedures did not exist to quickly identify and correct the problems. The issues here are not technology but people and process.

RFG POV: The RBS business environment is not unique. Because of the financial meltdown that began in 2008, banks, other financial institutions, and enterprises of all types have been forced to slice budgets across multiple years, and IT budgets are no exception. For many companies this cost cutting continues. However, it does not mean that IT is no longer accountable and responsible for its actions – it has a fiduciary responsibility to keep the business running regardless of the disaster. RBS did not properly staff and/or train its operations crews and did not have appropriate procedures in place to prevent such a failure. In many organizations the procedures are not well documented, and smooth operations depend upon the institutional knowledge and skills of senior staff; frequently, when there are cuts, these high-priced administrators and operators are the first to go. IT executives should proceed cautiously when "rightsizing" staff and ensure that key skills and institutional knowledge are not being lost in the process. Documentation tends to be an IT Achilles' heel. IT executives need to ensure all procedures are well documented and tested, and that staff is fully trained on them. As the proverb goes, an ounce of prevention is worth a pound of cure.