
The Little Mainframe That Could

Aug 23, 2013   //   by admin   //   Blog

RFG Perspective: The just-launched IBM Corp. zEnterprise BC12 servers are very competitive mainframes that should be attractive to organizations with revenues in excess of, or expanding to, $100 million. The entry-level mainframes, which replace the last generation's z114 series, can consolidate up to 40 virtual servers per core, or up to 520 in a single footprint, for as low as $1.00 per day per virtual server. RFG projects that the zBC12 ecosystem could be up to 50 percent less expensive than comparable all-x86 distributed environments. IT executives running Java or Linux applications or eager to eliminate duplicative shared-nothing databases should evaluate the zBC12 ecosystem to see if the platform can best meet business and technology requirements.

Contrary to public opinion (and that of competitive hardware vendors), the mainframe is not dead, nor is it dying. In the last 12 months the zEnterprise mainframe servers have extended their growth streak to a tenth straight year, according to IBM. The installed MIPS (millions of instructions per second) base jumped 23 percent year-over-year and revenues rose 10 percent. There have been 210 new accounts since the zEnterprise launch, as well as 195 zBX units shipped. More than 25 percent of all MIPS are IFLs, specialty engines that run Linux only, and three-fourths of the top 100 zEnterprise customers have IFLs installed. The ISV base continues to grow, with more than 7,400 applications available, and more than 1,000 schools in 67 countries participate in the IBM Academic Initiative for System z. This is not a dying platform but one gaining ground in an overall stagnant server market. The new zBC12 will enable the mainframe platform to grow further and expand into lower-end markets.

zBC12 Basics

The zBC12 is faster than the z114, using a 4.2 GHz 64-bit processor, and has twice the maximum memory of the z114 at 498 GB. The zBC12 can be leased starting at $1,965 a month, depending upon the enterprise's creditworthiness, or it can be purchased starting at $75,000. RFG has done multiple TCO studies on zEnterprise Enterprise Class server ecosystems and estimates the zBC12 ecosystem could be 50 percent less expensive than x86 distributed environments of equivalent computing power.
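To put the consolidation economics in perspective, below is a minimal back-of-the-envelope sketch in Python. The $14,000 fully loaded monthly ecosystem cost is purely an assumed figure for illustration (the only prices quoted above are the $1,965/month entry lease and the $75,000 purchase price); substitute real hardware, software, storage and labor costs before drawing conclusions.

    # Rough per-virtual-server cost model for a consolidated footprint.
    # The monthly ecosystem cost below is an assumption, not an IBM price.
    def cost_per_vm_per_day(monthly_ecosystem_cost, virtual_servers, days_per_month=30):
        """Daily cost attributable to one virtual server."""
        return monthly_ecosystem_cost / virtual_servers / days_per_month

    # Example: a hypothetical $14,000/month fully loaded zBC12 ecosystem spread
    # over 520 virtual servers works out to roughly $0.90 per virtual server per day.
    print(round(cost_per_vm_per_day(14_000, 520), 2))  # -> 0.9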

On the analytics side, the zBC12 offers the IBM DB2 Analytics Accelerator that IBM says offers significantly faster performance for workloads such as Cognos and SPSS analytics. The zBC12 also attaches to Netezza and PureData for Analytics appliances for integrated, real-time operational analytics.

Cloud, Linux and Other Plays

On the cloud front, IBM is a key contributor to OpenStack, an open and scalable operating system for building private and public clouds. OpenStack was initially developed by Rackspace Hosting and NASA, and currently has a community of more than 190 companies supporting it, including Dell Inc., Hewlett-Packard Co. (HP), IBM, and Red Hat Inc. IBM has also contributed APIs for its z/VM hypervisor and operating system for use with OpenStack. By using this framework, public cloud service providers and organizations building out their own private clouds can benefit from zEnterprise advantages such as availability, reliability, scalability, security and cost.
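For readers who want to see what the OpenStack integration means in practice, here is a minimal provisioning sketch using the openstacksdk Python client; it is a sketch under stated assumptions, not IBM's implementation. The cloud name, image, flavor and network are placeholders, and the z/VM-specific plumbing (the compute driver and any s390x images) is assumed to be configured by the provider.

    # Minimal OpenStack provisioning sketch using the openstacksdk client.
    # All names below are placeholders; whether the instance lands on x86 or on a
    # z/VM-backed hypervisor depends on the provider's (assumed) Nova configuration.
    import openstack

    conn = openstack.connect(cloud="example-zvm-cloud")     # entry in clouds.yaml (assumed)

    image = conn.compute.find_image("example-linux-s390x")  # hypothetical Linux-on-z image
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    server = conn.compute.create_server(
        name="demo-linux-on-z",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)  # ACTIVE once the instance is up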

As stated above, Linux now accounts for more than 25 percent of all System z workloads, which can run on zEnterprise systems with IFLs or on a Linux-only system. The standalone Enterprise Linux Server (ELS) uses the z/VM virtualization hypervisor and has more than 3,000 tested Linux applications available. IBM provides a number of specially priced zEnterprise Solution Editions, including the Cloud-Ready for Linux on System z, which turns the mainframe into an Infrastructure-as-a-Service (IaaS) platform. Additionally, the zBC12 comes with EAL5+ security certification, which provides one of the highest levels of protection available on a commercial server.

For mid-market companies, the zBC12 is an ideal candidate to act as the primary data-serving platform. RFG believes organizations will save up to 50 percent of their IT ecosystem costs if the mainframe handles all the data serving, since it provides a shared-everything data storage environment. Distributed computing platforms are designed for shared-nothing data storage, which means duplicate databases must be created for each application running in parallel. Thus, if there are a dozen applications using the customer database, then there are 12 copies of the customer file in use simultaneously, and these must be kept in sync as closely as possible. The costs for all the additional storage and administration can make the distributed solution more costly than the zBC12 for companies with revenues in excess of $100 million. IT executives can architect the systems as ELS only, or with a mainframe central processor, IFLs, and a zBX for Microsoft Corp. Windows applications, depending on configuration needs.
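A back-of-the-envelope sketch makes the duplicate-database arithmetic concrete; the database size, copy count and per-TB cost below are illustrative assumptions, not RFG study figures.

    # Shared-everything vs. shared-nothing storage cost, illustrative figures only.
    def annual_storage_cost(db_size_tb, copies, cost_per_tb_year):
        """Annual cost of holding `copies` full copies of the database."""
        return db_size_tb * copies * cost_per_tb_year

    DB_SIZE_TB = 5            # assumed customer database size
    COST_PER_TB_YEAR = 3_000  # assumed fully loaded cost per TB per year (storage + admin)

    shared_everything = annual_storage_cost(DB_SIZE_TB, copies=1, cost_per_tb_year=COST_PER_TB_YEAR)
    shared_nothing = annual_storage_cost(DB_SIZE_TB, copies=12, cost_per_tb_year=COST_PER_TB_YEAR)

    print(f"shared-everything: ${shared_everything:,.0f}/year")  # $15,000/year
    print(f"shared-nothing:    ${shared_nothing:,.0f}/year")     # $180,000/year
    # ...and the shared-nothing figure ignores the cost of keeping 12 copies in sync.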

Summary

The mainframe myths have misled business and IT executives into believing mainframes are expensive and outdated, and led to higher data center costs and sub-optimization for mid-market and larger companies. With the new zEnterprise BC12 IBM has an effective server platform that can counter the myths and provide IT executives with a solution that will help companies contain costs, become more competitive, and assist with a transformation to a consumption-based usage model.

RFG POV: Each server platform is architected to execute certain types of application workloads well. The BC12 is an excellent server solution for applications requiring high availability, reliability, resiliency, scalability, and security. The mainframe handles mixed workloads well, is best of breed at data serving, and can excel in cross-platform management and performance using its IFLs and zBX processors. IT executives should consider the BC12 when evaluating platform choices for analytics, data serving, packaged enterprise applications such as CRM and ERP systems, and Web serving environments.

HP Cloud Services, Cloud Pricing and SLAs

Jan 9, 2013   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

Hewlett-Packard Co. (HP) announced that HP Cloud Compute became generally available in December 2012, while the HP Cloud Block Storage service entered beta at the same time. HP claims its Cloud Compute has an industry-leading availability service level agreement (SLA) of 99.95 percent. Amazon.com Inc.'s S3 and Microsoft Corp.'s Windows Azure clouds reduced their storage pricing.

Focal Points:

  • HP announced that HP Cloud Compute moved to general availability on Dec. 5, 2012 and offers a 99.95 percent monthly SLA (a maximum of roughly 22 minutes of downtime per month). The company extended the 50 percent discount on pricing until January. The HP Compute cloud is designed to allow businesses of all sizes to move their production workloads to the cloud. There will be three separate availability zones (AZs) per region. It supports Linux and Windows operating systems and comes in six different instance sizes, with prices starting at $0.04/hour. HP currently supports the Fedora, Debian, CentOS, and Ubuntu Linux distributions, but not Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES). On the Windows side, HP is live with Windows Server 2008 SP2 and R2, while Windows Server 2012 is in the works. There are sites today on the East and West coasts of the U.S., with a European facility due to be operational in 2013. Interestingly, HP built its cloud using ProLiant servers running OpenStack, not CloudSystem servers. Meanwhile, HP's Cloud Block Storage moved to public beta on Dec. 5, 2012; customers will not be charged until January, at which time pricing will be discounted by 50 percent. Users can create custom storage volumes from 1 GB to 2 TB. HP claims high availability for this service as well and says each storage volume is automatically replicated within the same availability zone.
  • Amazon is dropping its S3 storage pricing by approximately 25 percent. The first TB/month goes from $0.125 per GB/month to $0.095 per GB/month, a 24 percent reduction. The price for the next 49 TB falls from $0.110 to $0.080 per GB/month, while the next 450 TB drops from $0.095 to $0.070. This brings Amazon's pricing in line with Google Inc.'s storage pricing. According to an Amazon executive, S3 stores well over a trillion objects and handles 800,000 requests a second. Prices have been cut 23 times since the service was launched in 2006.
  • In reaction to Amazon's move, Microsoft again reduced its Windows Azure storage pricing, by up to 28 percent, to remain competitive. In March 2012 Azure had lowered its storage pricing by 12 percent. Geo-redundant storage, which maintains more than 400 miles of separation between replicas, is the default storage mode.

 Google GB/mo    Google Storage pricing    Amazon GB/mo    Amazon S3 pricing    Azure storage pricing (geo-redundant)    Azure storage pricing (locally redundant)
 First TB        $0.095                    First TB        $0.095               $0.095                                   $0.070
 Next 9 TB       $0.085                    Next 49 TB      $0.080               $0.080                                   $0.065
 Next 90 TB      $0.075
 Next 400 TB     $0.070

(All prices are per GB per month.)

Source: The Register
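Tiered prices like those above only become comparable once they are applied to a concrete capacity. The sketch below uses the post-cut Amazon S3 tiers quoted earlier (first TB at $0.095, next 49 TB at $0.080, next 450 TB at $0.070 per GB/month); the 100 TB workload is an assumed example and 1 TB is treated as 1,000 GB for simplicity.

    # Monthly cost under tiered per-GB pricing. Tiers are the post-cut S3 prices
    # quoted above; the 100 TB workload is an assumed example.
    S3_TIERS = [            # (tier size in GB, $ per GB/month)
        (1_000, 0.095),     # first 1 TB
        (49_000, 0.080),    # next 49 TB
        (450_000, 0.070),   # next 450 TB
    ]

    def monthly_cost(total_gb, tiers):
        cost, remaining = 0.0, total_gb
        for tier_gb, price_per_gb in tiers:
            used = min(remaining, tier_gb)
            cost += used * price_per_gb
            remaining -= used
            if remaining <= 0:
                break
        return cost

    print(f"${monthly_cost(100_000, S3_TIERS):,.2f}/month for 100 TB")  # $7,515.00/month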

RFG POV: HP's Cloud Compute offering for production systems is most notable for its 99.95 percent monthly SLA. Most cloud SLAs are hard to understand, vague, and contain a number of escape clauses for the provider. For example, Amazon's EC2 SLA guarantees 99.95 percent availability of the service within a region over a trailing 365-day period – i.e., downtime is not to exceed roughly 263 minutes (more than four hours) over that year. There is no greater granularity, which means one could encounter a four-hour outage in a month and the vendor would still not violate the SLA. HP's appears to be stricter; however, according to Gartner analyst Lydia Leong in a NetworkWorld article, HP's SLA only applies if customers cannot access any AZs. That means customers have to potentially architect their applications to span three or more AZs, each one imposing additional costs on the business. "Amazon's SLA gives enterprises heartburn. HP had the opportunity to do significantly better here, and hasn't. To me, it's a toss-up which SLA is worse," Leong writes. RFG spoke with HP and found its SLA is much better than portrayed in the article. The SLA, it seems, is poorly written, so Leong's interpretation is reasonable (and matches what Amazon requires). However, to obtain credit HP does not require users to run their application in multiple AZs – just one – but they must minimally try to run the application in another AZ in the region if the customer's instance becomes inaccessible. The HP Cloud Compute is not a perfect match for mission-critical applications, but there are a number of business-critical applications that could take advantage of the HP service. For the record, RFG notes Oracle Corp.'s cloud hosting SLAs are much worse than either Amazon's or HP's. Oracle only offers an SLA of 99.5 percent per calendar month – the equivalent of roughly 216 minutes (3.6 hours) of outage per month, or more than 43 hours per year, NOT including planned downtime and certain other considerations. IT executives should always scrutinize the cloud provider's SLAs and ensure they are acceptable for the service for which they will be used. In RFG's opinion Oracle's SLAs are not acceptable at all and should be renegotiated or the platform should be removed from consideration.

On the cloud storage front, overall prices continue to drop 10 percent or more per year. The greater price decreases are due to the rapid growth of storage (greater than 30 percent per year) and the predominance of newer storage arrays versus older ones. IT executives should be considering these prices as benchmarks and working to keep internal storage costs on a similar declining scale. This will require IT executives to retain storage arrays for four years or less and to employ tiering and thin provisioning. Those IT executives who believe keeping ancient spinning iron on the data center floor is the least-cost option will be unable to remain competitive against cloud offerings, which could impair the trust relationship with business and finance executives.
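Because the same availability percentage implies very different downtime budgets depending on the measurement window, the arithmetic behind the figures above is worth spelling out; a minimal sketch:

    # Allowed downtime implied by an availability SLA over a given window.
    def allowed_downtime_minutes(availability_pct, window_minutes):
        return (1 - availability_pct / 100.0) * window_minutes

    MONTH = 30 * 24 * 60   # 43,200 minutes in a 30-day month
    YEAR = 365 * 24 * 60   # 525,600 minutes in a year

    print(allowed_downtime_minutes(99.95, MONTH))  # ~21.6 min/month (HP, monthly SLA)
    print(allowed_downtime_minutes(99.95, YEAR))   # ~262.8 min/year (Amazon EC2, trailing-year SLA)
    print(allowed_downtime_minutes(99.5, MONTH))   # 216 min/month (Oracle, excluding planned downtime)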

Mainframe Survey – Future is Bright

Jan 9, 2013   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

According to the 2012 BMC Software Inc. survey of mainframe users, the mainframe continues to be their platform of choice due to its superior availability, security, centralized data serving and performance capabilities. It will continue to be a critical business tool, with growth driven by the velocity, volume, and variety of applications and data.

Focal Points:

  • Ninety percent of the 1,243 survey respondents consider the mainframe to be a long-term solution, and 50 percent of all respondents agreed it will attract new workloads. Asia-Pacific users reported the strongest outlook, as 57 percent expect to rely on the mainframe for new workloads. The top three IT priorities for respondents were keeping IT costs down, disaster recovery, and application modernization. The top priority, keeping costs down, was identified by 69 percent of those surveyed, up from 60 percent in 2011. Disaster recovery was unchanged at 34 percent, while application modernization was selected by 30 percent, virtually unchanged as well. Although availability is considered a top benefit of the mainframe, 39 percent of respondents reported an unplanned outage; however, only 10 percent of organizations stated they experienced any impact from an outage. The primary causes of outages were hardware failures (31 percent), system software failure (30 percent), in-house application failure (28 percent), and change process failure (22 percent).
  • Fifty-nine percent of respondents expect MIPS capacity to grow as they modernize and add applications to address business needs. The top four factors for continued investment in the mainframe were the platform's availability advantage (74 percent), security strengths (70 percent), superior centralized data serving (68 percent), and transaction throughput requirements best suited to a mainframe (65 percent). Only 29 percent felt that the costs of migration were too high or that alternative solutions did not have a reasonable return on investment (ROI), up from 26 percent in the previous two years.
  • There remains continued concern about the shortage of skilled mainframe staff. Only about a third of respondents were very concerned about the skills issue, although at least 75 percent of those surveyed expressed some level of concern. The top methods being used to address the skills shortage are internal training (53 percent), hiring experienced staff (40 percent), outsourcing (37 percent), and automation (29 percent). Additionally, more than half of the respondents stated the mainframe must be incorporated into the enterprise management processes. Enterprises are recognizing the growing complexity of the hybrid data center and the need for simple, cross-platform solutions.

RFG POV: Some things never change – mainframes are still predominant in certain sectors and will continue to be so over the visible horizon, and yet the staffing challenges linger. Twenty years after mainframes were declared dinosaurs, they remain valuable platforms and continue to grow. In fact, mainframes can be the best choice for certain applications and data serving, as they effectively and efficiently deal with the variety, velocity, veracity, volume, and vulnerability of applications and data while reducing complexity and cost. RFG's latest study on System z as the lowest-cost database server (http://lnkd.in/ajiUrY) shows the use of the mainframe can cut the costs of IT operations by around 50 percent. However, with Baby Boomers becoming eligible for retirement, there is a greater concern and need for IT executives to utilize more automated, self-learning software and to implement better recruitment, training and outsourcing programs. IT executives should evaluate mainframes as the target server platform for clouds, secure data serving, and other environments where zEnterprise's heterogeneous server ecosystem can be used to share data from a single source and optimize capacity and performance at low cost.

Blog: Green Data Centers an Oxymoron

Nov 30, 2012   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

The New York Times published "Power, Pollution and the Internet," an article on the dark side of data centers. The report, which was the result of a yearlong investigation, highlights the facts related to the environmental waste and inefficiencies that can be found in the vast majority of data centers around the world. RFG does not contest the facts as presented in the article, but the Times failed to fully recognize all the causes that led to today's environment and the use of poor processes and practices. Therefore, the problem can only be partially fixed – cloud computing notwithstanding – until there is a true transformation in culture and mindset.

New York Times Article

The New York Times enumerated the following energy-related facts about data centers:

  • Most data centers, by design, consume vast amounts of energy
  • Online companies run their facilities 24x7 at maximum capacity regardless of demand
  • Data centers waste 90 percent or more of the electricity they consume
  • Worldwide digital warehouses use about 30 billion watts of electricity; the U.S. accounts for 25 to 33 percent of the load
  • McKinsey & Company found that, on average, servers use only six to 12 percent of the electricity they consume to perform real work; the rest of the time the servers are idle or in standby mode
  • International Data Corp. (IDC) estimates there are now more than three million data centers of varying sizes worldwide
  • U.S. data centers used about 76 billion kWh in 2010, or roughly two percent of all electricity used in the country that year, according to a study by Jonathan G. Koomey.
  • A study by Viridity Software Inc. found in one case that, of 333 servers monitored, more than half were "comatose" – i.e., plugged in, using energy, but doing little if any work. Overall, the company found nearly 75 percent of all servers sampled had a utilization of less than 10 percent.
  • IT's low utilization "original sin" was the result of relying on software operating systems that crashed too much. Therefore, each system seldom ran more than one application and was always left on.
  • McKinsey's 2012 study currently finds servers run at six to 12 percent utilization, only slightly better than the 2008 results. Gartner Group also finds the typical utilization rates to be in the seven to 12 percent range.
  • In a typical data center, when all power losses are included – infrastructure and IT systems – and combined with the low utilization rates, the energy wasted can be as much as 30 times the amount of electricity used for data processing (a simple model of how this multiplier can arise is sketched after this list).
  • In contrast the National Energy Research Scientific Computing Center (NERSC), which uses server clusters and mainframes at the Lawrence Berkeley National Laboratory (LBNL), ran at 96.4 percent utilization in July.
  • Data centers must have spare capacity and backup so that they can handle traffic surges and provide high levels of availability. IT staff get bonuses for 99.999 percent availability, not for savings on the electric bill, according to an official at the Electric Power Research Institute.
  • In the Virginia area data centers now consume 500 million watts of electricity, and projections are that this will grow to one billion watts over the next five years.
  • Some believe the use of clouds and virtualization may be a solution to this problem; however, other experts disagree.
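As flagged in the list above, a simple way to see how a 30x multiplier can arise is to combine facility overhead (PUE, the ratio of total facility power to IT power) with server utilization. The PUE and utilization values below are illustrative assumptions, not figures reported in the article, and the model simplifies by assuming servers draw roughly the same power whether busy or idle.

    # Facility energy consumed per unit of energy spent on useful computing work.
    # Simplified model: assumes server power draw is roughly constant regardless of load.
    def energy_multiplier(pue, utilization):
        return pue / utilization

    print(energy_multiplier(pue=1.8, utilization=0.06))  # 30.0x - typical facility, 6% utilization
    print(energy_multiplier(pue=1.1, utilization=0.90))  # ~1.2x - efficient facility, 90% utilization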

Facts, Trends and Missed Opportunities

There are two headliners in the article that are buried deep within the text. The "original sin" was not, as stated, the reliance on buggy software; the issue is much deeper than that, and it was a critical inflection point. To prove the point, the author notes that NERSC achieved a utilization rate of 96.4 percent in July with mainframes and server clusters. Hence, the real story is that mainframes are a more energy-efficient solution, and the default option of putting workloads on distributed servers is not a best practice from a sustainability perspective.

In the 1990s the client server providers and their supporters convinced business and IT executives that the mainframe was dead and that the better solution was the client server generation of distributed processing. The theory was that hardware is cheap but people costs are expensive and therefore, the development productivity gains outweighed the operational flaws within the distributed environment. The mantra was unrelenting over the decade of the 90s and the myth took hold. Over time the story evolved to include the current x86-architected server environment and its operating systems. But now it is turning out that the theory – never verified factually – is falling apart and the quick reference to the 96.4 percent utilization achieved by using mainframes and clusters exposes the myth.

Let's take the key NY Times talking points individually.

  • Data centers do and will consume vast amounts of energy but the curve is bending downward
  • Companies are beginning to learn not to run their facilities at maximum capacity regardless of demand. This change is relatively new and there is a long way to go.
  • Newer technologies – hardware, software and cloud – will enable data centers to reduce waste to less than 20 percent. The average data center today spends more than half of its power consumption on non-IT infrastructure. This can be reduced drastically. Moreover, as NERSC shows, it is possible to drive utilization to greater than 90 percent.
  • The multiple data points that found the average server utilization to be in the six to 12 percent range demonstrate the poor utilization enterprises are getting from Unix and Intel servers. Where virtualization has been employed, utilization rates are up but still remain less than 30 percent on average. On the other hand, mainframes tend to operate at the 80 to 100 percent utilization level. Moreover, mainframes allow for shared data whereas distributed systems utilize a shared-nothing data model. This means more copies of data on more storage devices, which means more energy consumption and less efficient processes.
  • Comatose servers are a distributed processing phenomenon, mostly with Intel servers. Asset management of the huge server farms created by the use of low-cost, single application, scale-out hardware is problematic. The complexity caused by the need for orchestration of the farms has hindered management from effectively managing the data center complex. New tools are constantly coming on board but change is occurring faster than the tools can be applied. As long as massive single-application server farms exist, the problem will remain.
  • Power overhead can be reduced from as much as 30 times the electricity used for data processing to less than 1.5 times.
  • The NERSC utilization achievement would not be possible without mainframes.
  • Over the next five years enterprises will learn how to reduce the spare capacity and backup capabilities of their data centers and rely upon cloud services to handle traffic surges and some of their backup/disaster recovery needs.
  • Most data center staffs are not measured on power usage as most shops do not allocate those costs to the IT budget. Energy consumption is usually charged to facilities departments.
  • If many of the above steps occur, plus the use of other processes such as the lease-refresh-scale-up delivery model (vs. the buy-hold-scale-out model) and the standardized operations platform model (vs. the development selected platform model) – the combinations are summarized in the matrix below – then the energy growth curve will be greatly abated, and data centers could potentially end up using less power over time.

Model philosophies vs. platform models:

  • Buy-hold-scale-out with development selected platforms: most expensive
  • Buy-hold-scale-out with operations standard platforms (cloud): greater standardization and reduced platform sprawl, but more underutilized systems
  • Lease-refresh-scale-up with development selected platforms: greater technical currency, but with platform islands and sprawl
  • Lease-refresh-scale-up with operations standard platforms (cloud): least cost

  • Clouds and virtualization will be one solution to the problem, but more is needed, as discussed above.

RFG POV: The mainframe myths have persisted too long and have led to greater complexity, higher data center costs, inefficiencies, and sub-optimization. RFG studies have found that had enterprises kept their data on the mainframe while applications were shifted to other platforms, companies would be far better off than they are today. Savings of up to 50 percent are possible. With future environments evolving to processing and storage nodes connected over multiple networks, it is logical to use zEnterprise solutions to simplify the data environment. IT executives should consider mainframe-architected solutions as one of their targeted environments as well as an approach to private clouds. Moreover, IT executives should discuss the shift to a lease-refresh-scale-up approach with their financial peers to see if and how it might work in their shops.