
Service Delivery to Business Enablement: Data Center Edition

Apr 9, 2013  //  by admin  //  Blog

Lead Analyst: Adam Braunstein

I have never been a fan of alarmist claims. Never have I witnessed the sky falling or the oceans abruptly swallowing masses of land. Nonetheless, we have all seen the air become unsafe to breathe in many parts of the world, and rising water levels are certainly cause for concern. Two points bear remembering: rapid changes do not take place overnight and often require a distanced perspective to recognize, and being paranoid does not mean one is wrong.

Such is the case with the shifts occurring in the data center. Business needs and disruptive technologies are more complex, frequent, and enduring despite their seemingly iterative nature. The gap between the deceptively calm exterior and the true nature of internal data center changes threatens to leave IT executives unable to adapt readily to the seismic shifts taking place beneath the surface. Decisions intended to address long-term needs are typically made using short-term metrics that mask both the underlying movements and the enterprise's need to deal with those changes strategically. Failing to look at these issues as a whole will have a negative cascading effect on enterprise readiness and is akin to France's Maginot Line of defense against Germany in World War II: the fortifications prevented a direct attack, but the tactic ignored other strategic threats, including a Belgium-based attack.

Three-Legged Stool: Business, Technology, and Operations

The line between business and technology has blurred to the point that there is very little difference between the two. The old approach of using technology as a business enabler is no longer valid, as IT no longer simply delivers the required business services. Business needs are now so dependent on technology that planning and execution must share the same game plan, analytic tools, and measurements. Changes in one directly impact the other, and continuous updates to strategic goals and tactical execution must be carefully weighed as the two move forward together. Business enablement is the new name of the game.

With business and technology successes and failures so closely fused, it should be abundantly clear why shared goals and execution strategies are required. The new goalposts for efficient, flexible operations are defined in terms of software-defined data centers (SDDCs). Where disruptive technologies such as automation, consolidation, orchestration, and virtualization were previously the desired end state, SDDCs up the ante by providing logical views of platforms and infrastructure so that services can be spun up, spun down, and changed dynamically without the limitations of physical constraints. While technology provides the underpinnings here, the enablement of dynamic and changing business goals is the required outcome.
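To make the SDDC notion more concrete, here is a minimal sketch assuming a hypothetical controller model: a service declares the logical capacity it wants, and a reconciler adjusts allocations against a pooled, logical view of the infrastructure rather than against specific physical hosts. The names (ServiceSpec, ResourcePool, reconcile) are illustrative assumptions, not any vendor's SDDC API.

```python
# Minimal sketch of the software-defined idea: desired state is declared,
# and a controller reconciles pooled (logical) capacity to match it.
# All names here are hypothetical, not a real SDDC product's API.
from dataclasses import dataclass

@dataclass
class ServiceSpec:
    name: str
    vcpus: int          # desired logical CPUs
    memory_gb: int      # desired logical memory

class ResourcePool:
    """Logical view over physical capacity; callers never see individual hosts."""
    def __init__(self, total_vcpus: int, total_memory_gb: int):
        self.free_vcpus = total_vcpus
        self.free_memory_gb = total_memory_gb
        self.allocations = {}

    def reconcile(self, spec: ServiceSpec) -> None:
        """Grow or shrink a service's allocation to match its declared spec."""
        current = self.allocations.get(spec.name, ServiceSpec(spec.name, 0, 0))
        delta_cpu = spec.vcpus - current.vcpus
        delta_mem = spec.memory_gb - current.memory_gb
        if delta_cpu > self.free_vcpus or delta_mem > self.free_memory_gb:
            raise RuntimeError(f"insufficient pooled capacity for {spec.name}")
        self.free_vcpus -= delta_cpu
        self.free_memory_gb -= delta_mem
        self.allocations[spec.name] = spec

pool = ResourcePool(total_vcpus=512, total_memory_gb=4096)
pool.reconcile(ServiceSpec("order-entry", vcpus=64, memory_gb=256))  # scale up
pool.reconcile(ServiceSpec("order-entry", vcpus=16, memory_gb=64))   # scale back down
```

The point of the sketch is the shape of the interaction: capacity requests are expressed as desired state, and growing or shrinking a service is the same operation against a logical pool.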

Operations practices and employee roles and skills will thus need to adapt rapidly. Metrics like data density, workload types and utilization will remain as baseline indicators, but only as a means to more important measurements of agility, readiness, productivity, opportunity and revenue capture. Old technologies will need to be replaced to empower the necessary change, and those new technologies will need to be turned over at more rapid rates to keep up with the heightened business pace while staying within limited budgets. Budgeting and financial models will also need to follow suit.

The Aligned Business/IT Model of the Future: Asking the Right Questions

The fused business/IT future will need to be based around a holistic, evolving set of metrics that incorporate changing business dynamics, technology trends, and performance requirements. Hardware, software, storage, supporting infrastructure, processes, and people must all be evaluated to deliver the required views within and across data centers and into clouds. Moreover, IT executives should incorporate best-of-breed information from enterprise data centers in both similar and competing industries.

The set of delivered dashboards should provide a macro view of data center operations with both business and IT outlooks and trending. Analysis should provide the following:

  • Benchmark current data center performance with comparative data;
  • Demonstrate opportunities for productivity and cost cutting improvements;
  • Provide insight as to the best and most cost effective ways to align the data center to be less complex, more scalable, and able to meet future business and technology opportunities;
  • Offer facilities to compare different scenarios as customers determine which opportunities best meet their needs (a simplified sketch of such a comparison follows this list).
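As an illustration of the scenario-comparison facility noted in the last bullet, the sketch below scores a few hypothetical data center scenarios against peer benchmark figures for utilization and power cost. The scenario names, figures, and weighting are assumptions made for illustration; they are not drawn from RFG or GreenWay tooling.

```python
# Illustrative scenario comparison for a data center dashboard.
# All scenario figures, benchmark values, and weights are hypothetical.

SCENARIOS = {
    "status quo":          {"avg_utilization": 0.10, "annual_power_cost": 4_000_000},
    "virtualize tier 2":   {"avg_utilization": 0.28, "annual_power_cost": 3_100_000},
    "consolidate + cloud": {"avg_utilization": 0.45, "annual_power_cost": 2_200_000},
}

# Comparative (peer benchmark) data the dashboard would supply.
BENCHMARK = {"avg_utilization": 0.35, "annual_power_cost": 2_500_000}

def score(metrics: dict) -> float:
    """Higher is better: reward utilization above benchmark, reward cost below it."""
    util_gap = metrics["avg_utilization"] - BENCHMARK["avg_utilization"]
    cost_gap = (BENCHMARK["annual_power_cost"] - metrics["annual_power_cost"]) / BENCHMARK["annual_power_cost"]
    return round(0.5 * util_gap + 0.5 * cost_gap, 3)

for name, metrics in sorted(SCENARIOS.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name:>20}: score {score(metrics):+.3f}")
```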

Even though the reality of SDDCs is years away, IT executives must begin the journey now. There are a number of intermediary milestones that must be achieved first, and delays in reaching them will negatively impact the business. Data center analytical tools such as those described above will be needed to help chart the course and monitor progress. (The GreenWay Collaborative develops and provides tools of this nature. RFG initiated and still contributes to this effort.)

RFG POV: IT executives require a three-to-five year outlook that balances technology trends, operational best practices, and business goals. Immediate and long-range needs must be plotted, tracked, and continuously measured so that both near-term and long-term requirements are addressed. While many of these truths are evergreen, it is essential to recognize that the majority of enterprise tools and practices inadequately capture and harmonize the contributing factors. Most enterprise dashboard views evaluate data center performance at a tactical, operational level and identify opportunities for immediate performance improvements. Strategic enterprise dashboard tools tend to build on the data gathered at the tactical level but fail to incorporate evolving strategic business and technology needs. IT executives should add strategic data center optimization planning tools that address those evolving business and technology needs to the mix so that IT can provide the optimum set of services to the business at each milestone.

Blog: Green Data Centers an Oxymoron

Nov 30, 2012  //  by admin  //  Blog

Lead Analyst: Cal Braunstein

The New York Times published "Power, Pollution and the Internet," an article on the dark side of data centers. The report, the result of a yearlong investigation, highlights the environmental waste and inefficiencies found in the vast majority of data centers around the world. RFG does not contest the facts as presented in the article, but the Times failed to fully recognize all the causes that led to today's environment and its poor processes and practices. Therefore, the problem can only be partially fixed – cloud computing notwithstanding – until there is a true transformation in culture and mindset.

New York Times Article

The New York Times enumerated the following energy-related facts about data centers:

  • Most data centers, by design, consume vast amounts of energy
  • Online companies run their facilities 24x7 at maximum capacity regardless of demand
  • Data centers waste 90 percent or more of the electricity they consume
  • Worldwide digital warehouses use about 30 billion watts of electricity; the U.S. accounts for 25 to 33 percent of that load
  • McKinsey & Company found that, on average, servers apply only six to 12 percent of the electricity they consume to real work; the rest of the time the servers are idle or in standby mode
  • International Data Corp. (IDC) estimates there are now more than three million data centers of varying sizes worldwide
  • U.S. data centers used about 76 billion kWh in 2010, or roughly two percent of all electricity used in the country that year, according to a study by Jonathan G. Koomey.
  • A study by Viridity Software Inc. found in one case that, of 333 servers monitored, more than half were "comatose" – i.e., plugged in, using energy, but doing little if any work. Overall, the company found nearly 75 percent of all servers sampled had a utilization of less than 10 percent.
  • IT's low utilization "original sin" was the result of relying on software operating systems that crashed too much. Therefore, each system seldom ran more than one application and was always left on.
  • McKinsey's 2012 study currently finds servers run at six to 12 percent utilization, only slightly better than the 2008 results. Gartner Group also finds the typical utilization rates to be in the seven to 12 percent range.
  • In a typical data center, when all power losses – infrastructure and IT systems – are included and combined with the low utilization rates, the energy wasted can be as much as 30 times the amount of electricity actually used for data processing (a rough arithmetic sketch of this ratio follows this list).
  • In contrast the National Energy Research Scientific Computing Center (NERSCC), which uses server clusters and mainframes at the Lawrence Berkeley National Laboratory (LBNL), ran at 96.4 percent utilization in July.
  • Data centers must have spare capacity and backup so that they can handle traffic surges and provide high levels of availability. IT staff get bonuses for 99.999 percent availability, not for savings on the electric bill, according to an official at the Electric Power Research Institute.
  • In the Virginia area, data centers now consume 500 million watts of electricity, and projections are that this will grow to one billion watts over the next five years.
  • Some believe the use of clouds and virtualization may be a solution to this problem; however, other experts disagree.
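To see how a figure like "30 times the electricity used for data processing" can arise, the short calculation below combines an assumed facility overhead factor (PUE) with a server utilization rate at the low end of the range cited above. Both values are illustrative assumptions, not numbers taken from the Times article.

```python
# Rough arithmetic behind a "waste can be ~30x useful work" style figure.
# PUE and utilization values are illustrative assumptions.
pue = 1.8           # assumed total facility power / IT equipment power
utilization = 0.06  # assumed fraction of server power applied to real work (low end of cited range)

total_per_useful = pue / utilization       # total electricity per unit of useful computing
wasted_per_useful = total_per_useful - 1   # everything beyond the useful unit is overhead or waste

print(f"total draw is ~{total_per_useful:.0f}x the electricity used for real work")
print(f"i.e., roughly {wasted_per_useful:.0f}x is overhead or waste")
```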

Facts, Trends and Missed Opportunities

There are two headliners in the article that are buried deep within the text. The "original sin" was not relying on buggy software, as stated; the issue is much deeper than that, and it was a critical inflection point. And to prove the point, the author states that the NERSCC obtained utilization rates of 96.4 percent in July with mainframes and server clusters. Hence, the real story is that mainframes are a more energy-efficient solution and that the default option of putting workloads on distributed servers is not a best practice from a sustainability perspective.

In the 1990s the client server providers and their supporters convinced business and IT executives that the mainframe was dead and that the better solution was the client server generation of distributed processing. The theory was that hardware is cheap but people costs are expensive and therefore, the development productivity gains outweighed the operational flaws within the distributed environment. The mantra was unrelenting over the decade of the 90s and the myth took hold. Over time the story evolved to include the current x86-architected server environment and its operating systems. But now it is turning out that the theory – never verified factually – is falling apart and the quick reference to the 96.4 percent utilization achieved by using mainframes and clusters exposes the myth.

Let's take the key NY Times talking points individually.

  • Data centers do and will consume vast amounts of energy but the curve is bending downward
  • Companies are beginning to learn not to run their facilities at maximum capacity regardless of demand. This change is relatively new and there is a long way to go.
  • Newer technologies – hardware, software and cloud – will enable data centers to reduce waste to less than 20 percent. The average data center today spends more than half of its power consumption on non-IT infrastructure; this can be reduced drastically. Moreover, as the NERSCC shows, it is possible to drive utilization to greater than 90 percent.
  • The multiple data points that found average server utilization to be in the six to 12 percent range demonstrate the poor utilization enterprises are getting from Unix and Intel servers. Where virtualization has been employed, utilization rates are up, but they still remain less than 30 percent on average. Mainframes, on the other hand, tend to operate at the 80 to 100 percent utilization level. Moreover, mainframes allow for shared data, whereas distributed systems utilize a shared-nothing data model, which means more copies of data on more storage devices and thus more energy consumption and inefficient processes. (A rough comparison of energy per unit of useful work at these utilization levels appears after this list.)
  • Comatose servers are a distributed processing phenomenon, mostly with Intel servers. Asset management of the huge server farms created by the use of low-cost, single application, scale-out hardware is problematic. The complexity caused by the need for orchestration of the farms has hindered management from effectively managing the data center complex. New tools are constantly coming on board but change is occurring faster than the tools can be applied. As long as massive single-application server farms exist, the problem will remain.
  • Power losses can be reduced from as much as 30 times the electricity used for data processing to less than 1.5 times.
  • The NERSCC utilization achievement would not be possible without mainframes.
  • Over the next five years enterprises will learn how to reduce the spare capacity and backup capabilities of their data centers and rely upon cloud services to handle traffic surges and some of their backup/disaster recovery needs.
  • Most data center staffs are not measured on power usage as most shops do not allocate those costs to the IT budget. Energy consumption is usually charged to facilities departments.
  • If many of the above steps occur, along with other practices such as the lease-refresh-scale-up delivery model (vs. the buy-hold-scale-out model) and the standardized operations platform model (vs. the development-selected platform model), then the energy growth curve will be greatly abated, and data centers could potentially end up using less power over time. The trade-offs between these models are summarized below.

Model philosophies by platform approach:

  • Buy-hold-scale-out with operations standard platforms (cloud): greater standardization and reduced platform sprawl, but more underutilized systems
  • Buy-hold-scale-out with development selected platforms: most expensive
  • Lease-refresh-scale-up with operations standard platforms (cloud): least cost
  • Lease-refresh-scale-up with development selected platforms: greater technical currency, but with platform islands and sprawl

  • Clouds and virtualization will be one solution to the problem, but more is needed, as discussed above.
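To make the utilization argument concrete, the sketch below compares facility electricity per unit of useful work for three hypothetical platform profiles at roughly the utilization levels quoted above. The per-capacity power figures and the PUE are illustrative assumptions, not measurements from the article or from RFG research.

```python
# Energy per unit of useful work at different utilization levels (illustrative figures).
# Utilization levels echo the ranges quoted in the text; power draw and PUE are assumed.

PUE = 1.6  # assumed facility overhead factor applied to all profiles

profiles = {
    "distributed x86 (unvirtualized)": {"utilization": 0.10, "watts_per_unit_capacity": 1.0},
    "virtualized x86":                 {"utilization": 0.28, "watts_per_unit_capacity": 1.0},
    "mainframe / clustered":           {"utilization": 0.90, "watts_per_unit_capacity": 1.2},
}

def energy_per_useful_unit(p: dict) -> float:
    """Facility watts consumed per unit of capacity actually doing work."""
    return PUE * p["watts_per_unit_capacity"] / p["utilization"]

baseline = energy_per_useful_unit(profiles["distributed x86 (unvirtualized)"])
for name, p in profiles.items():
    e = energy_per_useful_unit(p)
    print(f"{name:<32} {e:5.2f} W per useful unit  ({e / baseline:.0%} of baseline)")
```

Even with a higher assumed power draw per unit of capacity, the high-utilization profile comes out well ahead in this sketch, which is the crux of the argument above.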

RFG POV: The mainframe myths have persisted too long and have led to greater complexity, higher data center costs, inefficiencies, and sub-optimization. RFG studies have found that had enterprises kept their data on the mainframe while applications were shifted to other platforms, companies would be far better off than they are today. Savings of up to 50 percent are possible. With future environments evolving to processing and storage nodes connected over multiple networks, it is logical to use zEnterprise solutions to simplify the data environment. IT executives should consider mainframe-architected solutions as one of their targeted environments as well as an approach to private clouds. Moreover, IT executives should discuss the shift to a lease-refresh-scale-up approach with their financial peers to see if and how it might work in their shops.