
Predictions: Tech Trends – part 1 – 2014

Jan 20, 2014   //   by admin   //   Blog  //  No Comments

RFG Perspective: The global economic headwinds in 2014, which constrain IT budgets, will force IT executives to question certain basic assumptions and reexamine current and target technology solutions. There are new waves of next-generation technologies emerging and maturing that challenge the status quo and deserve IT executive attention. These technologies will improve business outcomes as well as spark innovation and drive down the cost of IT services and solutions. IT executives will have to work with business executives to fund the next-generation technologies or find self-funding approaches to implementing them. IT executives will also have to provide the leadership needed for properly selecting and implementing cloud solutions, or control will be assumed by business executives who usually lack the appropriate skills for tackling outsourced IT solutions.

As mentioned in the RFG blog "IT and the Global Economy – 2014," the global economic environment may not be as strong as expected, thereby keeping IT budgets contained or shrinking. Therefore, IT executives will need to invest in next-generation technology to contain costs, minimize risks, improve resource utilization, and deliver the desired business outcomes. Below are a few key areas that RFG believes will be the major technology initiatives that will get the most attention.

Tech-driven Business Transformation (Source: RFG)
Analytics – In 2014, look for analytics service and solution providers to boost usability of their products to encompass the average non-technical knowledge worker by moving closer to a "Google-like" search and inquiry experience in order to broaden opportunities and increase market share.

Big Data – Big Data integration services and solutions will grab the spotlight this year as organizations continue to ratchet up the volume, variety and velocity of data while seeking increased visibility, veracity and insight from their Big Data sources.

Cloud – Infrastructure as a Service (IaaS) will continue to dominate as a cloud solution over Platform as a Service (PaaS), although the latter is expected to gain momentum and market share. Nonetheless, Software as a Service (SaaS) will remain the cloud revenue leader with Salesforce.com the dominant player. Amazon Web Services will retain its overall leadership of IaaS/PaaS providers with Google, IBM, and Microsoft Azure holding onto the next set of slots. Rackspace and Oracle have a struggle ahead to gain market share, even as OpenStack (an open cloud architecture) gains momentum.

Cloud Service Providers (CSPs) – CSPs will face stiffer competition and pricing pressures as larger players acquire or build new capabilities. Meanwhile, innovative open-source based solutions enter the new year with momentum as large, influential organizations look to build and share their own private and public cloud standards and APIs to lower infrastructure costs.

Consolidation – Data center consolidation will continue as users move applications and services to the cloud and to standardized internal platforms that are intended to become cloud-like. Advancements in cloud offerings, along with a diminished concern for security (more of a false hope than reality), will lead more small and mid-sized businesses (SMBs) to shift processing to the cloud and operate fewer internal data center sites. Large enterprises will look to utilize clouds and colocation sites for development/test environments and for handling spikes in capacity rather than open or grow in-house sites.

Containerization – Containerization (or modularization) is gaining acceptance by many leading-edge companies, like Google and Microsoft, but overall adoption is slow, as IT executives have yet to figure out how to deal with the technology. It is worth noting that the power usage effectiveness (PUE) of these solutions is excellent and has been known to be as low as 1.05 (whereas the average remains around 1.90).
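
Power usage effectiveness is simply total facility power divided by IT equipment power, so the gap between a containerized module at 1.05 and a conventional site at 1.90 shows up directly as non-IT overhead. Below is a minimal sketch of that arithmetic; the 500 kW IT load is an assumed, illustrative figure.

```python
# PUE = total facility power / IT equipment power.
# Compare the non-IT overhead implied by a containerized module (PUE ~1.05)
# with that of a conventional data center (PUE ~1.90).
# The 500 kW IT load is an assumed, illustrative figure.

def facility_overhead_kw(it_load_kw: float, pue: float) -> float:
    """Power consumed by cooling, power distribution, lighting, etc."""
    return it_load_kw * (pue - 1.0)

it_load_kw = 500.0
for label, pue in (("containerized", 1.05), ("conventional", 1.90)):
    overhead = facility_overhead_kw(it_load_kw, pue)
    print(f"{label}: PUE {pue:.2f} -> {overhead:.0f} kW of non-IT overhead")
# containerized: PUE 1.05 -> 25 kW of non-IT overhead
# conventional: PUE 1.90 -> 450 kW of non-IT overhead
```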

Data center transformation – In order to achieve the required levels of operational efficiency, IT executives will have to increase their commitment to data center transformation. The productivity improvements will come from the shift from standalone vertical stack management to horizontal layer management, relationship management, and the use of cloud technologies. One of the biggest effects of this shift is an actual reduction in operations headcount and a reorientation of skills and talents to the new processes. IT executives should expect the transformation to be at least a three-year process. However, IT operations executives should not expect clear sailing, as development shops will push back to prevent loss of control of their application environments.

3-D printing – 2014 will see the beginning of 3-D printing taking hold. Over time the use of 3-D printing will revolutionize the way companies produce materials and provide support services. Leading-edge companies will be the first to apply the technology this year and thereby gain a competitive advantage.

Energy efficiency/sustainability – While this is not new in 2014, IT executives should be making it a part of other initiatives and a procurement requirement. RFG studies find that energy savings are just the tip of the iceberg (about 10 percent of the savings achievable) when taking advantage of newer technologies. RFG studies also show that the energy savings from retiring hardware kept more than 40 months can usually pay for new, better-utilized equipment. Or, as an Intel study found, servers more than four years old accounted for four percent of the relative performance capacity yet consumed 60 percent of the power.
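
The Intel data point lends itself to a quick back-of-the-envelope comparison of performance per watt. The sketch below walks through that arithmetic; the 100 kW total fleet power is an assumed, illustrative figure.

```python
# Back-of-the-envelope arithmetic behind the Intel finding cited above:
# servers more than four years old delivered ~4 percent of the fleet's
# relative performance capacity while drawing ~60 percent of its power.
# The 100 kW total fleet power is an assumed, illustrative figure.

fleet_power_kw = 100.0
old_perf_share, old_power_share = 0.04, 0.60
new_perf_share, new_power_share = 0.96, 0.40

old_perf_per_kw = old_perf_share / (fleet_power_kw * old_power_share)
new_perf_per_kw = new_perf_share / (fleet_power_kw * new_power_share)

ratio = new_perf_per_kw / old_perf_per_kw
print(f"newer servers deliver roughly {ratio:.0f}x the performance per watt "
      f"of the four-year-old gear")
# newer servers deliver roughly 36x the performance per watt of the four-year-old gear
```

On those illustrative numbers, newer gear delivers roughly 36 times the performance per watt, which is why the 40-month refresh guidance above can pay for itself.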

Hyperscale computing (HPC) – RFG views hyperscale computing as the next wave of computing that will replace the low end of the traditional x86 server market. The space is still in its infancy, with the primary players being Advanced Micro Devices' (AMD's) SeaMicro solutions and Hewlett-Packard's (HP's) Moonshot server line. While penetration will be low in 2014, the value proposition for hyperscale solutions should become evident.

Integrated systems – Integrated systems are a poorly defined computing category that encompasses converged architectures, expert systems, and partially integrated systems as well as expert integrated systems. The major players in this space are Cisco, EMC, Dell, HP, IBM, and Oracle. While these systems have been on the market for more than a year now, revenues are still limited (depending upon whom one talks to, revenues may now exceed $1 billion globally) and adoption is moving slowly. Truly integrated systems do result in productivity, time, and cost savings, and IT executives should be piloting them in 2014 to determine the role and value they can play in corporate data centers.

Internet of things – More and more sensors are being employed and embedded in appliances and other products, which will automate and improve life in IT and in the physical world. From a data center infrastructure management (DCIM) perspective, these sensors will enable IT operations staff to better monitor and manage system capacity and utilization. 2014 will see further advancements and inroads made in this area.

Linux/open source – The trend toward Linux and open source technologies continues with both picking up market share as IT shops find the costs are lower and they no longer need to be dependent upon vendor-provided support. Linux and other open technologies are now accepted because they provide agility, choice, and interoperability. According to a recent survey, a majority of users are now running Linux in their server environments, with more than 40 percent using Linux as either their primary server operating system or as one of their top server platforms. (Microsoft still has the advantage in the x86 platform space and will for some time to come.) OpenStack and the KVM hypervisor will continue to acquire supporting vendors and solutions as players look for solutions that do not lock them into proprietary offerings with limited ways forward. A Red Hat survey of 200 U.S. enterprise decision makers found that internal development of private cloud platforms has left organizations with numerous challenges such as application management, IT management, and resource management. To address these issues, organizations are moving or planning a move to OpenStack for private cloud initiatives, respondents claimed. Additionally, a recent OpenStack user survey indicated that 62 percent of OpenStack deployments use KVM as the hypervisor of choice.

Outsourcing – IT executives will be looking for more ways to improve outsourcing transparency and cost control in 2014. Outsourcers will have to step up to the SLA challenge (mentioned in the People and Process Trends 2014 blog) as well as provide better visibility into change management, incident management, projects, and project management. Correspondingly, with better visibility there will be a shift away from fixed-price engagements to ones with fixed and variable funding pools. Additionally, IT executives will be pushing for more contract flexibility, including payment terms. Application hosting displaced application development in 2013 as the most frequently outsourced function, and 2014 will see the trend continue. The outsourcing of ecommerce operations and disaster recovery will be seen as having strong value propositions when compared to performing the work in-house. However, one cannot assume outsourcing is less expensive than handling the tasks internally.

Software defined x – Software defined networks, storage, data centers, etc. are all the latest hype. The trouble with new technologies of this type is that the initial hype will not match reality. The software defined market is quite immature, and not all the needed functionality will be available in the early releases. Therefore, one can expect 2014 to be a year of disappointments for software defined solutions. However, over the next three to five years the technology will mature and start to become a usable reality.

Storage – Flash, SSD et al – Storage is once again going through revolutionary changes. Flash, solid state drives (SSDs), thin provisioning, tiering, and virtualization are advancing at a rapid pace, as are the densities and power consumption curves. Tier one to tier four storage has been expanded with a number of different tier zero options – from storage inside the computer to PCIe cards to all-flash solutions. 2014 will see more of the same, with adoption of the newer technologies gaining speed. Most data centers are heavily loaded with hard disk drives (HDDs), a good number of which are short stroked. IT executives need to experiment with the myriad storage choices and understand the different rationales for each. RFG expects the tighter integration of storage and servers to begin to take hold in a number of organizations as executives find the closer placement of the two will improve performance at a reasonable cost point.

RFG POV: 2014 will likely be a less daunting year for IT executives, but keeping pace with technology advances will have to be part of any IT strategy if executives hope to achieve their goals for the year and keep their companies competitive. This will require IT to understand the rate of technology change and adopt a data center transformation plan that incorporates the new technologies at the appropriate pace. Additionally, IT executives will need to invest annually in new technologies to help contain costs, minimize risks, and improve resource utilization. IT executives should consider a turnover plan that upgrades (and transforms) a third of the data center each year. IT executives should collaborate with business and financial executives so that IT budgets and plans are integrated with the business and remain so throughout the year.

Disruptive Changes

Apr 25, 2013   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Amazon Inc. and Microsoft Corp. lowered their pricing for certain cloud offerings in attempts to maintain leadership and/or preserve customers. Elsewhere, Hewlett-Packard Co. (HP) launched its next-generation Moonshot hyperscale servers. Meanwhile, IDG Connect, the demand generation division of International Data Group (IDG), released survey findings that show there may be a skills shortage when it comes to the soft skills required when communicating beyond the IT walls.

Focal Points:

  • Earlier this month Amazon reduced the prices it charges for its Windows on-demand servers by up to 26 percent. This brought its pricing within pennies of Microsoft's Windows Azure cloud fees. The price reductions apply across Amazon's standard (m1), second-generation standard (m3), high-memory (m2), and high-CPU (c1) instance families. CEO Jeff Bezos stated in the Amazon annual report that the strategy of cutting prices before the company needs to, and developing technologies before there is a financially motivating factor, is what protects the company from unexpected market shifts. Microsoft has responded by aggressively cutting its prices by 21 to 33 percent for hosting and processing customer online data. In order for customers to qualify for the cuts they must make monthly commitments to Azure for either six or 12 months. Microsoft also is making its Windows Azure Virtual Network technology (codenamed "Brooklyn") generally available effective April 16. Windows Azure Virtual Network is designed to allow companies to extend their networks by enabling secure site-to-site VPN connectivity between the enterprise and the Windows Azure cloud.
  • HP launched its initial Moonshot servers, which use Intel Corp. Atom low-cost, low-energy microprocessors. This next generation of servers is the first wave of hyperscale, software defined server computing models to be offered by HP. These particular servers are designed to be used in dedicated hosting and Web front-end environments. The company stated that two more "leaps" will be out this year that will be targeted to handle other specific workloads. HP claims its architecture can scale 10:1 over existing offerings while providing eight times the efficiency. The Moonshot 1500 uses Intel Atom S1200 microprocessors, utilizes a 4.3U (7.5-inch tall) chassis that hosts 45 "Gemini" server cartridges, and up to 1,800 quad-core servers will fit into a 42U rack. Other x86 chips from Advanced Micro Devices Inc. (AMD), plus ARM processors from Calxeda Inc., Texas Instruments Inc., and Applied Micro Circuits Corp. (AMCC), are also expected to be available in the "Gemini" cartridge form factor. The first Moonshot servers support Linux, but are compatible with Windows, VMware and traditional enterprise applications. Pricing starts at $61,875 for the enclosure, 45 HP ProLiant Moonshot servers and an integrated switch, according to HP officials. (For more on this topic see this week's Research Note "HP's Moonshot – the Launch.")
  • According to a new study by IDG Connect, 83 percent of European respondents believe there is no IT skills shortage, while 93 percent of U.S. respondents definitely feel there is a gap between the technical skills IT staff possess and the skills needed by the respondents' companies. IDG attributes this glaring difference to what are loosely defined as "hard" (true technical skills and competencies) and "soft" (business, behavioral, communications, and interpersonal) skills. The European respondents focused on hard skills while their American counterparts were more concerned about the soft skills, which will become more prevalent within IT as it goes through a transformation to support next-generation data center environments and greater integration with the business. As IT becomes more integrated with the business and operational skill requirements shift, IDG concludes "companies can only be as good as the individuals that work within them. People … are capable of creative leaps of thinking and greatness that surpass all machines. This means that any discussion on IT skills, and any decision on the qualities required for future progression are fundamental to innovation. This is especially true in IT, where the role of the CIO is rapidly expanding within the enterprise and the department as a whole is becoming increasingly important to the entire business. It seems IT is forever teetering on the brink of bigger and better things - and it is up to the people within it to maximize this potential."

RFG POV: IT always exists in a state of disruptive innovation and the next decade will be no different. Whether it is a shift to the cloud, hyperscale computing, software-defined data centers or other technological shifts, IT must be prepared to deal with the business and pricing models that arise. Jeff Bezos is correct not to rest on his laurels and to constantly push the envelope on pricing and services. IT executives need to do the same and deliver comparable services at prices that appeal to the business while covering costs. This requires keeping current on technology and having the staff on board that can solve the business problems and deliver innovative solutions that enable the organization to remain competitive. RFG expects the staffing dilemma to emerge over the next few years as data centers transform to meet the next generation of business and IT needs. At that time most IT staff will not need their current skills but rather skills that allow them to work with the business, providers and others to deliver solutions built on logical platforms (rather than physical infrastructure). Only a few staff will need to know the nuts and bolts of the hardware and physical layouts. This paradigm shift in staff capabilities and skills must be anticipated if IT executives do not want to be caught behind the curve and left to struggle with catching up with demand. IT executives should be developing their next-generation IT development and operations strategies, determining the skills needed and the gaps, and then beginning a career planning and weaning-out process so that IT will be able to provide the leadership and skills needed to support the business over the next decade of disruptive innovation. Additionally, IT executives should determine if Moonshot servers are applicable in their current or target environments, and if so, conduct a pilot when the time is right.

Service Delivery to Business Enablement: Data Center Edition

Apr 9, 2013   //   by admin   //   Blog  //  No Comments

Lead Analyst: Adam Braunstein

I have never been a fan of alarmist claims. Never have I witnessed the sky falling or the oceans abruptly swallowing masses of land. Nonetheless, we have all seen the air become unsafe to breathe in many parts of the world, and rising water levels are certainly cause for concern. When rapid changes occur, those progressions do not take place overnight and often require a distanced perspective. Moreover, being paranoid does not mean one is wrong.

Such is the case with the shifts occurring in the data center. Business needs and disruptive technologies are more complex, frequent, and enduring despite their seemingly iterative nature. The gap between the deceptively calm exterior and the true nature of internal data center changes threatens to leave IT executives unable to readily adapt to the seismic shifts taking place beneath the surface. Decisions made to address long-term needs are typically made using short-term metrics that mask the underlying movements themselves and the enterprise's need to deal strategically with these changes. The failure to look at these issues as a whole will have a negative cascading effect on enterprise readiness in the future and is akin to France's Maginot Line of defense against Germany in World War II. While the fortifications prevented a direct attack, the tactic ignored other strategic threats, including an attack through Belgium.

Three-Legged Stool:  Business, Technology, and Operations

The line between business and technology has blurred such that there is very little difference between the two. The old approach of using technology as a business enabler is no longer valid, as IT no longer simply delivers the required business services. Business needs are now so dependent on technology that planning and execution need to share the same game plan, analytic tools, and measurements. Changes in one directly impact the other, and continuous updates to strategic goals and tactical executions must be carefully weighed as the two move forward together. Business enablement is the new name of the game.

With business and technology successes and failures so closely fused together, it should be abundantly clear why shared goals and execution strategies are required. The new goalposts for efficient, flexible operations are defined in terms of software-defined data centers (SDDCs). Where disruptive technologies including automation, consolidation, orchestration and virtualization were previously the desired end state, SDDCs up the ante by providing logical views of platforms and infrastructures such that services can be spooled up, down, and changed dynamically without the limitations of physical constraints. While technology comprises the underpinnings here, the enablement of dynamic and changing business goals is the required outcome.

Operations practices and employee roles and skills will thus need to rapidly adapt. Metrics like data density, workload types and utilization will remain as baseline indicators but only as a means to more important measurements of agility, readiness, productivity, opportunity and revenue capture. Old technologies will need to be replaced to empower the necessary change, and those new technologies will need to be turned over at more rapid rates to continue to meet the heightened business pace as well as limited budgets. Budgeting and financial models will also need to follow suit.

The Aligned Business/IT Model of the Future: Asking the Right Questions

The fused business/IT future will need to be based around a holistic, evolving set of metrics that incorporate changing business dynamics, technology trends, and performance requirements. Hardware, software, storage, supporting infrastructure, processes, and people must all be evaluated to deliver the required views within and across data centers and into clouds. Moreover, IT executives should incorporate best-of-breed information from enterprise data centers in both similar and competing industries.

The set of delivered dashboards should provide a macro view of data center operations with both business and IT outlooks and trending. Analysis should provide the following (a minimal sketch of such a comparison follows the list):

  • Benchmark current data center performance with comparative data;
  • Demonstrate opportunities for productivity and cost-cutting improvements;
  • Provide insight as to the best and most cost-effective ways to align the data center to be less complex, more scalable, and able to meet future business and technology opportunities;
  • Offer facilities to compare different scenarios as customers determine which opportunities best meet their needs.
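
To make the benchmarking idea concrete, here is a minimal, hypothetical sketch of how a dashboard might flag improvement opportunities against comparative data. The metric names, benchmark values, and the 10 percent threshold are illustrative assumptions, not any vendor's actual schema.

```python
# Hypothetical sketch of a dashboard-style benchmark comparison.
# Metric names, benchmark values, and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DataCenterMetrics:
    pue: float                 # power usage effectiveness (lower is better)
    server_utilization: float  # average utilization, 0.0-1.0 (higher is better)
    cost_per_kwh: float        # blended energy cost in USD (lower is better)

def improvement_opportunities(current: DataCenterMetrics,
                              benchmark: DataCenterMetrics,
                              threshold: float = 0.10) -> list:
    """Flag metrics that trail the comparative benchmark by more than the threshold."""
    gaps = []
    if current.pue > benchmark.pue * (1 + threshold):
        gaps.append(f"PUE {current.pue:.2f} vs benchmark {benchmark.pue:.2f}")
    if current.server_utilization < benchmark.server_utilization * (1 - threshold):
        gaps.append(f"utilization {current.server_utilization:.0%} vs "
                    f"benchmark {benchmark.server_utilization:.0%}")
    if current.cost_per_kwh > benchmark.cost_per_kwh * (1 + threshold):
        gaps.append(f"energy cost ${current.cost_per_kwh:.3f}/kWh vs "
                    f"benchmark ${benchmark.cost_per_kwh:.3f}/kWh")
    return gaps

# Compare one site against comparative (peer) data.
site = DataCenterMetrics(pue=1.9, server_utilization=0.12, cost_per_kwh=0.11)
peers = DataCenterMetrics(pue=1.5, server_utilization=0.45, cost_per_kwh=0.09)
for gap in improvement_opportunities(site, peers):
    print("opportunity:", gap)
```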

Even though the reality of SDDCs is years away, IT executives must be travelling on the journey now. There are a number of intermediary milestones that must be achieved first and delays in reaching them will negatively impact the business. Use of data center analytical tools as described above will be needed to help chart the course and monitor progress. (The GreenWay Collaborative develops and provides tools of this nature. RFG initiated and still contributes to this effort.)

RFG POV: IT executives require a three-to-five year outlook that balances technology trends, operational best practices, and business goals. Immediate and long-range needs must be plotted, tracked, and continuously measured so that both near-term and long-term risks are mitigated. While many of these truths are evergreen, it is essential to recognize that the majority of enterprise tools and practices inadequately capture and harmonize the contributing factors. Most enterprise dashboard views evaluate data center performance at a tactical, operational level and identify opportunities for immediate performance improvements. Strategic enterprise dashboard tools tend to build on the data gathered at the tactical level and fail to incorporate evolving strategic business and technology needs. IT executives should add strategic data center optimization planning tools, which address the evolving business and technology needs, to the mix so that IT can provide the optimum set of services to the business at each milestone.

Blog: Data Center Optimization Planning

Dec 13, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Every organization should be performing a data center optimization planning effort at least annually. The rate of technology change and the exploding requirements for capacity demand that IT shops challenge their assumptions yearly and revisit best practices to see how they can further optimize their operations. Keeping up with storage capacity requirements on flat budgets is a challenge, given that capacity is growing between 20 and 40 percent annually. This phenomenon is occurring across the IT landscape. Thus, if IT executives want to transform their operations from spending 70-80 percent of their budgets on operations to spending more than half of the budget on development and innovation instead, they must invest in planning that enables such change.
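
The compounding is what makes a flat budget so punishing. A simplified sketch of the math follows; the 500 TB starting point is an assumed, illustrative figure, and the calculation ignores the difference between installed capacity and incremental purchases.

```python
# Simplified view of 20-40 percent annual capacity growth against a flat budget.
# The 500 TB starting point is an assumed, illustrative figure.

start_tb = 500.0
years = 3
for growth in (0.20, 0.30, 0.40):
    capacity = start_tb * (1 + growth) ** years
    unit_cost_needed = start_tb / capacity  # fraction of today's cost per TB
    print(f"{growth:.0%} growth: {start_tb:.0f} TB -> {capacity:.0f} TB in {years} years; "
          f"cost per TB must fall to ~{unit_cost_needed:.0%} of today's to stay flat")
```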

Optimization planning needs to cover all areas of the data center:

  • facilities,
  • finance,
  • governance,
  • IT infrastructure and systems,
  • processes, and
  • staffing.

RFG finds most companies are greatly overspending due to the inefficiencies of continuing along non-optimized paths in each of these areas, thereby providing companies with the opportunity to reduce operational expenses by more than 10 percent per year for the next decade. In fact, in some areas more than 20 percent could be shaved off.

Facilities.  At a high level, the three areas that IT executives should understand, evaluate, and monitor are facilities design and engineering, power usage effectiveness (PUE), and temperature. Most data center facilities were designed to handle the equipment of the previous century. Times and technologies have changed significantly since then, and the design and engineering assumptions and actual implementations need to be reevaluated. In a similar vein, the PUE for most data centers is far from optimized, which could mean overpaying energy bills by more than 40 percent. On the "easy to fix" front, companies can raise their data center temperatures to normal room temperature or higher, with temperatures in the 80° F range being possible. Just about all equipment built today is designed to operate at temperatures greater than 100° F. For every degree raised, organizations can expect to see power costs reduced by up to four percent. Additionally, facilities and IT executives can monitor their greenhouse gas (GHG) emissions, which are frequently tracked by chief sustainability officers and can be used as a measure of savings achieved by IT operational efficiency gains.
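
As a rough illustration of the temperature lever, the sketch below applies the up-to-four-percent-per-degree figure to a hypothetical annual power bill; the $1 million bill and the 68°F-to-78°F set-point change are assumed, illustrative figures, and the savings are treated as compounding per degree.

```python
# Rough sketch of the temperature lever: up to ~4 percent power-cost
# reduction per degree (F) the data center set point is raised.
# The $1M annual power bill and the 68F -> 78F change are assumed figures.

annual_power_bill = 1_000_000.0
savings_per_degree = 0.04          # upper bound cited above
degrees_raised = 78 - 68           # raise the set point from 68F to 78F

estimated_bill = annual_power_bill * (1 - savings_per_degree) ** degrees_raised
savings = annual_power_bill - estimated_bill
print(f"raising the set point {degrees_raised}F could trim the bill by about "
      f"${savings:,.0f}, to roughly ${estimated_bill:,.0f}")
```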

Finance.  IT costs can be reduced through use of four key factors: asset management, chargebacks, life cycle management, and procurement. RFG finds many companies are not handling asset management well, which is resulting in an overage of hardware and software being paid for annually. Studies have found this excess cost could easily run up to 20 percent of all expenses for end-user devices. The use of chargebacks better ensures IT costs are aligned with user requirements. This especially comes into play when funding external and internal support services. When it comes to life cycle management, RFG finds too many companies are retaining hardware too long. The optimal life span for servers and storage is 36-40 months. Companies that retain this equipment for longer periods can be driving up their overall costs by more than 20 percent. Moreover, the one area that IT consistently fails to understand and underperforms on is procurement. When proper procurement processes and procedures are not followed and standardized, IT can easily spend 50 percent more on hardware, software and services.

Governance.  The reason governance is a key area of focus is that it assures performance targets are established and tracked and that an ongoing continuous improvement program gets the attention it needs. Additionally, governance can ensure that reasonable risk exposure levels are maintained while the transformation is ongoing.

IT infrastructure and systems.  For each of the IT components – applications, networks, servers, and storage – IT executives should be able to monitor availability, utilization levels, and virtualization levels as well as automation levels. The greater these levels, the fewer human resources required to support the operations, and the more staffing becomes an independent variable rather than one dependent upon the numbers and types of hardware and software used. Companies also frequently fail to match workload types to the infrastructure most optimized for those workloads, resulting in overspend that can reach 15-30 percent of operating costs for those systems.

Processes.  The major processes that IT management should be tracking are application instances (especially CRM and ERP), capacity management, provisioning (and decommissioning) rates, storage tiers, and service levels. The better a company is at capacity planning (and use of clouds), the lower the cost of operations. The faster the provisioning capability, the fewer human resources required to support operational changes and the lower the likelihood of downtime due to human error. Additionally, RFG finds that the more storage tiers and the more automated the movement of data among tiers, the greater the savings. As a rule of thumb, organizations should find savings of about 50 percent as data moves from tier n to tier n+1. In addition to tiering, compression and deduplication are other approaches to storage optimization.
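
The rule of thumb compounds across tiers. Below is a minimal sketch of what moving colder data down the tiers can do to blended storage cost; the capacity, the tier mixes, and the tier 0 unit cost are assumed, illustrative figures, with each lower tier priced at half the one above it per the 50 percent rule.

```python
# Illustrative effect of the 50-percent-per-tier rule of thumb on blended storage cost.
# The 1,000 TB capacity, the tier mixes, and the $1,000/TB tier 0 cost are assumed figures.

tier0_cost_per_tb = 1_000.0
tier_cost = [tier0_cost_per_tb * 0.5 ** t for t in range(4)]  # tiers 0..3

def blended_cost(total_tb, mix):
    """mix = fraction of capacity on each tier (must sum to 1.0)."""
    return sum(total_tb * share * cost for share, cost in zip(mix, tier_cost))

total_tb = 1_000.0
all_on_tier1 = (0.0, 1.0, 0.0, 0.0)      # everything parked on tier 1
tiered_mix = (0.05, 0.25, 0.40, 0.30)    # hot data up top, cold data pushed down

before = blended_cost(total_tb, all_on_tier1)
after = blended_cost(total_tb, tiered_mix)
print(f"before: ${before:,.0f}  after: ${after:,.0f}  "
      f"savings: {1 - after / before:.0%}")
```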

Staffing.  For most companies today, staffing levels are directly proportional to the number of servers, storage devices, network nodes, etc. The shift to virtualization and automatic orchestration of activities breaks that bond. RFG finds it is now possible for hundreds of servers to be supported by a single administrator and tens to hundreds of terabytes to be handled by a single database administrator. IT executives should also be looking to cross-pollinate staff so that an administrator can support any of the hardware and operating systems.

The above possibilities are what exist today. Technology is constantly improving. The gains will be even greater as time goes on, especially since the technical improvements are more exponential than linear. IT executives should be able to plug these concepts into development of a data center optimization plan and then monitor results on an ongoing basis.

RFG POV: There still remains tremendous waste in the way IT operations are run today. IT executives should be able to reduce costs by more than 40 percent, enabling them to invest more in enhancing current applications and innovation than in keeping the lights on. Moreover, IT executives should be able to cut annual costs by 10 percent per year and potentially keep 40 percent of the savings to invest in self-funding new solutions that can further improve operations. 

Blog: Green Data Centers an Oxymoron

Nov 30, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

The New York Times published "Power, Pollution and the Internet," an article on the dark side of data centers. The report, which was the result of a yearlong investigation, highlights the facts related to the environmental waste and inefficiencies that can be found in the vast majority of data centers around the world. RFG does not contest the facts as presented in the article, but the Times failed to fully recognize all the causes that led to today's environment and the use of poor processes and practices. Therefore, the problem can only be partially fixed – cloud computing notwithstanding – until there is a true transformation in culture and mindset.

New York Times Article

The New York Times enumerated the following energy-related facts about data centers:

  • Most data centers, by design, consume vast amounts of energy
  • Online companies run their facilities 24x7 at maximum capacity regardless of demand
  • Data centers waste 90 percent or more of the electricity they consume
  • Worldwide digital warehouses use about 30 billion watts of energy; U.S. accounts for 25 to 33 percent of the load
  • McKinsey & Company found servers use only six to 12 percent of their power consumption on real work, on average; the rest of the time the servers are idle or in standby mode
  • International Data Corp. (IDC) estimates there are now more than three million data centers of varying sizes worldwide
  • U.S. data centers used about 76 billion kWh in 2010, or roughly two percent of all electricity used in the country that year, according to a study by Jonathan G. Koomey.
  • A study by Viridity Software Inc. found in one case that of 333 servers monitored, more than half were "comatose" – i.e., plugged in, using energy, but doing little if any work. Overall, the company found nearly 75 percent of all servers sampled had a utilization of less than 10 percent.
  • IT's low utilization "original sin" was the result of relying on software operating systems that crashed too much. Therefore, each system seldom ran more than one application and was always left on.
  • McKinsey's 2012 study currently finds servers run at six to 12 percent utilization, only slightly better than the 2008 results. Gartner Group also finds the typical utilization rates to be in the seven to 12 percent range.
  • In a typical data center, when all power losses are included – infrastructure and IT systems – and combined with the low utilization rates, the energy wasted can be as much as 30 times the amount of electricity used for data processing (a short arithmetic sketch after this list shows how such a multiple can arise).
  • In contrast, the National Energy Research Scientific Computing Center (NERSC), which uses server clusters and mainframes at the Lawrence Berkeley National Laboratory (LBNL), ran at 96.4 percent utilization in July.
  • Data centers must have spare capacity and backup so that they can handle traffic surges and provide high levels of availability. IT staff get bonuses for 99.999 percent availability, not for savings on the electric bill, according to an official at the Electric Power Research Institute.
  • In the Virginia area data centers now consume 500 million watts of electricity and projections are that this will grow to one billion over the next five years.
  • Some believe the use of clouds and virtualization may be a solution to this problem; however, other experts disagree.
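
For readers wondering where a multiple like 30 comes from, one plausible back-of-the-envelope derivation combines a typical PUE with the utilization figures cited above. This is an interpretive sketch, not the Times' own calculation; the 1.8 PUE and 6 percent utilization are taken from the figures quoted in the article.

```python
# One way a ~30x waste multiple can arise: combine a typical PUE with
# the low server utilization figures cited above. Interpretive sketch only.

pue = 1.8           # total facility energy / IT equipment energy (typical figure above)
utilization = 0.06  # share of server power spent on real work (low end of 6-12 percent)

# Total facility energy per unit of energy spent on useful data processing:
total_per_useful_unit = pue / utilization
print(f"total energy is ~{total_per_useful_unit:.0f}x the energy spent on real work")
# total energy is ~30x the energy spent on real work
```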

Facts, Trends and Missed Opportunities

There are two headliners in the article that are buried deep within the text. The "original sin" was not, as stated, the reliance on buggy software. The issue is much deeper than that, and it was a critical inflection point. And to prove the point, the author states that NERSC obtained utilization rates of 96.4 percent in July with mainframes and server clusters. Hence, the real story is that mainframes are a more energy-efficient solution and that the default option of putting workloads on distributed servers is not a best practice from a sustainability perspective.

In the 1990s the client server providers and their supporters convinced business and IT executives that the mainframe was dead and that the better solution was the client server generation of distributed processing. The theory was that hardware is cheap but people costs are expensive and therefore, the development productivity gains outweighed the operational flaws within the distributed environment. The mantra was unrelenting over the decade of the 90s and the myth took hold. Over time the story evolved to include the current x86-architected server environment and its operating systems. But now it is turning out that the theory – never verified factually – is falling apart and the quick reference to the 96.4 percent utilization achieved by using mainframes and clusters exposes the myth.

Let's take the key NY Times talking points individually.

  • Data centers do and will consume vast amounts of energy but the curve is bending downward
  • Companies are beginning to learn not to run their facilities at maximum capacity regardless of demand. This change is relatively new and there is a long way to go.
  • Newer technologies – hardware, software and cloud – will enable data centers to reduce waste to less than 20 percent. The average data center today spends more than half of its power consumption on non-IT infrastructure. This can be reduced drastically. Moreover, as NERSC shows, it is possible to drive utilization to greater than 90 percent.
  • The multiple data points that found the average server utilization to be in the six to 12 percent range demonstrated the poor utilization enterprises are getting from Unix and Intel servers. Where virtualization has been employed, the utilization rates are up but they still remain less than 30 percent on average. On the other hand, mainframes tend to operate at the 80 to 100 percent utilization level. Moreover, mainframes allow for shared data whereas distributed systems utilize a shared-nothing data model. This means more copies of data on more storage devices which means more energy consumption and inefficient processes.
  • Comatose servers are a distributed processing phenomenon, mostly with Intel servers. Asset management of the huge server farms created by the use of low-cost, single application, scale-out hardware is problematic. The complexity caused by the need for orchestration of the farms has hindered management from effectively managing the data center complex. New tools are constantly coming on board but change is occurring faster than the tools can be applied. As long as massive single-application server farms exist, the problem will remain.
  • Power losses can be reduced from as much as 30 times the amount of electricity used for data processing to less than 1.5 times.
  • The NERSC utilization achievement would not be possible without mainframes.
  • Over the next five years enterprises will learn how to reduce the spare capacity and backup capabilities of their data centers and rely upon cloud services to handle traffic surges and some of their backup/disaster recovery needs.
  • Most data center staffs are not measured on power usage as most shops do not allocate those costs to the IT budget. Energy consumption is usually charged to facilities departments.
  • If many of the above steps occur, plus the use of other processes such as the lease-refresh-scale-up delivery model (vs. the buy-hold-scale-out model) and the standardized operations platform model (vs. the development selected platform model), then the energy growth curve will be greatly abated, and data centers could potentially end up using less power over time. The tradeoffs between these models are summarized below.

Model philosophies compared (source: RFG):

  • Operations standard platforms (cloud) with buy-hold-scale-out: greater standardization and reduced platform sprawl, but more underutilized systems
  • Operations standard platforms (cloud) with lease-refresh-scale-up: least cost
  • Development selected platforms with buy-hold-scale-out: most expensive
  • Development selected platforms with lease-refresh-scale-up: greater technical currency, but with platform islands and sprawl

  •  Clouds and virtualization will be one solution to the problem but more is needed, as discussed above.

RFG POV: The mainframe myths have persisted too long and have led to greater complexity, higher data center costs, inefficiencies, and sub-optimization. RFG studies have found that had enterprises kept their data on the mainframe while applications were shifted to other platforms, companies would be far better off than they are today. Savings of up to 50 percent are possible. With future environments evolving to processing and storage nodes connected over multiple networks, it is logical to use zEnterprise solutions to simplify the data environment. IT executives should consider mainframe-architected solutions as one of their targeted environments as well as an approach to private clouds. Moreover, IT executives should discuss the shift to a lease-refresh-scale-up approach with their financial peers to see if and how it might work in their shops.

Progress – Slow Going

Aug 13, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

According to Uptime Institute's recently released 2012 Data Center Industry Survey, enterprises are lukewarm about sustainability whereas a report released by MeriTalk finds federal executives see IT as a cost and not as part of the solution. In other news, the latest IQNavigator Inc. temporary worker index shows temporary labor rates are slowly rising in the U.S.

Focal Points:

  • According to Uptime Institute's recently released 2012 Data Center Industry Survey, more than half of the enterprise respondents stated energy savings were important but few have financial incentives in place to drive change. Only 20 percent of the organizations' IT departments pay the data center power bill; corporate real estate or facilities is the primary payee. In Asia it is worse: only 10 percent of IT departments pay for power. When it comes to an interest in pursuing a green certification for current or future data centers, slightly less than 50 percent were interested. 29 percent of organizations do not measure power usage effectiveness (PUE); for environments with 500 servers or less, nearly half do not measure PUE. Of those that do, more precise measurement methods are being employed this year over last. The average global, self-reported PUE from the survey was between 1.8 and 1.89. Nine percent of the respondents reported a PUE of 2.5 or greater while 10 percent claimed a PUE of 1.39 or less. Precision cooling strategies are improving but there remains a long way to go. Almost one-third of respondents monitor temperatures at the room level while only 16 percent check it at the most relevant location: the server inlet. Only one-third of respondents cited their firms have adopted tools to identify underutilized servers and devices.
  • A survey of 279 non-IT federal executives by MeriTalk, an online community and resource for government IT, finds more than half of the respondents said their top priorities include streamlining business processes. Nearly 40 percent of the executives cited cutting waste as their most important mission, and 32 percent said increasing accountability placed first on their to-do list. Moreover, less than half of the executives think of IT as an opportunity versus a cost while 56 percent stated IT helps support their daily operations. Even worse, less than 25 percent of the executives feel IT lends them a hand in providing analytics to support business decisions, saving money and increasing efficiency, or improving constituent processes or services. On the other hand, 95 percent of federal executives agree their agency could see substantial savings with IT modernization.
  • IQNavigator, a contingent workforce software and managed service provider, released its second quarter 2012 temporary worker rate change index for the U.S. Overall, the national rate trend for 2012 has been slowly rising and now sits five percentage points above the January 2008 baseline. However, the detail breakdown shows no growth in the professional-management job sector but movement from negative to 1.2 percent positive in the technical-IT sector. Since the rate of increase over the past six months remains less than the inflation rate over the same period, the company feels it is unclear whether or not the trend implies upward pressure on labor rates. The firm also points out that the U.S. Bureau of Labor Statistics (BLS) underscores the importance of temporary labor, as new hires increasingly are being made through temporary employment agencies. In fact, although temporary agency employees constitute less than two percent of the total U.S. non-farm labor force, 15 percent of all new jobs created in the U.S. in 2012 have been through temp agency placements.

RFG POV: Company executives may vocalize their support for sustainability but most have not established financial incentives designed to drive a transformation of their data centers to be best of breed "green IT" shops. Executives still fail to recognize that being green is not just good for the environment but it mobilizes the company to optimize resources and pursue best practices. Businesses continue to waste up to 40 percent of their IT budgets because they fail to connect the dots. Furthermore, the MeriTalk federal study reveals how far behind the private sector the U.S. federal government is. While businesses are utilizing IT as a differentiator to attain their goals, drive revenues and cut costs, the government perceives IT only as a cost center. Federal executives should modify their business processes, align and link their development projects to their operations, and fund their operations holistically. This will eliminate the sub-optimization and propel the transformation of U.S. government IT more rapidly. With the global and U.S. economies remaining weak over the mid- to long-term, the use of contingent workforce will expand. Enterprises do not like to make long-term investments in personnel when the business and regulatory climate is not friendly to growth. Hence, contingent workforce – domestic or overseas – will pick up the slack. IT executives should utilize a balanced approach with a broad range of workforce strategies to achieve agility and flexibility while ensuring business continuity, corporate knowledge, and management and technical control are properly addressed. 

Gray Clouds on the Horizon

Aug 3, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

 

According to two recent studies global IT spending is slowing while cloud adoption (excluding service providers) is occurring at a slower rate than projected. Elsewhere, according to a report released by outplacement firm Challenger, Gray & Christmas, layoffs in the technology sector for the first half of 2012 are at the highest levels seen in three years. Lastly, an Oracle Corp. big data survey finds companies are collecting more data than ever before but may be losing on average 14 percent of incremental revenue per year by not fully leveraging the information.

Focal Points:

  • According to a new Gartner Inc. report, global IT spending percentage growth for 2012 is projected to be 3.0 percent, down from 2011 spending growth of 7.9 percent. The brightest spot in the analysis was that the telecom equipment category will grow by 10.8 percent – however that is down from 17.5 percent in the previous year. All the other categories – computer hardware, enterprise software, IT services, and telecom services – are growing slowly between 1.4 percent (telecom services) and 4.3 percent (enterprise software). The drop in spending is attributed to the global economic stresses – the eurozone crisis, weaker U.S. recovery, a slowdown in China, etc. For 2013 Gartner is projecting higher spending on hardware and software in the data center and on the desktop, better growth on telecom hardware (but down from 2012), and slightly higher spending on telecom services. In support of these projections is the latest Challenger, Gray report that shows during the first half of the year, 51,529 planned job cuts were announced across the tech sector. This represents a 260 percent increase over the 14,308 layoffs planned during the first half of 2011. Job cuts are so steep this year that the figure is 39 percent higher than all the job cuts recorded in the tech sector last year. Three tech companies are responsible for most of the job losses – Hewlett-Packard Co. (HP) announced it was slicing headcount by 30,000 and Nokia Corp. and Sony Corp. are each reducing staffing by 10,000. While the outplacement firm expected more cuts to be made over the course of the next six months, it does see bright spots in sectors of the business.
  • According to Uptime Institute's recently released 2012 Data Center Industry Survey, cloud deployments have significantly increased globally over the past year. 25 percent of this year's respondents claimed they were adopting public clouds while another 30 percent said they were considering it. Additionally, 49 percent were moving to private clouds while another 37 percent were considering it. In 2011 only 16 percent of respondents stated they had deployed public clouds whereas 35 percent claimed they had deployed private clouds. 32 percent of large organizations use the public cloud, whereas 19 percent of small organizations and 10 percent of "traditional enterprises" employ public clouds. When it comes to private clouds, 65 percent of large organizations have claimed to have deployed private cloud but only 39 percent of small and mid-sized organizations were doing so. Public cloud adoption rates are 52 percent in Asia, 28 percent in Europe, and 22 percent in North America. Private cloud adoption rates are 42 percent in Asia, 52 percent in Europe, and 50 percent in the U.S. Cost savings and scalability were the top two reasons given for moving to the cloud while security was the major inhibitor for not adopting cloud computing (27 and 23 percent respectively), followed distantly by compliance and regulatory issues (64 and 27 percent respectively).
  • Oracle announced the results of its big data study, in which 333 C-level executives from U.S. and Canadian enterprises were surveyed. The study examined the pain points that companies face regarding managing the deluge of data that organizations must deal with and how well they are using that information to drive profit and growth. 94 percent of respondents reported data growth, with the biggest growth areas being customer information (48 percent), operations (34 percent), and sales and marketing (33 percent). 29 percent of executives give their organization a "D" or "F" in preparedness to manage the data influx, while 93 percent of respondents believe their organization is losing revenue opportunities. The projected revenue loss for companies with revenues in excess of $1 billion is estimated to be approximately 13 percent of annual revenue from not fully leveraging the information. Most respondents are frustrated with their organizations' data gathering and distribution systems and almost all are looking to invest in improving information optimization. The communications industry is the most satisfied with its ability to deal with data – 20 percent gave their firms an "A." Executives in the public sector, healthcare and utilities industries stated they were the least prepared to handle the data volumes and velocities, with 41 percent of public sector executives, 40 percent of healthcare executives, and 39 percent of utilities executives rating themselves with either a "D" or an "F."

 

RFG POV: The global economy appears to be weak, with parts of Europe in or close to recession, Asia slowing rapidly, and the U.S. in weak positive territory. Economists see more storm clouds on the horizon – few see things improving in 2012. This will trickle down to IT budgets, with many companies requesting deferrals of capital spending and/or headcount growth. IT executives need to continue their push to slash operational expenditures through better resource optimization and improvements in best practices. RFG still finds that most IT executives pursue practices that are no longer valid, which results in up to 40 percent of operational expenditures being wasted. Cloud computing can assist enterprises in their quest to reduce costs, but there are tradeoffs and they need to be understood before leaping into a cloud environment. Most corporate data is no longer an island and needs to be integrated with applications and systems that already exist. Thus, before moving to an off-premise cloud environment, IT executives should ensure that the cloud environment and the data are well integrated into existing systems and that the risk exposure is acceptable. There is no doubt that big data is coming, and the volumes and velocity of change will only get worse as time marches on. The systems required to handle the increased influx of data may not look like those that exist in the data center today. It is conceivable that big data and its incorporation into day-to-day operations could require an entirely new data center architecture. Business and IT executives should strategize on how to deliver on their goals and vision, and find a way to work together to transform their shops to address the new ways of conducting business and processing data while staying within budgetary constraints.
