
Service Delivery to Business Enablement: Data Center Edition

Apr 9, 2013   //   by admin   //   Blog

Lead Analyst: Adam Braunstein

I have never been a fan of alarmist claims. Never have I witnessed the sky falling or the oceans abruptly swallowing masses of land. Nonetheless, we have all seen the air become unsafe to breathe in many parts of the world, and rising water levels are certainly cause for concern. Two lessons follow: sweeping changes rarely announce themselves overnight and often require a distanced perspective to recognize, and being paranoid does not mean one is wrong.

Such is the case with the shifts occurring in the data center. Business needs and disruptive technologies are more complex, frequent, and enduring despite their seemingly iterative nature. The gap between the deceptively calm exterior and the true nature of internal data center change threatens to leave IT executives unable to adapt to the seismic shifts taking place beneath the surface. Decisions that address long-term needs are typically based on short-term metrics that mask both the underlying movements and the enterprise's need to deal with them strategically. Failing to look at these issues as a whole will have a negative cascading effect on future enterprise readiness and is akin to France's Maginot Line of defense against Germany in World War II: while the fortifications prevented a direct attack, the tactic ignored other strategic threats, including an attack through Belgium.

Three-Legged Stool: Business, Technology, and Operations

The line between business and technology has blurred to the point that there is very little difference between the two. The old approach of using technology as a business enabler is no longer valid, as IT no longer simply delivers the required business services. Business needs are now so dependent on technology that planning and execution must share the same game plan, analytic tools, and measurements. Changes in one directly impact the other, and continuous updates to strategic goals and tactical execution must be carefully weighed as the two move forward together. Business enablement is the new name of the game.

With business and technology successes and failures so closely fused, it should be abundantly clear why shared goals and execution strategies are required. The new goalposts for efficient, flexible operations are defined in terms of software-defined data centers (SDDCs). Where disruptive technologies such as automation, consolidation, orchestration, and virtualization were previously the desired end state, SDDCs up the ante by providing logical views of platforms and infrastructure so that services can be spun up, scaled down, and reconfigured dynamically, free of physical constraints. While technology comprises the underpinnings here, the enablement of dynamic and changing business goals is the required outcome.

Operations practices, employee roles, and skills will thus need to adapt rapidly. Metrics like data density, workload types, and utilization will remain baseline indicators, but only as a means to more important measurements of agility, readiness, productivity, opportunity, and revenue capture. Old technologies will need to be replaced to empower the necessary change, and the new technologies will need to be turned over at more rapid rates to keep pace with heightened business demands and limited budgets. Budgeting and financial models will need to follow suit as well.

The Aligned Business/IT Model of the Future: Asking the Right Questions

The fused business/IT future will need to be built around a holistic, evolving set of metrics that incorporates changing business dynamics, technology trends, and performance requirements. Hardware, software, storage, supporting infrastructure, processes, and people must all be evaluated to deliver the required views within and across data centers and into clouds. Moreover, IT executives should incorporate best-of-breed information from enterprise data centers in both similar and competing industries.

The set of delivered dashboards should provide a macro view of data center operations with both business and IT outlooks and trending. The analysis should:

  • Benchmark current data center performance against comparative data;
  • Demonstrate opportunities for productivity and cost-cutting improvements;
  • Provide insight into the best and most cost-effective ways to make the data center less complex, more scalable, and able to meet future business and technology opportunities;
  • Offer facilities to compare different scenarios as customers determine which opportunities best meet their needs (see the sketch below).
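
To make the scenario-comparison idea concrete, here is a minimal Python sketch. The Scenario fields and all figures are invented for illustration and do not describe any particular product:

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        # Illustrative data center scenario for side-by-side comparison.
        name: str
        annual_opex_usd: float     # total yearly operating cost
        pue: float                 # power usage effectiveness
        provisioning_days: float   # average time to stand up a new service
        utilization_pct: float     # average infrastructure utilization

    def compare(baseline, candidate):
        # Print the relative change of each metric versus the baseline.
        for field in ("annual_opex_usd", "pue",
                      "provisioning_days", "utilization_pct"):
            base, cand = getattr(baseline, field), getattr(candidate, field)
            print(f"{field:20s} {base:>12.2f} -> {cand:>12.2f} "
                  f"({100 * (cand - base) / base:+.1f}%)")

    current = Scenario("current", annual_opex_usd=4_000_000, pue=2.0,
                       provisioning_days=30, utilization_pct=35)
    sddc = Scenario("SDDC path", annual_opex_usd=3_400_000, pue=1.5,
                    provisioning_days=2, utilization_pct=60)
    compare(current, sddc)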

Even though the reality of SDDCs is years away, IT executives must begin the journey now. A number of intermediary milestones must be achieved first, and delays in reaching them will negatively impact the business. Data center analytical tools as described above will be needed to help chart the course and monitor progress. (The GreenWay Collaborative develops and provides tools of this nature. RFG initiated and still contributes to this effort.)

RFG POV: IT executives require a three-to-five-year outlook that balances technology trends, operational best practices, and business goals. Immediate and long-range needs must be plotted, tracked, and continuously measured so that neither is addressed at the expense of the other. While many of these truths are evergreen, it is essential to recognize that the majority of enterprise tools and practices inadequately capture and harmonize the contributing factors. Most enterprise dashboard views evaluate data center performance at a tactical, operational level and identify opportunities for immediate performance improvements. Strategic enterprise dashboard tools tend to build on the data gathered at the tactical level yet fail to incorporate evolving strategic business and technology needs. IT executives should add strategic data center optimization planning tools that address those evolving needs to the mix, so that IT can provide the optimum set of services to the business at each milestone.

Blog: Data Center Optimization Planning

Dec 13, 2012   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

Every organization should be performing a data center optimization planning effort at least annually. The rate of technology change and the exploding requirements for capacity demand that IT shops challenge their assumptions yearly and revisit best practices to see how they can further optimize their operations. Keeping up with storage capacity requirements on flat budgets is a challenge when capacity is growing 20 to 40 percent annually, a phenomenon occurring across the IT landscape. Thus, if IT executives want to transform their operations from spending 70 to 80 percent of their budgets on operations to spending more than half on development and innovation instead, they must invest in planning that enables such change.
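
A quick derivation shows why flat budgets force this discipline: if capacity grows at a rate g each year while spend stays flat, the cost per terabyte must fall by g/(1+g) annually just to stay even. A minimal Python sketch across the cited growth range:

    # Why flat budgets force yearly optimization: if capacity grows g per
    # year while spend stays flat, cost per TB must fall g/(1+g) annually.
    for growth in (0.20, 0.30, 0.40):      # the 20-40% range cited above
        required_drop = growth / (1 + growth)
        print(f"{growth:.0%} capacity growth -> unit cost must fall "
              f"{required_drop:.1%} per year to hold spend flat")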

Optimization planning needs to cover all areas of the data center:

  • facilities,
  • finance,
  • governance,
  • IT infrastructure and systems,
  • processes, and
  • staffing.

RFG finds most companies are greatly overspending due to the inefficiencies of continuing along non-optimized paths in each of these areas, which gives them the opportunity to reduce operational expenses by more than 10 percent per year for the next decade. In fact, in some areas more than 20 percent could be shaved off.

Facilities.  At a high level, the three areas that IT executives should understand, evaluate, and monitor are facilities design and engineering, power usage effectiveness (PUE), and temperature. Most data center facilities were designed to handle the equipment of the previous century. Times and technologies have changed significantly since then, and the design and engineering assumptions, as well as the actual implementations, need to be reevaluated. In a similar vein, the PUE for most data centers is far from optimized, which can mean overpaying energy bills by more than 40 percent. On the "easy to fix" front, companies can raise their data center temperatures to normal room temperature or higher, with temperatures in the 80° F range being possible. Just about all equipment built today is designed to operate at temperatures greater than 100° F. For every degree raised, organizations can expect to see power costs reduced by up to four percent. Additionally, facilities and IT executives can monitor their greenhouse gas (GHG) emissions, which are frequently tracked by chief sustainability officers and can be used as a measure of savings achieved by IT operational efficiency gains.
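
To put rough numbers on the PUE and temperature levers, consider this back-of-envelope sketch. The one-megawatt IT load, electricity price, and before/after PUE values are assumptions for the example; the four-percent-per-degree rule of thumb comes from the text above:

    # Hypothetical facility: 1 MW of IT load at $0.10/kWh. PUE values are
    # assumed; the 4%-per-degree rule comes from the text.
    HOURS_PER_YEAR = 8760
    IT_LOAD_KW = 1000
    PRICE_PER_KWH = 0.10

    def annual_energy_cost(pue):
        # Total facility energy bill implied by a given PUE.
        return IT_LOAD_KW * pue * HOURS_PER_YEAR * PRICE_PER_KWH

    before, after = annual_energy_cost(2.0), annual_energy_cost(1.4)
    print(f"PUE 2.0 -> 1.4 saves ${before - after:,.0f}/yr "
          f"({(before - after) / before:.0%})")

    # Raising the setpoint 8 degrees F at up to 4% per degree, compounded:
    remaining = (1 - 0.04) ** 8
    print(f"An 8 F setpoint increase could cut power costs "
          f"up to {1 - remaining:.0%} more")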

Finance.  IT costs can be reduced through four key levers: asset management, chargebacks, life cycle management, and procurement. RFG finds many companies do not handle asset management well, with the result that they pay annually for hardware and software they no longer need. Studies have found this excess cost can easily run to 20 percent of all expenses for end-user devices. The use of chargebacks better ensures IT costs are aligned with user requirements, which especially comes into play when funding external and internal support services. When it comes to life cycle management, RFG finds too many companies retain hardware too long: the optimal life span for servers and storage is 36 to 40 months, and companies that keep equipment longer can drive up their overall costs by more than 20 percent. Moreover, the one area where IT consistently fails to understand its leverage and underperforms is procurement. When proper procurement processes and procedures are not followed and standardized, IT can easily spend 50 percent more on hardware, software, and services.
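
The life cycle claim can be sanity-checked with a toy model. In the sketch below the server price, maintenance rates, power cost, and the assumed 20 percent yearly performance gain per hardware generation are all illustrative, not RFG benchmarks; under these numbers the 60-month cycle comes out several percent more expensive than the 36-month cycle, and steeper out-of-warranty maintenance or energy penalties for aging gear would push the gap toward the 20 percent cited above:

    # 15-year total cost of a 36-month vs. a 60-month refresh cycle for a
    # fixed workload. All figures are illustrative assumptions.
    HORIZON = 15
    PRICE = 8000                        # purchase price per server, USD
    MAINT = {True: 800, False: 2500}    # annual maintenance: in/out of warranty
    POWER = 1200                        # annual power + cooling per server, USD
    PERF_GAIN = 0.20                    # assumed yearly performance gain per box

    def total_cost(refresh_years):
        cost, servers, fleet_age = 0.0, 100.0, 0
        for year in range(HORIZON):
            if year % refresh_years == 0:
                # Refresh: the same workload needs fewer, newer boxes.
                servers = 100.0 / (1 + PERF_GAIN) ** year
                cost += servers * PRICE
                fleet_age = 0
            cost += servers * (MAINT[fleet_age < 3] + POWER)
            fleet_age += 1
        return cost

    c36, c60 = total_cost(3), total_cost(5)
    print(f"36-month cycle: ${c36:,.0f}")
    print(f"60-month cycle: ${c60:,.0f} ({c60 / c36 - 1:+.0%})")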

Governance.  Governance is a key area of focus because it assures that performance targets are established and tracked and that an ongoing continuous improvement program gets the attention it needs. Additionally, governance can ensure that reasonable risk exposure levels are maintained while the transformation is under way.

IT infrastructure and systems.  For each of the IT components – applications, networks, servers, and storage – IT executives should be able to monitor availability, utilization levels, virtualization levels, and automation levels. The greater these levels, the fewer human resources are required to support operations, and the more staffing becomes an independent variable rather than one dependent on the numbers and types of hardware and software used. Companies also frequently fail to match workload types to the infrastructure best optimized for those workloads, resulting in overspend that can reach 15 to 30 percent of operating costs for those systems.
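
A minimal sketch of the kind of per-component scorecard this implies appears below; the inventory figures and target thresholds are invented for illustration:

    # Invented inventory: (component, utilization %, virtualized %, automated %).
    inventory = [
        ("servers", 35, 60, 40),
        ("storage", 55, 45, 30),
        ("network", 25, 20, 50),
    ]
    TARGETS = {"utilization": 60, "virtualized": 80, "automated": 70}

    for name, util, virt, auto in inventory:
        levels = {"utilization": util, "virtualized": virt, "automated": auto}
        gaps = [k for k, v in levels.items() if v < TARGETS[k]]
        print(f"{name:8s} " + ("meets targets" if not gaps
                               else "below target on: " + ", ".join(gaps)))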

Processes.  The major process areas that IT management should be monitoring are application instances (especially CRM and ERP), capacity management, provisioning (and decommissioning) rates, storage tiers, and service levels. The better a company is at capacity planning (and at use of clouds), the lower the cost of operations. The faster the provisioning capability, the fewer human resources are required to support operational changes and the lower the likelihood of downtime due to human error. Additionally, RFG finds that the more storage tiers in place, and the more automated the movement of data among them, the greater the savings. As a rule of thumb, organizations should expect roughly 50 percent savings with each move from tier n to tier n+1. In addition to tiering, compression and deduplication are other approaches to storage optimization.
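
The tiering rule of thumb is easy to work through. In the sketch below, the tier-1 cost per terabyte, total capacity, and the 20/30/50 data mix are assumptions for the example; the halving per tier comes from the rule of thumb above:

    TIER1_COST_PER_TB = 2000   # assumed yearly cost of tier-1 storage, USD

    def blended_cost(mix_by_tier, total_tb):
        # Yearly cost when each tier down costs half as much per TB
        # (tier n+1 = 0.5 x tier n, per the rule of thumb above).
        return sum(frac * total_tb * TIER1_COST_PER_TB * 0.5 ** tier
                   for tier, frac in enumerate(mix_by_tier))

    all_tier1 = blended_cost([1.0], total_tb=500)
    tiered = blended_cost([0.2, 0.3, 0.5], total_tb=500)   # 20/30/50 split
    print(f"All tier 1: ${all_tier1:,.0f}/yr")
    print(f"Tiered:     ${tiered:,.0f}/yr ({1 - tiered / all_tier1:.0%} saved)")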

Staffing.  For most companies today, staffing levels are directly proportional to the number of servers, storage arrays, network nodes, etc. The shift to virtualization and automated orchestration of activities breaks that bond. RFG finds it is now possible for hundreds of servers to be supported by a single administrator, and for tens to hundreds of terabytes to be handled by a single database administrator. IT executives should also look to cross-pollinate staff so that an administrator can support any of the hardware and operating systems in use.
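
As a simple illustration of how orchestration turns staffing into a step function of fleet size, the ratios below are assumptions consistent with the hundreds-of-servers-per-administrator figure cited above:

    import math

    SERVERS = 1200
    traditional = math.ceil(SERVERS / 40)    # assume ~40 physical boxes per admin
    virtualized = math.ceil(SERVERS / 400)   # assume ~400 servers per admin
    print(f"Traditional model:  {traditional} administrators")
    print(f"Virtualized model:  {virtualized} administrators")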

These possibilities exist today, and technology is constantly improving; the gains will only grow over time, especially since the technical improvements are more exponential than linear. IT executives should be able to plug these concepts into the development of a data center optimization plan and then monitor results on an ongoing basis.

RFG POV: Tremendous waste remains in the way IT operations are run today. IT executives should be able to reduce costs by more than 40 percent, enabling them to invest more in enhancing current applications and innovation than in keeping the lights on. Moreover, IT executives should be able to cut annual costs by 10 percent per year and potentially keep 40 percent of the savings to invest in self-funding new solutions that can further improve operations.
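
Compounding that arithmetic makes the opportunity tangible. In the sketch below the starting budget is an assumption for the example; the 10 percent annual cut and the 40 percent reinvestment share come from the POV above:

    budget = 10_000_000.0   # assumed year-0 IT operations budget, USD
    pool = 0.0              # cumulative self-funding investment pool

    for year in range(1, 11):
        savings = budget * 0.10      # cut costs 10% per year
        budget -= savings
        pool += savings * 0.40       # keep 40% of savings for reinvestment
        print(f"Year {year:2d}: opex ${budget:>12,.0f}, "
              f"self-funding pool ${pool:>12,.0f}")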