
Wither Dell?

Feb 12, 2013   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

CEO Michael Dell is coordinating a buyout of Dell Inc. for $24.4 billion in the hopes that the company can go through its transformation more effectively if it does not have to report quarterly results to fickle investors. Michael Dell's MSD Capital (his investment firm) has teamed with Silver Lake Partners to take the company private. Microsoft Corp. will assist in the buyout with a $2 billion loan. If the buyout is successful – which it should be at some price – what does it portend for IT executives and commercial accounts?

To understand where Dell needs to go, one needs to first see where it is. Dell started as a low-cost PC company in the consumer market. It gradually switched to a bifurcated model – PCs for consumers, and PCs and servers for the commercial space, primarily the public sector, small and medium business (SMB), and large enterprise markets. Over the past six years the company acquired 22 companies – 10 in 2012 alone – and expanded into other hardware components, software and services, including cloud services. But the company has lost its momentum. It lost PC market share and sales in 2012 faster than most of its competitors, which is disastrous for a company that derives more than half of its revenues from end-user computing solutions.

Smartphones and tablets have curtailed the growth of the traditional PC market and Dell's commercial business has not made up for the loss in end-user revenues. In fact, in both businesses Dell is considered a low-cost commodity hardware provider and not a market or thought leader. The company has not fully integrated all of its acquisitions and is struggling to reach its strategic goal of becoming a one-stop shop. The buyout gives the company time to re-think and execute a long-term strategy, reorganize and change its culture. As CEO Meg Whitman at Hewlett-Packard Co. (HP) can attest, a turnaround is a multi-year effort and doing so in public when quarterly results can be volatile is not fun. Thus, the desire by Michael Dell to go private.

While there are a number of challenges that Dell must address, there are two that will make or break the success of the new corporate strategy. First, the vendor must either exit the end-user computing market or once again become a market leader. It is lacking products in the key current and future end-user markets and it cannot regain its position with just PC solutions to hawk. Second, Dell has not been able to transition from a culture of transaction selling to one of relationship sales. If the vendor is to become one of the top one-stop providers in the commercial space, it will have to invest in customer relationship management. This is a massive cultural change that goes to the core of the company. HP has struggled with the clash of this cultural divide since it acquired Compaq in 2002. IBM Corp. took more than 10 years to change its culture. The underlying question will be whether or not CEO Dell, by trade a transactional salesman, can lead the culture shift needed to succeed with the new corporate vision.

In addition to the above challenges, there are a number of other key issues to be resolved. IT executive relationships with Dell depend on how these shake out.

Assets.  Dell will need to decide which of the assets it has today are worth keeping and which are to be shed. In strong customer relationship management organizations, people are a primary asset. Will Dell address this? Additionally, once it has its strategic vision in place, what additional acquisitions are needed to complete the puzzle? Will the new Dell have the funds to acquire the companies it needs, or will the buyout end up choking the firm's ability to compete effectively? Dell recently moved into the equipment leasing space. Will it have the wherewithal to remain?

Business Model. What will Dell's new business model be? It will have to compete with HP, IBM and Oracle Corp. – all of whom are innovators, bring more than commodity products and services to the table, and want to own the complete business relationship with their customers. Each has a different business model. Where will the new Dell position itself?

Business Partners and Channels. Dell will have to re-evaluate how it works with business partners and uses various sales and distribution channels. Dell does have a cloud presence but can it leverage it the way Apple Inc. or Google Inc. do? Can it be a full service provider and still utilize business partners and channels effectively? Without strong business partners and channels Dell will not be able to compete effectively.

Microsoft. Microsoft did not become an owner but a lender to Dell. This will cost the company more than just money. Will it restrict the vendor from providing certain products or solutions?

Processes. Dell needs to revamp its development, operations, and sales processes to be fully integrated and customer relationship based. The customer must come first; not the products or services. This will be a long-term change, which may be agonizing at times.

Technology. Today Dell assembles some products and has the intellectual property (IP) for those products and services that the company acquired. Can it leverage the IP and become recognized as an innovator or will the IP assets wither and the talent depart? Over the past year Dell has been bringing on board the resources to take advantage of the assets. Will the new Dell continue down the same path? If Dell stays in the end-user computing space, will it be able to figure out how to do mobility and social (key components to staying competitive)? If not, will it bite the bullet and exit the business?

The company was at one time the leader in the PC arena. Then it became one of the top players. Now it wants to be a leader in the full-service enterprise space where it is not a top player and is losing momentum.

RFG POV: Dell has a long, tough transformation ahead. By going private it will no longer have to worry about the stock market price but will still have to answer to investors. RFG does not expect the company to pull out of any markets in the near term – although the printing and peripherals business is exposed – but a number of the executives and employees whose visions are out of sync with the new direction will depart. In the full-service enterprise space Dell will have to be more than a low-cost provider. It must become a hardware, software, and services innovator, determine its positioning vis-à-vis competitors, make additional acquisitions to fill in the gaps, and spend time and resources building relationships that may not yield near-term revenues. Whether or not the stakeholders will allow the company to spend enough money and time to make the conversion is an open question. The fallback position may be to go back to being a low-cost or custom commodity provider to the commercial market. Moreover, Dell will have to invest in a new end-user computing model, watch its market share shrivel, or quit the space. One thing is for sure – it cannot be all things to all players and must make its choices carefully. Dell must articulate its strategy to business partners, customers, and employees over the next three to six months or loyalty may falter. In any event, IT executives should expect Dell to provide support and a smooth transition for businesses that are divested, restructured, or sold. IT executives desirous of using Dell as a strategic provider should continue to work closely with Dell, keep abreast of its strategy and roadmaps, and factor that knowledge into the corporate decision-making process. Additionally, IT executives should not be surprised or concerned to find that the company fails to make the short list of candidates. There are plenty of options these days.

 

HP Cloud Services, Cloud Pricing and SLAs

Jan 9, 2013   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Hewlett-Packard Co. (HP) announced that HP Cloud Compute became generally available in December 2012, while HP Cloud Block Storage entered beta at that time. HP claims its Cloud Compute has an industry-leading availability service level agreement (SLA) of 99.95 percent. Meanwhile, Amazon Inc.'s S3 and Microsoft Corp.'s Windows Azure clouds reduced their storage pricing.

Focal Points:

  • HP released word that HP Cloud Compute moved to general availability on Dec. 5, 2012 and will offer a 99.95 percent monthly SLA (a maximum of roughly 22 minutes of downtime per month). The company extended the 50 percent discount on pricing until January. The HP Compute cloud is designed to allow businesses of all sizes to move their production workloads to the cloud. There will be three separate availability zones (AZs) per region. It supports Linux and Windows operating systems and comes in six different instance sizes, with prices starting at $0.04/hour. HP currently supports Fedora, Debian, CentOS, and Ubuntu Linux distributions, but not Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES). On the Windows side, HP is live with Windows Server 2008 SP2 and R2, while Windows Server 2012 is in the works. There are sites today on the East and West coasts of the U.S., with a European facility to be operational in 2013. Interestingly, HP built its cloud using ProLiant servers running OpenStack and not CloudSystem servers. Meanwhile, HP's Cloud Block Storage moved to public beta on Dec. 5, 2012; customers will not be charged until January, at which time pricing will be discounted by 50 percent. Users can create custom storage volumes from 1 GB to 2 TB. HP claims high availability for this service as well and says each storage volume is automatically replicated within the same availability zone.
  • Amazon is dropping its S3 storage pricing by approximately 25 percent. The first TB/month goes from $0.125 per GB/month to $0.095 per GB/month, a 24 percent reduction. The price for the next 49 TB falls from $0.110 to $0.080 per GB/month, while the next 450 TB drops from $0.095 to $0.070. This brings Amazon's pricing in line with Google Inc.'s storage pricing. According to an Amazon executive, S3 stores well over a trillion objects and services 800,000 requests a second. Prices have been cut 23 times since the service was launched in 2006.
  • In reaction to Amazon's move, Microsoft has again reduced Windows Azure storage pricing by up to 28 percent to remain competitive. In March 2012 Azure lowered its storage pricing by 12 percent. Geo-redundant storage, which maintains more than 400 miles of separation between replicas, is the default storage mode.

Google GB/mo     Google Storage pricing    Amazon GB/mo     Amazon S3 pricing    Azure storage pricing (geo-redundant)    Azure storage pricing (local-redundant)
First TB         $0.095                    First TB         $0.095               $0.095                                   $0.070
Next 9 TB        $0.085                    Next 49 TB       $0.080               $0.080                                   $0.065
Next 90 TB       $0.075
Next 400 TB      $0.070
Source: The Register
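
To make the tiered GB/month pricing concrete, here is a minimal sketch that estimates a monthly bill under the Amazon S3 tiers cited above. The tier boundaries and rates come from the table; the 60 TB stored volume and the 1 TB = 1,024 GB convention are illustrative assumptions, not vendor figures.

# Illustrative only: estimate a monthly S3 storage bill from the tiered
# GB/month rates cited above (first TB $0.095, next 49 TB $0.080,
# next 450 TB $0.070). The 60 TB stored volume is a made-up example.

TIERS = [              # (tier size in GB, price per GB/month)
    (1 * 1024,   0.095),   # first TB
    (49 * 1024,  0.080),   # next 49 TB
    (450 * 1024, 0.070),   # next 450 TB
]

def monthly_storage_cost(stored_gb):
    """Walk the tiers, charging each slice of capacity at its tier rate."""
    remaining = stored_gb
    cost = 0.0
    for tier_gb, rate in TIERS:
        slice_gb = min(remaining, tier_gb)
        cost += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return cost

if __name__ == "__main__":
    stored = 60 * 1024  # 60 TB expressed in GB
    print(f"Estimated monthly bill for 60 TB: ${monthly_storage_cost(stored):,.2f}")

For 60 TB the sketch yields roughly $4,828 per month, which shows how the blended rate falls as more capacity lands in the cheaper tiers.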

RFG POV: HP's Cloud Compute offering for production systems is most notable for its 99.95 percent monthly SLA. Most cloud SLAs are hard to understand, vague, and contain a number of escape clauses for the provider. For example, Amazon's EC2 SLA guarantees 99.95 percent availability of the service within a region over a trailing 365-day period – i.e., downtime is not to exceed roughly 263 minutes (more than four hours) over the year. There is no greater granularity, which means one could encounter a four-hour outage in a single month and the vendor would still not violate the SLA. HP's appears to be stricter; however, according to Gartner analyst Lydia Leong in a NetworkWorld article, HP's SLA only applies if customers cannot access any AZs. That means customers potentially have to architect their applications to span three or more AZs, each one imposing additional costs on the business. "Amazon's SLA gives enterprises heartburn. HP had the opportunity to do significantly better here, and hasn't. To me, it's a toss-up which SLA is worse," Leong writes. RFG spoke with HP and found its SLA is much better than portrayed in the article. The SLA, it seems, is poorly written, so Leong's interpretation is reasonable (and matches what Amazon requires). However, to obtain credit HP does not require users to run their application in multiple AZs – just one – but they must minimally try to run the application in another AZ in the region if the customer's instance becomes inaccessible. HP Cloud Compute is not a perfect match for mission-critical applications, but there are a number of business-critical applications that could take advantage of the HP service. For the record, RFG notes Oracle Corp.'s cloud hosting SLAs are much worse than either Amazon's or HP's. Oracle only offers an SLA of 99.5 percent per calendar month – the equivalent of roughly 216 minutes (3.6 hours) of outage per month, or more than 43 hours per year, NOT including planned downtime and certain other considerations. IT executives should always scrutinize a cloud provider's SLAs and ensure they are acceptable for the service for which they will be used. In RFG's opinion Oracle's SLAs are not acceptable at all and should be renegotiated, or the platform should be removed from consideration. On the cloud storage front, overall prices continue to drop 10 percent or more per year. The greater price decreases are due to the rapid growth of storage (greater than 30 percent per year) and the predominance of newer storage arrays versus older ones. IT executives should treat these prices as benchmarks and work to keep internal storage costs on a similar declining scale. This will require IT executives to retain storage arrays for four years or less and to employ tiering and thin provisioning. Those IT executives who believe keeping ancient spinning iron on the data center floor is the least-cost option will be unable to remain competitive against cloud offerings, which could impair the trust relationship with business and finance executives.
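
The downtime figures quoted above follow directly from the SLA percentages. A minimal sketch of the arithmetic (the 30-day month and 365-day year are simplifying assumptions):

# Convert an availability SLA into an allowed-downtime budget.
# Assumes a 30-day month and a 365-day year for simplicity.

def allowed_downtime_minutes(sla_percent, period_days):
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - sla_percent / 100.0)

print(f"HP Cloud Compute, 99.95% per month:    {allowed_downtime_minutes(99.95, 30):.0f} min/month")
print(f"Amazon EC2, 99.95% per trailing year:  {allowed_downtime_minutes(99.95, 365):.0f} min/year")
print(f"Oracle hosting, 99.5% per month:       {allowed_downtime_minutes(99.5, 30):.0f} min/month")

The output – about 22 minutes per month, 263 minutes per year, and 216 minutes per month respectively – is the basis for the comparisons in the paragraph above.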

Mainframe Survey – Future is Bright

Jan 9, 2013   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

According to the 2012 BMC Software Inc. survey of mainframe users, the mainframe continues to be their platform of choice due to its superior availability, security, centralized data serving and performance capabilities. It will continue to be a critical business tool that will grow, driven by the velocity, volume, and variety of applications and data.

Focal Points:

  • According to 90 percent of the 1,243 survey respondents the mainframe is considered to be a long-term solution, and 50 percent of all respondents agreed it will attract new workloads. Asia-Pacific users reported the strongest outlook, as 57 percent expect to rely on the mainframe for new workloads. The top three IT priorities for respondents were keeping IT costs down, disaster recovery, and application modernization. The top priority, keeping costs down, was identified by 69 percent of those surveyed, up from 60 percent in 2011. Disaster recovery was unchanged at 34 percent while application modernization was selected by 30 percent, virtually unchanged as well. Although availability is considered a top benefit of the mainframe, 39 percent of respondents reported an unplanned outage; however, only 10 percent of organizations stated they experienced any impact from an outage. The primary causes of outages were hardware failures (31 percent), system software failure (30 percent), in-house application failure (28 percent), and change process failure (22 percent).
  • 59 percent of respondents expect MIPS capacity to grow as they modernize and add applications to address business needs. The top four factors for continued investment in the mainframe were platform availability advantage (74 percent), security strengths (70 percent), superior centralized data server (68 percent), and transaction throughput requirements best suited to a mainframe (65 percent). Only 29 percent felt that the costs of migration were too high or that alternative solutions did not have a reasonable return on investment (ROI), up from 26 percent in the previous two years.
  • There remains a continued concern about the shortage of skilled mainframe staff. Only about a third of respondents were very concerned about the skills issue, although at least 75 percent of those surveyed expressed some level of concern. The top methods being used to address the skills shortage are internal training (53 percent), hiring experienced staff (40 percent), outsourcing (37 percent) and automation (29 percent). Additionally, more than half of the respondents stated the mainframe must be incorporated into the enterprise management processes. Enterprises are recognizing the growing complexity of the hybrid data center and the need for simple, cross-platform solutions.

RFG POV: Some things never change – mainframes still are predominant in certain sectors and will continue to be so over the visible horizon, and yet the staffing challenges linger. Twenty years after mainframes were declared dinosaurs they remain valuable platforms and are still growing. In fact, mainframes can be the best choice for certain applications and data serving, as they effectively and efficiently deal with the variety, velocity, veracity, volume, and vulnerability of applications and data while reducing complexity and cost. RFG's latest study on System z as the lowest-cost database server (http://lnkd.in/ajiUrY) shows the use of the mainframe can cut the costs of IT operations by around 50 percent. However, with Baby Boomers becoming eligible for retirement, there is a greater concern and need for IT executives to utilize more automated, self-learning software and implement better recruitment, training and outsourcing programs. IT executives should evaluate mainframes as the target server platform for clouds, secure data serving, and other environments where zEnterprise's heterogeneous server ecosystem can be used to share data from a single source and optimize capacity and performance at low cost.

California – Gone Too Far Again

Dec 13, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

California Governor Jerry Brown signed into law Assembly Bill (AB) 1844, which restricts employers' access to employees' social media accounts, and Senate Bill (SB) 1349, which restricts schools' access to students' social media accounts. Due to the overbroad nature of the laws and their definition of social media, enterprises and schools may have difficulty complying while performing their fiduciary responsibilities.

Focal Points:

  • Although both laws expressly claim they are only regulating "social media," the definitions used in the laws go well beyond true social media over the Internet. The statutes use the following definition: "social media" means an electronic service or account, or electronic content, including, but not limited to, videos, still photographs, blogs, video blogs, podcasts, instant and text messages, email, online services or accounts, or Internet Web site profiles or locations. In effect, the laws govern all digital content and activity – whether it is transmitted over the Internet and/or stored on local storage devices or in-house systems.
  • Additionally, AB 1844, which covers employer-employee relationships, restricts employers' access to "personal social media" while allowing business-related access. However, the law does not define what comprises business or personal social media. It assumes that these classifications are mutually exclusive, which is not always the case. There have been multiple lawsuits over the years that have resulted from disagreements between the parties as to the classification of certain emails, files, and other social media.
  • Many organizations inform employees that email and social media activity performed while using the organization's computer systems is open to access and review by the company. Furthermore, some entities have employees sign an annual agreement to such rights. However, the law makes it illegal for employers to ask for login credentials to "personal" accounts and the statute does not allow access to mixed accounts, which supposedly do not exist.

RFG POV: The new California statutes are reminiscent of CA Senate Bill 1386 (SB 1386), which requires any state agency or entity that holds personal information of customers living in the state to disclose any breach of databases containing that personal information, regardless of the business's geographic location. The new laws do more harm than good and allow potential class action civil suits in addition to individual suits. This will make it more difficult for organizations to protect the entity, its image, enterprise data and client/student relationships, and to ensure appropriate conduct guidelines and privacy requirements are being met. In addition, the ambiguities in the wording of the laws leave them open to interpretation, which in turn will eventually lead to lawsuits. Business and IT executives can expect these new laws to extend beyond the borders of the state of California, as did SB 1386. IT executives should review the legislation, discuss with legal advisors all elements of the laws, including the definitions, and explore ways to be proactive with their governance, guidelines and processes to prevent worst-case scenarios from occurring.

Blog: Data Center Optimization Planning

Dec 13, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Every organization should be performing a data center optimization planning effort at least annually. The rate of technology change and the exploding requirements for capacity demand that IT shops challenge their assumptions yearly and revisit best practices to see how they can further optimize their operations. Keeping up with storage capacity requirements on flat budgets is a challenge, given that capacity is growing by 20 to 40 percent annually, and this phenomenon is occurring across the IT landscape. Thus, if IT executives want to transform their operations from spending 70 to 80 percent of their budgets on operations to spending more than half of the budget on development and innovation instead, they must invest in planning that enables such change.

Optimization planning needs to cover all areas of the data center:

  • facilities,
  • finance,
  • governance,
  • IT infrastructure and systems,
  • processes, and
  • staffing.

RFG finds most companies are greatly overspending due to the inefficiencies of continuing along non-optimized paths in each of these areas, which gives them the opportunity to reduce operational expenses by more than 10 percent per year for the next decade. In fact, in some areas more than 20 percent could be shaved off.
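
To put that compounding in perspective, the sketch below shows what a sustained 10 percent annual reduction does over a decade; the $10 million starting operations budget is a hypothetical assumption, not RFG data.

# Illustrative compounding of a 10 percent annual cost reduction over a
# decade, starting from a hypothetical $10M operations budget.

budget = 10_000_000.0
for year in range(1, 11):
    budget *= 0.90  # 10 percent reduction each year
    print(f"Year {year:2d}: ${budget:,.0f}")

# After 10 years the budget is roughly 35 percent of the original,
# i.e., a cumulative reduction of about 65 percent.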

Facilities.  At a high level, the three areas that IT executives should understand, evaluate, and monitor are facilities design and engineering, power usage effectiveness (PUE), and temperature. Most data center facilities were designed to handle the equipment of the previous century. Times and technologies have changed significantly since then, and the design and engineering assumptions and actual implementations need to be reevaluated. In a similar vein, the PUE for most data centers is far from optimized, which could mean overpaying energy bills by more than 40 percent. On the "easy to fix" front, companies can raise their data center temperatures to normal room temperature or higher, with temperatures in the 80° F range being possible. Just about all equipment built today is designed to operate at temperatures greater than 100° F. For every degree raised, organizations can expect to see power costs reduced by up to four percent. Additionally, facilities and IT executives can monitor their greenhouse gas (GHG) emissions, which are frequently tracked by chief sustainability officers and can be used as a measure of savings achieved by IT operational efficiency gains.
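
A rough sketch of how PUE and set-point temperature translate into the energy bill follows. The IT load, electricity rate, and PUE values are illustrative assumptions; the up-to-four-percent-per-degree savings figure comes from the paragraph above.

# Illustrative only: facility energy cost as a function of PUE, plus the
# potential savings from raising the data center set-point temperature.
# IT load, electricity rate, and PUE values are assumptions.

IT_LOAD_KW = 500          # assumed average IT load
RATE_PER_KWH = 0.10       # assumed electricity price ($/kWh)
HOURS_PER_YEAR = 8760

def annual_energy_cost(pue):
    """Total facility draw = IT load x PUE."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR * RATE_PER_KWH

legacy = annual_energy_cost(2.0)      # assumed unoptimized facility
optimized = annual_energy_cost(1.4)   # assumed post-improvement PUE
print(f"Annual cost at PUE 2.0: ${legacy:,.0f}")
print(f"Annual cost at PUE 1.4: ${optimized:,.0f} ({1 - optimized/legacy:.0%} lower)")

# Raising the set point: up to ~4 percent power savings per degree F raised.
degrees_raised = 5
savings = optimized * (1 - (1 - 0.04) ** degrees_raised)
print(f"Raising the set point {degrees_raised}°F could save up to ${savings:,.0f} more per year")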

Finance.  IT costs can be reduced by addressing four key areas: asset management, chargebacks, life cycle management, and procurement. RFG finds many companies are not handling asset management well, which results in paying annually for more hardware and software than is actually needed. Studies have found this excess cost could easily run up to 20 percent of all expenses for end-user devices. The use of chargebacks better ensures IT costs are aligned with user requirements, especially when funding external and internal support services. When it comes to life cycle management, RFG finds too many companies are retaining hardware too long. The optimal life span for servers and storage is 36-40 months; companies that retain this equipment for longer periods can be driving up their overall costs by more than 20 percent. Moreover, the one area that IT consistently fails to understand and underperforms on is procurement. When proper procurement processes and procedures are not followed and standardized, IT can easily spend 50 percent more on hardware, software and services.

Governance.  The reason governance is a key area of focus is that it assures performance targets are established and tracked and that an ongoing continuous improvement program gets the attention it needs. Additionally, governance can ensure that reasonable risk exposure levels are maintained while the transformation is under way.

IT infrastructure and systems.  For each of the IT components – applications, networks, servers, and storage – IT executives should be able to monitor availability, utilization, virtualization, and automation levels. The greater those levels, the fewer human resources are required to support operations, and the more staffing becomes an independent variable rather than one dependent upon the numbers and types of hardware and software used. Companies also frequently fail to match workload types to the infrastructure most optimized to those workloads, resulting in overspend that can reach 15-30 percent of operating costs for those systems.

Processes.  The major process areas that IT management should be tracking are application instances (especially CRM and ERP), capacity management, provisioning (and decommissioning) rates, storage tiers, and service levels. The better a company is at capacity planning (and use of clouds), the lower the cost of operations. The faster the provisioning capability, the fewer human resources are required to support operational changes and the lower the likelihood of downtime due to human error. Additionally, RFG finds the more storage tiers and the more automated the movement of data amongst tiers, the greater the savings. As a rule of thumb, organizations should expect the savings as one moves from tier n to tier n+1 to be 50 percent. In addition to tiering, compression and deduplication are other approaches to storage optimization.
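
To illustrate the tiering rule of thumb, the sketch below computes a blended per-GB cost as data is spread across tiers, assuming each tier costs half as much as the one above it; the tier-0 price and the capacity distribution are hypothetical.

# Illustrative blended storage cost using the rule of thumb above:
# each tier costs roughly 50 percent less per GB than the tier above it.
# The tier-0 price and the capacity mix are hypothetical assumptions.

TIER0_COST_PER_GB = 2.00                 # assumed tier-0 (high-performance) cost
DISTRIBUTION = [0.10, 0.20, 0.30, 0.40]  # assumed share of data on tiers 0..3

def blended_cost_per_gb(tier0_cost, distribution):
    # Tier n costs tier0_cost * 0.5**n per the 50-percent rule of thumb.
    return sum(share * tier0_cost * 0.5 ** tier
               for tier, share in enumerate(distribution))

single_tier = TIER0_COST_PER_GB  # everything kept on tier 0
tiered = blended_cost_per_gb(TIER0_COST_PER_GB, DISTRIBUTION)
print(f"All data on tier 0: ${single_tier:.2f}/GB")
print(f"Tiered placement:   ${tiered:.2f}/GB ({1 - tiered/single_tier:.0%} lower)")

With this sample mix the blended cost drops by roughly two-thirds versus keeping everything on the top tier, which is why automated data movement between tiers matters.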

Staffing.  For most companies today, staffing levels are directly proportional to the number of servers, storage devices, network nodes, etc. The shift to virtualization and automatic orchestration of activities breaks that bond. RFG finds it is now possible for hundreds of servers to be supported by a single administrator and tens to hundreds of terabytes to be handled by a single database administrator. IT executives should also be looking to cross-pollinate staff so that an administrator can support any of the hardware and operating systems.

The above possibilities are what exist today. Technology is constantly improving. The gains will be even greater as time goes on, especially since the technical improvements are more exponential than linear. IT executives should be able to plug these concepts into development of a data center optimization plan and then monitor results on an ongoing basis.

RFG POV: There still remains tremendous waste in the way IT operations are run today. IT executives should be able to reduce costs by more than 40 percent, enabling them to invest more in enhancing current applications and innovation than in keeping the lights on. Moreover, IT executives should be able to cut annual costs by 10 percent per year and potentially keep 40 percent of the savings to invest in self-funding new solutions that can further improve operations. 

Blog: Green Data Centers an Oxymoron

Nov 30, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

The New York Times published "Power, Pollution and the Internet," an article on the dark side of data centers. The report, which was the result of a yearlong investigation, highlights the facts related to the environmental waste and inefficiencies that can be found in the vast majority of data centers around the world. RFG does not contest the facts as presented in the article, but the Times failed to fully recognize all the causes that led to today's environment and the use of poor processes and practices. Therefore, the problem can only be partially fixed – cloud computing notwithstanding – until there is a true transformation in culture and mindset.

New York Times Article

The New York Times enumerated the following energy-related facts about data centers:

  • Most data centers, by design, consume vast amounts of energy
  • Online companies run their facilities 24x7 at maximum capacity regardless of demand
  • Data centers waste 90 percent or more of the electricity they consume
  • Worldwide, digital warehouses use about 30 billion watts of power; the U.S. accounts for 25 to 33 percent of the load
  • McKinsey & Company found that, on average, servers use only six to 12 percent of the electricity they consume on real work; the rest of the time the servers are idle or in standby mode
  • International Data Corp. (IDC) estimates there are now more than three million data centers of varying sizes worldwide
  • U.S. data centers used about 76 billion kWh in 2010, or roughly two percent of all electricity used in the country that year, according to a study by Jonathan G. Koomey.
  • A study by Viridity Software Inc. found in one case that, of 333 servers monitored, more than half were "comatose" – i.e., plugged in and using energy but doing little if any work. Overall, the company found nearly 75 percent of all servers sampled had utilization of less than 10 percent.
  • IT's low utilization "original sin" was the result of relying on software operating systems that crashed too much. Therefore, each system seldom ran more than one application and was always left on.
  • McKinsey's 2012 study currently finds servers run at six to 12 percent utilization, only slightly better than the 2008 results. Gartner Group also finds the typical utilization rates to be in the seven to 12 percent range.
  • In a typical data center, when all power losses are included – infrastructure and IT systems – and combined with the low utilization rates, the energy wasted can be as much as 30 times the amount of electricity used for data processing (see the sketch following this list).
  • In contrast the National Energy Research Scientific Computing Center (NERSCC), which uses server clusters and mainframes at the Lawrence Berkeley National Laboratory (LBNL), ran at 96.4 percent utilization in July.
  • Data centers must have spare capacity and backup so that they can handle traffic surges and provide high levels of availability. IT staff get bonuses for 99.999 percent availability, not for savings on the electric bill, according to an official at the Electric Power Research Institute.
  • In the Virginia area data centers now consume 500 million watts of electricity, and projections are that this will grow to one billion watts over the next five years.
  • Some believe the use of clouds and virtualization may be a solution to this problem; however, other experts disagree.
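
The "30 times" figure above can be approximated as total facility power divided by the power doing useful computation. A back-of-the-envelope sketch follows; the PUE values are assumptions, while the utilization figures come from the studies cited in the list.

# Rough view of the waste multiple: facility watts drawn per watt of
# "real work" done by servers. PUE values are assumed; utilization
# figures reflect the studies cited above.

def waste_multiple(pue, server_utilization):
    """Facility power per unit of useful server work."""
    return pue / server_utilization

print(f"{waste_multiple(pue=1.8, server_utilization=0.06):.0f}x at 6% utilization")
print(f"{waste_multiple(pue=1.8, server_utilization=0.12):.0f}x at 12% utilization")
print(f"{waste_multiple(pue=1.2, server_utilization=0.90):.1f}x for a highly utilized, efficient site")

The third case is consistent with the claim later in this post that power losses can be brought down to less than 1.5 times the electricity used for data processing.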

Facts, Trends and Missed Opportunities

There are two headliners in the article that are buried deep within the text. Contrary to what the article states, the "original sin" was not reliance on buggy software. The issue is much deeper than that, and it was a critical inflection point. And to prove the point, the article notes that the NERSCC obtained a utilization rate of 96.4 percent in July with mainframes and server clusters. Hence, the real story is that mainframes are a more energy-efficient solution and that the default option of putting workloads on distributed servers is not a best practice from a sustainability perspective.

In the 1990s the client server providers and their supporters convinced business and IT executives that the mainframe was dead and that the better solution was the client server generation of distributed processing. The theory was that hardware is cheap but people costs are expensive and therefore, the development productivity gains outweighed the operational flaws within the distributed environment. The mantra was unrelenting over the decade of the 90s and the myth took hold. Over time the story evolved to include the current x86-architected server environment and its operating systems. But now it is turning out that the theory – never verified factually – is falling apart and the quick reference to the 96.4 percent utilization achieved by using mainframes and clusters exposes the myth.

Let's take the key NY Times talking points individually.

  • Data centers do and will consume vast amounts of energy but the curve is bending downward
  • Companies are beginning to learn not to run their facilities at maximum capacity regardless of demand. This change is relatively new and there is a long way to go.
  • Newer technologies – hardware, software and cloud – will enable data centers to reduce waste to less than 20 percent. The average data center today spends more than half of its power consumption on non-IT infrastructure. This can be reduced drastically. Moreover, as the NERSCC shows, it is possible to drive utilization to greater than 90 percent.
  • The multiple data points that found the average server utilization to be in the six to 12 percent range demonstrated the poor utilization enterprises are getting from Unix and Intel servers. Where virtualization has been employed, the utilization rates are up but they still remain less than 30 percent on average. On the other hand, mainframes tend to operate at the 80 to 100 percent utilization level. Moreover, mainframes allow for shared data whereas distributed systems utilize a shared-nothing data model. This means more copies of data on more storage devices which means more energy consumption and inefficient processes.
  • Comatose servers are a distributed processing phenomenon, mostly with Intel servers. Asset management of the huge server farms created by the use of low-cost, single application, scale-out hardware is problematic. The complexity caused by the need for orchestration of the farms has hindered management from effectively managing the data center complex. New tools are constantly coming on board but change is occurring faster than the tools can be applied. As long as massive single-application server farms exist, the problem will remain.
  • Power losses can be reduced from as much as 30 times the electricity used for data processing to less than 1.5 times.
  • The NERSCC utilization achievement would not be possible without mainframes.
  • Over the next five years enterprises will learn how to reduce the spare capacity and backup capabilities of their data centers and rely upon cloud services to handle traffic surges and some of their backup/disaster recovery needs.
  • Most data center staffs are not measured on power usage as most shops do not allocate those costs to the IT budget. Energy consumption is usually charged to facilities departments.
  • If many of the above steps occur, plus use of other processes such as the lease-refresh-scale-up delivery model (vs the buy-hold-scale-out model) and the standardized operations platform model (vs the development selected platform model), then the energy growth curve will be greatly abated, and data centers could potentially end up using less power over time (see the comparison below).

Model philosophies compared across platform approaches:

  • Buy-hold-scale-out with operations standard platforms (cloud): greater standardization and reduced platform sprawl, but more underutilized systems
  • Buy-hold-scale-out with development selected platforms: most expensive
  • Lease-refresh-scale-up with operations standard platforms (cloud): least cost
  • Lease-refresh-scale-up with development selected platforms: greater technical currency, but with platform islands and sprawl

  •  Clouds and virtualization will be one solution to the problem but more is needed, as discussed above.

RFG POV: The mainframe myths have persisted too long and have led to greater complexity, higher data center costs, inefficiencies, and sub-optimization. RFG studies have found that had enterprises kept their data on the mainframe while applications were shifted to other platforms, companies would be far better off than they are today. Savings of up to 50 percent are possible. With future environments evolving to processing and storage nodes connected over multiple networks, it is logical to use zEnterprise solutions to simplify the data environment. IT executives should consider mainframe-architected solutions as one of their targeted environments as well as an approach to private clouds. Moreover, IT executives should discuss the shift to a lease-refresh-scale-up approach with their financial peers to see if and how it might work in their shops.

CIO Ceiling, Social Success and Exposures

Nov 30, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

According to a Gartner Inc. survey, CIOs are not valued as much as other senior executives and most will have hit a glass ceiling. Meanwhile a Spredfast Inc. social engagement index benchmark report finds a brand’s level of social engagement is more influenced by its commitment to social business than its size. In other news, a New York judge forced Twitter Inc. to turn over tweets from one of its users.

Focal Points:

  • Recent Gartner research of more than 200 CEOs globally finds CIOs have a great opportunity to lead innovation in their organization, but they are not valued as strategic advisors by their CEOs, most of whom think they will leave the enterprise. Only five percent of CEOs rated their CIOs as a close strategic advisor while CFOs scored a 60 percent rating and COOs achieved a 40 percent rating. When it comes to innovation, CIOs fared little better – with five percent of CEOs saying IT executives were responsible for managing innovation. Gartner also asked the survey participants where they thought their CIO's future career would lead. Only 18 percent of respondents said they could see them as a future business leader within the organization, while around 40 percent replied that they would stay in the same industry, but at a different firm.
  • Spredfast gathered data from 154 companies and developed a social engagement index benchmark report that highlights key social media trends across brands and assesses the success of social media programs against their peers. The vendor categorized companies into three distinct segments with similar levels of internal and external engagement: Activating, Expanding, and Proliferating. Amongst the findings was that a brand's level of social engagement is more influenced by its commitment to social business than by its size. Social media is also no longer one person's job; on average, about 29 people participate in social programs across 11 business groups and 51 social accounts. Publishing is heavier on Twitter but engagement is higher on Facebook Inc., though what works best for a brand depends on industry and audience. Another key point was that corporate social programs are multi-channel, requiring employees to participate in multiple roles. Additionally, users expect more high-quality content and segmented groups. One shortfall the company pointed out was that companies use social media as an opportunity for brand awareness and reputation but miss the opportunity to convert the exchange into subsequent actions and business.
  • Under protest Twitter surrendered the tweets of an Occupy Wall Street protester, Malcolm Harris, to a Manhattan judge rather than face contempt of court. The case became a media sensation after Twitter notified Harris about prosecutors' demands for his account. Mr. Harris challenged the demand but the judge ruled that he had no standing because the tweets did not belong to him. While the tweets are public statements, Mr. Harris had deleted them. Twitter asserts that users own their tweets and that the ruling is in error. Twitter claims there are two open questions with the ruling: are tweets public documents and who owns them. Twitter is appealing.

RFG POV: For the most part CIOs and senior IT executives have yet to bridge the gap from technologist to strategist and business advisor. One implication here is that IT executives still are unable to understand the business well enough for IT efforts to be aligned with business and corporate needs. An ex-CIO at Kellogg's, when asked what his role was, said, "I sell cereal." Most IT executives do not think that way but need to. Until they do, they will not become strategic advisors, gain a seat at the table, or have an opportunity to move up and beyond IT. The Spredfast report shows that the use of social media has matured and requires attention like any other corporate function. Moreover, to get a decent payback companies have to dedicate resources to keeping the content current and of high quality and to getting users to interact with the company. Thus, social media is no longer just an add-on but must be integrated with business plans and processes. IT executives should play a role in getting users to understand how to utilize social media tools and collaboration so that the enterprise optimizes its returns. The Twitter tale is enlightening in that information posted publicly may not be recalled (if the ruling holds) and can be used in court. RFG has personal experience with that. Years ago, in a dispute with WorldCom, RFG claimed the rates published on its Web site were valid at the time published. The telecom vendor claimed its new postings were applicable and had removed the older rates. When RFG was able to produce the original rate postings, WorldCom backed down. IT executives are finding a number of vendors are writing contracts with terms not written in the contract but posted online. This is an advantage to the vendors and a moving target for users. IT executives should negotiate contracts that have terms and conditions locked in and not changeable at the whim of the vendor. Additionally, enterprises should train staff to be careful about what is posted in external social media. It can cost people their jobs as well as damage the company's financials and reputation.

More Risk Exposures

Nov 30, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Hackers leaked more than one million user account records from over 100 websites, including those of banks and government agencies. Moreover, critical zero-day flaws were found in recently patched Java code, and a SCADA software vendor was charged with default insecurity, including a hidden factory account with a hard-coded password. Meanwhile, millions of websites hosted by the world's largest domain registrar, GoDaddy.com LLC, were knocked offline for a day.

Focal Points:

  • The hacker group, Team GhostShell, raided more than 100 websites and leaked a cache of more than one million user account records. Although the numbers claimed have not been verified, security firm Imperva noted that some breached databases contained more than 30,000 records. Victims of the attack included banks, consulting firms, government agencies, and manufacturing firms. Prominent amongst the data stolen from the banks were personal credit histories and current standing. A large portion of the pilfered files comes from content management systems (CMS), which likely indicates that the hackers exploited the same CMS flaw at multiple websites. Also taken were usernames and passwords. Per Imperva "the passwords show the usual "123456" problem.  However, one law firm implemented an interesting password system where the root password, "law321" was pre-pended with your initials.  So if your name is Mickey Mouse, your password is "mmlaw321".   Worse, the law firm didn't require users to change the password.  Jeenyus!" The group threatened to carry out further attacks and leak more sensitive data.
  • A critical Java security vulnerability that popped up at the end of August leverages two zero-day flaws. Moreover, the revelation comes with news that Oracle knew about the holes as early as April 2012. Microsoft Corp. Windows, Apple Inc. Mac OS X and Linux desktops running multiple browser platforms are all vulnerable to attack. The exploit code first uses a vulnerability to gain access to the restricted sun.awt.SunToolkit class before a second bug is used to disable the SecurityManager, and ultimately to break out of the Java sandbox. The so-called Gondvv exploit leverages a vulnerability introduced in the July 2011 Java 7.0 release; all versions of Java 7 are vulnerable, while older Java 6 versions appear to be immune. Oracle Corp. has yet to issue an advisory on the problem but is studying it; for now the best protection is to disable or uninstall Java in Web browsers. Separately, SafeNet Inc. has tagged a SCADA maker for default insecurity. The firm uncovered a hidden factory account, complete with hard-coded password, in switch management software made by Belden-owned GarrettCom Inc. The Department of Homeland Security's (DHS) ICS-CERT advisory states the vendor's Magnum MNS-6K management application allows an attacker to gain administrative privileges over the application and thereby access to the SCADA switches it manages. The DHS advisory also notes a patch was issued in May that would remove the vulnerability; however, the patch notice did not document the change. The vendor claims 75 of the top 100 power companies as customers.
  • GoDaddy has stated that the daylong DNS outage that downed many of its customers' websites was not caused by a hacker (as claimed by the supposed perpetrator) and was not the result of a DDoS attack at all. Instead, the provider says the downtime was caused by "a series of network events that corrupted router tables." The firm says it has since corrected the elements that triggered the outage and has implemented measures to prevent a similar event from happening again. Customer websites were inaccessible for six hours. GoDaddy claims to have as many as 52 million websites registered but has not disclosed how many of the sites were affected by the outage.

RFG POV: Risk management must be a mandatory part of the process for Web and operational technology (OT) appliances and portals. User requirements come from more places than the user department that requested the functionality; they also come from areas such as audit, legal, risk and security. IT should always ensure those inputs and requirements are met. Unfortunately this "flaw" has been an IT shortfall for decades and it seems new generations keep perpetuating the shortcomings of the past. As to the SCADA bugs, RFG notes that not all utilities are current with the Federal Energy Regulatory Commission (FERC) cyber security requirements or updates, which is a major U.S. exposure. IT executives should be looking to automate the update process so that utility risk exposures are minimized. The GoDaddy outage is one of those unfortunate human errors that will occur regardless of the quality of the processes in place. But it is a reminder that cloud computing brings with it its own risks, which must be probed and evaluated before making a final decision. Unlike internal outages, where IT has control and the ability to fix the problem, users are at the discretion of outsourced sites and the terms and conditions of the contract they signed. In this case GoDaddy not only apologized to its users but offered customers 30 percent across-the-board discounts as part of its apology. Not many providers are so generous. IT executives and procurement staff should look into how vendors responded to their past failures and then ensure the contracts protect them before committing to use such services.

The HP, Oracle, SAP Dance

Aug 29, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Hewlett-Packard Co. announces reorganization and write-downs and gets good news from the courts that it has won its Intel Corp. Itanium lawsuit against Oracle Corp. Oracle must now port its software to Itanium-based servers. In other news, Oracle agreed to a $306 million settlement from SAP AG over their copyright infringement suit. However, the soap opera is not over – Oracle may still push for more.

Focal Points:

  • CEO Meg Whitman, in her continued attempt to turn the company around, is writing down the value of its Enterprise Services business by $8 billion and making management changes. HP paid $13.9 billion to acquire EDS back in 2008.  John Visentin, whom former HP CEO Leo Apotheker anointed to manage the Enterprise Services behemoth a year ago, is leaving the company.  Mike Nefkens, who runs Enterprise Services in the EMEA region, will head the global Enterprise Services group, which is responsible for HP's consulting, outsourcing, application hosting, business process outsourcing, and related services operations. Nefkens, who came from EDS, will report to the CEO but has been given the job on an "acting basis" so more changes lie ahead. In addition, Jean-Jacques Charhon, CFO for Enterprise Services, has been promoted to the COO position and will "focus on increasing customer satisfaction and improving service delivery efficiency, which will help drive profitable growth." HP services sales have barely exceeded one percent growth in the previous two fiscal years. HP further states the goodwill impairment will not impact its cash or the ongoing services business. The company also said its workforce reduction plan, announced earlier this year to eliminate about 27,000 people from its 349,600-strong global workforce, was proceeding ahead of schedule. However, since more employees have accepted the severance offer than expected, HP is increasing the restructuring charge from $1.0 billion to the $1.5-1.7 billion range. On the positive front, HP raised its third-quarter earnings forecast.
  • HP received excellent news from the Superior Court of the State of California when it ruled the contract between HP and Oracle required Oracle to port its software products to HP's Itanium-based servers. HP won on five different counts: 1) Oracle was in breach of contract; 2) the Settlement and Release Agreement entered into by HP, Oracle and Mark Hurd on September 20, 2010, requires Oracle to continue to offer its product suite on HP's Itanium-based server platforms and does not confer on Oracle the discretion to decide whether to do so or not; 3) the terms "product suite" means all Oracle software products that were offered on HP's Itanium-based servers at the time Oracle signed the settlement agreement, including any new releases, versions or updates of those products; 4) Oracle's obligation to continue to offer its products on HP's Itanium-based server platforms lasts until such time as HP discontinues the sales of its Itanium-based servers; and 5) Oracle is required to port its products to HP's Itanium-based servers without charge to HP. Oracle is expected to comply.
  • Oracle said it agreed to accept a $306 million damages settlement from German rival SAP to shortcut the appeals process in the TomorrowNow copyright infringement lawsuit. Oracle sued SAP back in 2007, claiming SAP's TomorrowNow subsidiary illegally downloaded Oracle software and support documents in an effort to pilfer Oracle customers. SAP eventually admitted wrongdoing and shut down the maintenance subsidiary. In November 2010 Oracle had originally won a $1.3 billion damages award, the largest ever granted by a copyright jury, but it was thrown out by the judge, who said Oracle could have $272 million or could ask for a retrial. To prevent another round of full-blown trial costs, the warring technology giants have agreed to the $306 million settlement plus Oracle's legal fees of $120 million; however, Oracle can now ask the appeals court judges to reinstate the $1.3 billion award. SAP stated the settlement is reasonable and the case has dragged on long enough.

RFG POV: HP suffers from its legacy product culture and continues to struggle to integrate services into a cohesive sales strategy. The company does well with the low-level technical services such as outsourcing but has not been able to shift to the higher-margin, strategic consulting services. While the asset write-down was for the EDS acquisition, HP had its own consulting services organization (C&I) that it merged with EDS, and that capability atrophied. It took IBM Corp. more than 10 years to effectively bring its products and services sales groups together (and it is still a work in progress). RFG therefore thinks it will take HP even longer before it can remake its culture to bring Enterprise Services to the level Meg Whitman desires. The HP Itanium win over Oracle should remove a dark cloud from the Integrity server line, but a lot of damage has already been done. HP now has an uphill battle to restore trust and build revenues. IT executives interested in HP's Unix line combined with Oracle software should ensure that the desired software has been or will be ported by the time the enterprise needs it installed. The Oracle-SAP saga just will not go away, as it is likely CEO Larry Ellison enjoys applying legal pressure to SAP (especially since the fees will be paid by the other party). It is a distraction for SAP executives but does not impair ongoing business strategies or plans. Nor will the outcome prevent other third parties from legally offering maintenance services. IT executives should not feel bound to use Oracle for maintenance of its products but should make sure the selected party is capable of providing a quality level of service and is financially sound.

Unnecessary Catastrophic Risk Events

Aug 24, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Knight Capital Group, a financial services firm engaged in market making and trading, lost $440 million when its systems accidentally bought too much stock that it then had to unload at a loss, nearly causing the collapse of the firm. The trading software had gone live without adequate testing. In other news, Wired reporter Mat Honan found his entire digital identity wiped out by hackers who took advantage of security flaws at Amazon.com Inc. and Apple Inc.

Focal Points:

  • Knight Capital – which has handled 11 percent of all U.S. stock trading so far this year – lost $440 million when its newly upgraded systems accidentally bought too much stock that it had to unload at a loss. The system went live without adequate testing. Unfortunately, Knight Capital is not alone in the financial services sector with such a problem. NASDAQ was ill-prepared for the Facebook Inc. IPO, causing losses far in excess of $100 million. UBS alone lost more than $350 million when its systems resent buy orders. In March, BATS, an electronic exchange, pulled its IPO because of problems with its own trading systems.
  • According to a blog post by Mat Honan "in the space of one hour, my entire digital life was destroyed. First my Google account was taken over, then deleted. Next my Twitter account was compromised, and used as a platform to broadcast racist and homophobic messages. And worst of all, my AppleID account was broken into, and my hackers used it to remotely erase all of the data on my iPhone, iPad, and MacBook." His accounts were daisy-chained together and once they got into his Amazon account, it was easy for them to get into his AppleID account and gain control of his Gmail and Twitter accounts. It turns out that the four digits that Amazon considers unimportant enough to display on the Web are precisely the same four digits that Apple considers secure enough to perform identity verification. The hackers used iCloud's "Find My" tool to remotely wipe his iPhone, iPad and then his MacBook within a span of six minutes. Then they deleted his Google account. Mat lost pictures and data he cannot replace but fortunately the hackers did not attempt to go into his financial accounts and rob him of funds.
  • All one initially needs to execute this hack is the individual's email address, billing address, and the last four digits of a credit card number to get into an iCloud account. Apple will then supply the individual who calls about losing his password a temporary password to get access to the account. In this case the hacker got the billing address by doing a "whois" search on his personal domain. One can also look up the information on Spokeo, WhitePages, and PeopleSmart. To get the credit card information the hacker first needed to get into the target's Amazon account. For this he only needed the name on the account, the email address, and the billing address. Once in, he added a bogus credit card number that conforms to the industry's self-check (Luhn) algorithm, illustrated in the sketch below. On a second call to Amazon the hacker claimed to have lost access to the account and used the bogus information in combination with the name and billing address to add a new email address to the account. This allows the hacker to see all the credit cards on file in the account – but just the last four digits, which is all that is needed to hack into one's AppleID account. From there on, the hacker could do whatever he wanted. Wired determined that it was extremely easy to obtain the basic information and hack into accounts. It duplicated the exploit twice in a matter of minutes.
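
The "self-check" mentioned above is the Luhn checksum used for payment card numbers. A minimal sketch of the validation follows; the sample values are standard, publicly known test numbers, not real accounts.

# The Luhn checksum: starting from the rightmost digit, double every
# second digit (subtracting 9 when the result exceeds 9) and require
# the total to be divisible by 10.

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Standard test value (not a real account number).
print(luhn_valid("4111 1111 1111 1111"))   # True
print(luhn_valid("4111 1111 1111 1112"))   # False

Because the check is public and purely mathematical, generating a number that passes it is trivial – which is exactly how the attacker was able to plant a plausible-looking bogus card on the account.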

RFG POV: The brokerage firm software failures were preventable, but executives chose to assume the high risk exposure in pursuit of rapid revenue and profit gains. Use of code that has not been fully tested is not uncommon in the trading community, whereas it is quite rare in the retail banking environment. Thus, the problem is not software or the inability to validate the quality of the code. It is the management culture, governance and processes in place that allow software that is not fully tested to be placed into production. IT executives should recognize the impacts of moving non-vetted code to production and should pursue delivering a high quality of service. Even though the probability of failure may be small, if the risk is high (where you are betting the company or your job), it is time to take steps to reduce the exposure to acceptable levels. In the second case it is worth noting that with more than 94 percent of data now in digital form, commercial, government, and personal data are greatly exposed to hacking attacks by corporate, criminal, individual, or state players. These players are getting more sophisticated over time while businesses trail in their abilities to shore up exposures. Boards of Directors and executives will have to live with the constant risk of exposure, but they can take steps to minimize risks to acceptable levels. Moreover, it is far easier to address the risk and security challenges in-house than it is in the cloud, where the cloud provider has control over the governance, procedures and technologies used to manage risks. IT executives are correct to be concerned about security in cloud computing solutions and it is highly likely that the full risk exposure cannot be known prior to adopting a vendor's solution. Nonetheless, Boards and executives need to vet these systems as best they can, as the risk fiduciary responsibility remains with the user organization and not the vendor.
