
California – Gone Too Far Again

Dec 13, 2012   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

California Governor Jerry Brown signed into law Assembly Bill (AB) 1844, which restricts employers' access to employees' social media accounts, and Senate Bill (SB) 1349, which restricts schools' access to students' social media accounts. Due to the overbroad nature of the laws and their definition of social media, enterprises and schools may have difficulty complying while performing their fiduciary responsibilities.

Focal Points:

  • Although both laws expressly claim they are only regulating "social media," the definitions used in the laws go well beyond true social media over the Internet. The statutes use the following definition: "social media" means an electronic service or account, or electronic content, including, but not limited to, videos, still photographs, blogs, video blogs, podcasts, instant and text messages, email, online services or accounts, or Internet Web site profiles or locations. In effect, the laws govern all digital content and activity – whether it is over the Internet or stored on local storage devices or in-house systems.
  • Additionally, AB 1844, which covers employer-employee relationships, restricts employers' access to "personal social media" while allowing business-related access. However, the law does not define what comprises business or personal social media. It assumes that these classifications are mutually exclusive, which is not always the case. There have been multiple lawsuits over the years that have resulted from disagreements between the parties as to the classification of certain emails, files, and other social media.
  • Many organizations inform employees that email and social media activity performed while using the organization's computer systems is open to access and review by the company. Furthermore, some entities have employees sign an annual agreement to such rights. However, the law makes it illegal for employers to ask for login credentials to "personal" accounts and the statute does not allow access to mixed accounts, which supposedly do not exist.

RFG POV: The new California statutes are reminiscent of CA Senate Bill 1386 (SB 1386), which requires any state agency or entity that holds personal information of customers living in the state to disclose any breach of databases that include personal information, regardless of the business's geographic location. The new laws do more harm than good and allow potential class action civil suits in addition to individual suits. This will make it more difficult for organizations to protect the entity, its image, enterprise data and client/student relationships, and to ensure appropriate conduct guidelines and privacy requirements are being met. In addition, the ambiguities in the wording of the laws leave them open to interpretation, which in turn will eventually lead to lawsuits. Business and IT executives can expect these new laws to extend beyond the borders of the state of California, as did SB 1386. IT executives should review the legislation, discuss with legal advisors all elements of the laws, including the definitions, and explore ways to be proactive with their governance, guidelines and processes to prevent worst case scenarios from occurring.

Blog: Data Center Optimization Planning

Dec 13, 2012   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

Every organization should perform a data center optimization planning effort at least annually. The rate of technology change and the exploding requirements for capacity demand that IT shops challenge their assumptions yearly and revisit best practices to see how they can further optimize their operations. Keeping up with storage capacity requirements on flat budgets is a challenge, given that capacity is growing between 20 and 40 percent annually. This phenomenon is occurring across the IT landscape. Thus, if IT executives want to transform their operations from spending 70-80 percent of their budgets on operations to spending more than half the budget on development and innovation instead, they must invest in planning that enables such change.
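The flat-budget squeeze is easy to quantify. A minimal sketch of the compounding, using the 20-40 percent growth range from the text and an illustrative 100 TB starting estate (the base size is an assumption, not RFG data):

```python
def compounded_capacity(base_tb, annual_growth, years):
    """Capacity after compounding annual growth (e.g. 0.30 = 30 percent/year)."""
    return base_tb * (1 + annual_growth) ** years

# A 100 TB estate growing at the low and high ends of 20-40 percent per year:
low = compounded_capacity(100, 0.20, 5)   # ~248.8 TB
high = compounded_capacity(100, 0.40, 5)  # ~537.8 TB
print(f"After 5 years: {low:.0f}-{high:.0f} TB from a 100 TB base")
```

Even at the low end, a flat budget must fund roughly two and a half times the capacity within five years, which is why optimization planning cannot be skipped.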

Optimization planning needs to cover all areas of the data center:

  • facilities,
  • finance,
  • governance,
  • IT infrastructure and systems,
  • processes, and
  • staffing.

RFG finds most companies are greatly overspending due to the inefficiencies of continuing along non-optimized paths in each of these areas; this gives companies the opportunity to reduce operational expenses by more than 10 percent per year for the next decade. In fact, in some areas more than 20 percent could be shaved off.

Facilities.  At a high level, the three areas that IT executives should understand, evaluate, and monitor are facilities design and engineering, power usage effectiveness (PUE), and temperature. Most data center facilities were designed to handle the equipment of the previous century. Times and technologies have changed significantly since then, and the design and engineering assumptions and actual implementations need to be reevaluated. In a similar vein, the PUE for most data centers is far from optimized, which can mean overpaying energy bills by more than 40 percent. On the "easy to fix" front, companies can raise their data center temperatures to normal room temperature or higher, with temperatures in the 80° F range being possible. Just about all equipment built today is designed to operate at temperatures greater than 100° F. For every degree raised, organizations can expect to see power costs reduced by up to four percent. Additionally, facilities and IT executives can monitor their greenhouse gas (GHG) emissions, which are frequently tracked by chief sustainability officers and can be used as a measure of savings achieved by IT operational efficiency gains.
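The temperature arithmetic above can be sketched as follows. The up-to-four-percent-per-degree figure comes from the text; the annual power bill and the size of the set-point change are illustrative assumptions:

```python
def cooling_savings(annual_power_cost, degrees_raised, pct_per_degree=0.04):
    """Upper-bound annual power cost saved by raising the temperature set point.
    pct_per_degree: up to 4 percent per degree F, per the text."""
    saved_fraction = min(degrees_raised * pct_per_degree, 1.0)
    return annual_power_cost * saved_fraction

# Raising a data center from 70°F to 78°F against an assumed $1M annual power bill:
print(f"Up to ${cooling_savings(1_000_000, 8):,.0f} saved per year")
```

The point of the sketch is that even a modest set-point change compounds into six-figure savings at typical facility power budgets.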

Finance.  IT costs can be reduced through attention to four key areas: asset management, chargebacks, life cycle management, and procurement. RFG finds many companies are not handling asset management well, with the result that they pay annually for excess hardware and software. Studies have found this excess cost can easily run to 20 percent of all expenses for end-user devices. The use of chargebacks better ensures IT costs are aligned with user requirements; this especially comes into play when funding external and internal support services. When it comes to life cycle management, RFG finds too many companies are retaining hardware too long. The optimal life span for servers and storage is 36-40 months; companies that retain this equipment for longer periods can drive up their overall costs by more than 20 percent. Moreover, the one area that IT consistently fails to understand and underperforms on is procurement. When proper procurement processes and procedures are not followed and standardized, IT can easily spend 50 percent more on hardware, software and services.

Governance.  The reason governance is a key area of focus is that governance assures performance targets are established and tracked and that an ongoing continuous improvement program is getting the attention it needs. Additionally, governance can ensure that the reasonable risk exposure levels are maintained while the transformation is ongoing.

IT infrastructure and systems.  For each of the IT components – applications, networks, servers, and storage – IT executives should be able to monitor availability, utilization levels, and virtualization levels as well as automation levels. The higher these levels, the fewer human resources are required to support the operations, and the more staffing becomes an independent variable rather than one dependent upon the numbers and types of hardware and software used. Companies also frequently fail to match workload types to the infrastructure most optimized for those workloads, resulting in overspend that can reach 15-30 percent of operating costs for those systems.

Processes.  The major process areas that IT management should be tracking are application instances (especially CRM and ERP), capacity management, provisioning (and decommissioning) rates, storage tiers, and service levels. The better a company is at capacity planning (and use of clouds), the lower the cost of operations. The faster the provisioning capability, the fewer human resources are required to support operational changes and the lower the likelihood of downtime due to human error. Additionally, RFG finds that the more storage tiers and the more automated the movement of data amongst tiers, the greater the savings. As a rule of thumb, organizations should expect savings of 50 percent as data moves from tier n to tier n+1. In addition to tiering, compression and deduplication are other approaches to storage optimization.
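The tiering rule of thumb is straightforward to model. A sketch assuming the text's 50 percent savings per tier step; the tier 1 cost per TB and the data mix across tiers are illustrative assumptions:

```python
def blended_cost_per_tb(tier1_cost, tb_by_tier):
    """Blended $/TB when each successive tier costs 50 percent of the one
    above it (the rule of thumb in the text). tb_by_tier[0] is tier 1."""
    total_cost = sum(tb * tier1_cost * 0.5 ** i for i, tb in enumerate(tb_by_tier))
    return total_cost / sum(tb_by_tier)

# Assumed $3,000/TB tier 1; 500 TB spread 10/20/30/40 percent across 4 tiers:
all_tier1 = blended_cost_per_tb(3000, [500])
tiered    = blended_cost_per_tb(3000, [50, 100, 150, 200])
print(f"All tier 1: ${all_tier1:,.0f}/TB vs. tiered: ${tiered:,.0f}/TB")
```

Under these assumptions the blended cost drops from $3,000/TB to $975/TB, roughly a two-thirds reduction, which is why automated data movement between tiers pays for itself.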

Staffing.  For most companies today, staffing levels are directly proportional to the number of servers, storage devices, network nodes, etc. The shift to virtualization and automated orchestration of activities breaks that bond. RFG finds it is now possible for hundreds of servers to be supported by a single administrator and tens to hundreds of terabytes to be handled by a single database administrator. IT executives should also be looking to cross-pollinate staff so that an administrator can support any of the hardware and operating systems.

The above possibilities are what exist today. Technology is constantly improving. The gains will be even greater as time goes on, especially since the technical improvements are more exponential than linear. IT executives should be able to plug these concepts into development of a data center optimization plan and then monitor results on an ongoing basis.

RFG POV: There still remains tremendous waste in the way IT operations are run today. IT executives should be able to reduce costs by more than 40 percent, enabling them to invest more in enhancing current applications and innovation than in keeping the lights on. Moreover, IT executives should be able to cut annual costs by 10 percent per year and potentially keep 40 percent of the savings to invest in self-funding new solutions that can further improve operations. 
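The 10-percent-per-year claim compounds over the decade. A minimal sketch of that trajectory; the starting budget is an illustrative assumption:

```python
def opex_after_cuts(budget, annual_cut, years):
    """Operating budget remaining after compounding annual reductions."""
    return budget * (1 - annual_cut) ** years

# An assumed $10M ops budget cut 10 percent per year for a decade:
remaining = opex_after_cuts(10_000_000, 0.10, 10)
print(f"${remaining:,.0f} remaining, a {1 - remaining / 10_000_000:.0%} total reduction")
```

Compounded, ten years of 10 percent cuts leave roughly a third of the original operating spend, which is consistent with the more-than-40-percent reduction cited above even if only part of each year's savings is actually captured.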

Blog: Green Data Centers an Oxymoron

Nov 30, 2012   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

The New York Times published "Power, Pollution and the Internet," an article on the dark side of data centers. The report, which was the result of a yearlong investigation, highlights the facts related to the environmental waste and inefficiencies that can be found in the vast majority of data centers around the world. RFG does not contest the facts as presented in the article, but the Times failed to fully recognize all the causes that led to today's environment and the use of poor processes and practices. Therefore, the problem can only be partially fixed – cloud computing notwithstanding – until there is a true transformation in culture and mindset.

New York Times Article

The New York Times enumerated the following energy-related facts about data centers:

  • Most data centers, by design, consume vast amounts of energy
  • Online companies run their facilities 24x7 at maximum capacity regardless of demand
  • Data centers waste 90 percent or more of the electricity they consume
  • Worldwide digital warehouses use about 30 billion watts of energy; U.S. accounts for 25 to 33 percent of the load
  • McKinsey & Company found servers use only six to 12 percent of their power consumption on real work, on average; the rest of the time the servers are idle or in standby mode
  • International Data Corp. (IDC) estimates there are now more than three million data centers of varying sizes worldwide
  • U.S. data centers used about 76 billion kWh in 2010, or roughly two percent of all electricity used in the country that year, according to a study by Jonathan G. Koomey.
  • A study by Viridity Software Inc. found in one case that of 333 servers monitored, more than half were "comatose" – i.e., plugged in, using energy, but doing little if any work. Overall, the company found nearly 75 percent of all servers sampled had a utilization of less than 10 percent.
  • IT's low utilization "original sin" was the result of relying on software operating systems that crashed too much. Therefore, each system seldom ran more than one application and was always left on.
  • McKinsey's 2012 study currently finds servers run at six to 12 percent utilization, only slightly better than the 2008 results. Gartner Group also finds the typical utilization rates to be in the seven to 12 percent range.
  • In a typical data center when all power losses are included – infrastructure and IT systems – and combined with the low utilization rates, the energy wasted can be as much as 30 times the amount of electricity used for data processing.
  • In contrast the National Energy Research Scientific Computing Center (NERSCC), which uses server clusters and mainframes at the Lawrence Berkeley National Laboratory (LBNL), ran at 96.4 percent utilization in July.
  • Data centers must have spare capacity and backup so that they can handle traffic surges and provide high levels of availability. IT staff get bonuses for 99.999 percent availability, not for savings on the electric bill, according to an official at the Electric Power Research Institute.
  • In the Virginia area data centers now consume 500 million watts of electricity and projections are that this will grow to one billion over the next five years.
  • Some believe the use of clouds and virtualization may be a solution to this problem; however, other experts disagree.
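The 30x figure in the list above follows directly from combining facility overhead (PUE, total facility power divided by IT power) with server utilization. A sketch of that arithmetic, using an assumed PUE of 1.8 and the six percent utilization floor reported by McKinsey:

```python
def waste_multiple(pue, utilization):
    """Total facility electricity consumed per unit of useful computing.
    pue: total facility power / IT equipment power.
    utilization: fraction of IT power spent on real work."""
    return pue / utilization

# An assumed PUE of 1.8 with servers doing real work 6 percent of the time:
multiple = waste_multiple(1.8, 0.06)
print(f"{multiple:.0f}x the electricity needed for the computing actually done")
```

The same function shows the leverage of fixing both factors at once: a PUE of 1.2 at 80 percent utilization yields a multiple of 1.5, the target mentioned later in this post.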

Facts, Trends and Missed Opportunities

There are two headliners in the article that are buried deep within the text. The "original sin" was not relying on buggy software as stated. The issue is much deeper than that and it was a critical inflection point. And to prove the point the author states the NERSCC obtains utilization rates of 96.4 percent in July with mainframes and server clusters. Hence, the real story is that mainframes are a more energy efficient solution and the default option of putting workloads on distributed servers is not a best practice from a sustainability perspective.

In the 1990s the client server providers and their supporters convinced business and IT executives that the mainframe was dead and that the better solution was the client server generation of distributed processing. The theory was that hardware is cheap but people costs are expensive and therefore, the development productivity gains outweighed the operational flaws within the distributed environment. The mantra was unrelenting over the decade of the 90s and the myth took hold. Over time the story evolved to include the current x86-architected server environment and its operating systems. But now it is turning out that the theory – never verified factually – is falling apart and the quick reference to the 96.4 percent utilization achieved by using mainframes and clusters exposes the myth.

Let's take the key NY Times talking points individually.

  • Data centers do and will consume vast amounts of energy but the curve is bending downward
  • Companies are beginning to learn not to run their facilities at maximum capacity regardless of demand. This change is relatively new and there is a long way to go.
  • Newer technologies – hardware, software and cloud – will enable data centers to reduce waste to less than 20 percent. The average data center today spends more than half of its power on non-IT infrastructure. This can be reduced drastically. Moreover, as the NERSCC shows, it is possible to drive utilization to greater than 90 percent.
  • The multiple data points that found the average server utilization to be in the six to 12 percent range demonstrated the poor utilization enterprises are getting from Unix and Intel servers. Where virtualization has been employed, the utilization rates are up but they still remain less than 30 percent on average. On the other hand, mainframes tend to operate at the 80 to 100 percent utilization level. Moreover, mainframes allow for shared data whereas distributed systems utilize a shared-nothing data model. This means more copies of data on more storage devices which means more energy consumption and inefficient processes.
  • Comatose servers are a distributed processing phenomenon, mostly with Intel servers. Asset management of the huge server farms created by the use of low-cost, single application, scale-out hardware is problematic. The complexity caused by the need for orchestration of the farms has hindered management from effectively managing the data center complex. New tools are constantly coming on board but change is occurring faster than the tools can be applied. As long as massive single-application server farms exist, the problem will remain.
  • Power losses can be reduced from as much as 30 times the power used for computing to less than 1.5 times.
  • The NERSCC utilization achievement would not be possible without mainframes.
  • Over the next five years enterprises will learn how to reduce the spare capacity and backup capabilities of their data centers and rely upon cloud services to handle traffic surges and some of their backup/disaster recovery needs.
  • Most data center staffs are not measured on power usage as most shops do not allocate those costs to the IT budget. Energy consumption is usually charged to facilities departments.
  • If many of the above steps occur, plus use of other processes such as the lease-refresh-scale-up delivery model (vs. the buy-hold-scale-out model) and the standardized operations platform model (vs. the development selected platform model), then the energy growth curve will be greatly abated, and data centers could potentially end up using less power over time.

The two platform models compare as follows:

  • Operations standard platforms (cloud): least cost; greater standardization and reduced platform sprawl, but more underutilized systems
  • Development selected platforms: most expensive; greater technical currency, but with platform islands and sprawl

These platform models pair with the two delivery model philosophies: buy-hold-scale-out and lease-refresh-scale-up.
  • Clouds and virtualization will be one solution to the problem, but more is needed, as discussed above.

RFG POV: The mainframe myths have persisted too long and have led to greater complexity, higher data center costs, inefficiencies, and sub-optimization. RFG studies have found that had enterprises kept their data on the mainframe while applications were shifted to other platforms, companies would be far better off than they are today. Savings of up to 50 percent are possible. With future environments evolving to processing and storage nodes connected over multiple networks, it is logical to use zEnterprise solutions to simplify the data environment. IT executives should consider mainframe-architected solutions as one of their targeted environments as well as an approach to private clouds. Moreover, IT executives should discuss the shift to a lease-refresh-scale-up approach with their financial peers to see if and how it might work in their shops.

CIO Ceiling, Social Success and Exposures

Nov 30, 2012   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

According to a Gartner Inc. survey, CIOs are not valued as much as other senior executives and most will have hit a glass ceiling. Meanwhile a Spredfast Inc. social engagement index benchmark report finds a brand’s level of social engagement is more influenced by its commitment to social business than its size. In other news, a New York judge forced Twitter Inc. to turn over tweets from one of its users.

Focal Points:

  • Recent Gartner research of more than 200 CEOs globally finds CIOs have a great opportunity to lead innovation in their organizations, but they are not valued as strategic advisors by their CEOs, most of whom think they will leave the enterprise. Only five percent of CEOs rated their CIOs as a close strategic advisor, while CFOs scored a 60 percent rating and COOs achieved a 40 percent rating. When it comes to innovation, CIOs fared no better – with five percent of CEOs saying IT executives were responsible for managing innovation. Gartner also asked the survey participants where they thought their CIO's future career would lead. Only 18 percent of respondents said they could see them as a future business leader within the organization, while around 40 percent replied that they would stay in the same industry, but at a different firm.
  • Spredfast gathered data from 154 companies and developed a social engagement index benchmark report that highlights key social media trends across brands and assesses the success of social media programs against their peers. The vendor categorized companies into three distinct segments with similar levels of internal and external engagement: Activating, Expanding, and Proliferating. Amongst the findings was that a brand's level of social engagement is more influenced by its commitment to social business than by its size. Social media is also no longer one person's job: on average about 29 people participate in social programs across 11 business groups and 51 social accounts. Publishing is heavier on Twitter while engagement is higher on Facebook, but what works best for a brand depends on industry and audience. Another key point was that corporate social programs are multi-channel, requiring employees to participate in multiple roles. Additionally, users expect more high-quality content and segmented groups. One shortfall the company pointed out was that companies use social media as an opportunity for brand awareness and reputation but miss the opportunity to convert the exchange into subsequent actions and business.
  • Under protest Twitter surrendered the tweets of an Occupy Wall Street protester, Malcolm Harris, to a Manhattan judge rather than face contempt of court. The case became a media sensation after Twitter notified Harris about prosecutors' demands for his account. Mr. Harris challenged the demand but the judge ruled that he had no standing because the tweets did not belong to him. While the tweets are public statements, Mr. Harris had deleted them. Twitter asserts that users own their tweets and that the ruling is in error. Twitter claims there are two open questions with the ruling: are tweets public documents and who owns them. Twitter is appealing.

RFG POV: For the most part CIOs and senior IT executives have yet to bridge the gap from technologist to strategist and business advisor. One implication here is that IT executives still are unable to understand the business well enough that IT efforts are aligned with business and corporate needs. To quote an ex-CIO at Kellogg's who, when asked what his role was, said, "I sell cereal." Most IT executives do not think that way but need to. Until they do, they will not become strategic advisors, gain a seat at the table, or have an opportunity to move up and beyond IT. The Spredfast report shows that use of social media has matured and requires attention like any other corporate function. Moreover, to get a decent payback, companies have to dedicate resources to keeping the content current and of high quality and to getting users to interact with the company. Thus, social media is no longer just an add-on but must be integrated with business plans and processes. IT executives should play a role in getting users to understand how to utilize social media tools and collaboration so that the enterprise optimizes its returns. The Twitter tale is enlightening in that information posted publicly may not be recalled (if the ruling holds) and can be used in court. RFG has personal experience with that. Years ago, in a dispute with WorldCom, RFG claimed the rates published on its Web site were valid at the time published. The telecom vendor claimed its new postings were applicable and had removed the older rates. When RFG was able to produce the original rate postings, WorldCom backed down. IT executives are finding a number of vendors are writing contracts with terms not written in the contract but posted online. This is an advantage to the vendors and a moving target for users. IT executives should negotiate contracts that have terms and conditions locked in and not changeable at the whim of the vendor.
Additionally, enterprises should train staff to be careful about what is posted in external social media. It can cost people their jobs as well as damage the company's financials and reputation.

More Risk Exposures

Nov 30, 2012   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

Hackers leaked more than one million user account records from over 100 websites, including those of banks and government agencies. Moreover, critical zero-day flaws were found in recently-patched Java code, and a SCADA software vendor was charged with default insecurity, including a hidden factory account with a hard-coded password. Meanwhile, millions of websites hosted by the world's largest domain registrar, GoDaddy.com LLC, were knocked offline for a day.

Focal Points:

  • The hacker group, Team GhostShell, raided more than 100 websites and leaked a cache of more than one million user account records. Although the numbers claimed have not been verified, security firm Imperva noted that some breached databases contained more than 30,000 records. Victims of the attack included banks, consulting firms, government agencies, and manufacturing firms. Prominent amongst the data stolen from the banks were personal credit histories and current standing. A large portion of the pilfered files comes from content management systems (CMS), which likely indicates that the hackers exploited the same CMS flaw at multiple websites. Also taken were usernames and passwords. Per Imperva, "the passwords show the usual '123456' problem. However, one law firm implemented an interesting password system where the root password, 'law321,' was pre-pended with your initials. So if your name is Mickey Mouse, your password is 'mmlaw321.' Worse, the law firm didn't require users to change the password. Jeenyus!" The group threatened to carry out further attacks and leak more sensitive data.
  • A critical Java security vulnerability that popped up at the end of August leverages two zero-day flaws, and the revelation comes with news that Oracle knew about the holes as early as April 2012. Microsoft Corp. Windows, Apple Inc. Mac OS X and Linux desktops running multiple browser platforms are all vulnerable to attack. The exploit code first uses one vulnerability to gain access to the restricted sun.awt.SunToolkit class before a second bug is used to disable the SecurityManager and ultimately break out of the Java sandbox. Those that have not patched against the so-called Gondvv exploit, which was introduced in the July 2011 Java 7.0 release, are at risk, since all versions of Java 7 are vulnerable; notably, older Java 6 versions appear to be immune. Oracle Corp. has yet to issue an advisory on the problem but is studying it; for now the best protection is to disable or uninstall Java in Web browsers. Separately, SafeNet Inc. has tagged a SCADA maker for default insecurity. The firm uncovered a hidden factory account, complete with hard-coded password, in switch management software made by Belden-owned GarrettCom Inc. The Department of Homeland Security's (DHS) ICS-CERT advisory states the vendor's Magnum MNS-6K management application allows an attacker to gain administrative privileges over the application and thereby access to the SCADA switches it manages. The DHS advisory also notes that a patch issued in May would remove the vulnerability; however, the patch notice did not document the change. The vendor claims 75 of the top 100 power companies as customers.
  • GoDaddy has stated the daylong DNS outage that downed many of its customers' websites was not caused by a hacker (as claimed by the supposed perpetrator) and was not the result of a DDoS attack at all. Instead the provider claims the downtime was caused by "a series of network events that corrupted router tables." The firm says that it has since corrected the elements that triggered the outage and has implemented measures to prevent a similar event from happening again. Customer websites were inaccessible for six hours. GoDaddy claims to have as many as 52 million websites registered but has not disclosed how many of the sites were affected by the outage.

RFG POV: Risk management must be a mandatory part of the process for Web and operational technology (OT) appliances and portals. User requirements come from more places than the user department that requested the functionality; they also come from areas such as audit, legal, risk and security. IT should always ensure those inputs and requirements are met. Unfortunately this "flaw" has been an IT shortfall for decades, and it seems new generations keep perpetuating the shortcomings of the past. As to the SCADA bugs, RFG notes that not all utilities are current with the Federal Energy Regulatory Commission (FERC) cyber security requirements or updates, which is a major U.S. exposure. IT executives should be looking to automate the update process so that utility risk exposures are minimized. The GoDaddy outage is one of those unfortunate human errors that will occur regardless of the quality of the processes in place. But it is a reminder that cloud computing brings with it its own risks, which must be probed and evaluated before making a final decision. Unlike internal outages where IT has control and the ability to fix the problem, users are at the discretion of outsourced sites and the terms and conditions of the contract they signed. In this case GoDaddy not only apologized to its users but offered customers 30 percent across-the-board discounts as part of its apology. Not many providers are so generous. IT executives and procurement staff should look into how vendors responded to their past failures and then ensure the contracts protect them before committing to use such services.

The HP, Oracle, SAP Dance

Aug 29, 2012   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

Hewlett-Packard Co. announces a reorganization and write-downs and gets good news from the courts: it has won its Intel Corp. Itanium lawsuit against Oracle Corp. Oracle must now port its software to Itanium-based servers. In other news, Oracle agreed to a $306 million settlement from SAP AG over their copyright infringement suit. However, the soap opera is not over – Oracle may still push for more.

Focal Points:

  • CEO Meg Whitman, in her continued attempt to turn the company around, is writing down the value of its Enterprise Services business by $8 billion and making management changes. HP paid $13.9 billion to acquire EDS back in 2008.  John Visentin, whom former HP CEO Leo Apotheker anointed to manage the Enterprise Services behemoth a year ago, is leaving the company.  Mike Nefkens, who runs Enterprise Services in the EMEA region, will head the global Enterprise Services group, which is responsible for HP's consulting, outsourcing, application hosting, business process outsourcing, and related services operations. Nefkens, who came from EDS, will report to the CEO but has been given the job on an "acting basis" so more changes lie ahead. In addition, Jean-Jacques Charhon, CFO for Enterprise Services, has been promoted to the COO position and will "focus on increasing customer satisfaction and improving service delivery efficiency, which will help drive profitable growth." HP services sales have barely exceeded one percent growth in the previous two fiscal years. HP further states the goodwill impairment will not impact its cash or the ongoing services business. The company also said its workforce reduction plan, announced earlier this year to eliminate about 27,000 people from its 349,600-strong global workforce, was proceeding ahead of schedule. However, since more employees have accepted the severance offer than expected, HP is increasing the restructuring charge from $1.0 billion to the $1.5-1.7 billion range. On the positive front, HP raised its third-quarter earnings forecast.
  • HP received excellent news from the Superior Court of the State of California when it ruled the contract between HP and Oracle required Oracle to port its software products to HP's Itanium-based servers. HP won on five different counts: 1) Oracle was in breach of contract; 2) the Settlement and Release Agreement entered into by HP, Oracle and Mark Hurd on September 20, 2010, requires Oracle to continue to offer its product suite on HP's Itanium-based server platforms and does not confer on Oracle the discretion to decide whether to do so or not; 3) the term "product suite" means all Oracle software products that were offered on HP's Itanium-based servers at the time Oracle signed the settlement agreement, including any new releases, versions or updates of those products; 4) Oracle's obligation to continue to offer its products on HP's Itanium-based server platforms lasts until such time as HP discontinues the sales of its Itanium-based servers; and 5) Oracle is required to port its products to HP's Itanium-based servers without charge to HP. Oracle is expected to comply.
  • Oracle said it agreed to accept a $306 million damages settlement from German rival SAP to shortcut the appeals process in the TomorrowNow copyright infringement lawsuit. Oracle sued SAP back in 2007, claiming SAP's TomorrowNow subsidiary illegally downloaded Oracle software and support documents in an effort to pilfer Oracle customers. SAP eventually admitted wrongdoing and shut down the maintenance subsidiary. In November 2010, Oracle had originally won a $1.3 billion damages award, the largest ever granted by a copyright jury, but it was thrown out by the judge, who said Oracle could accept $272 million or ask for a retrial. To prevent another round of full-blown trial costs, the warring technology giants have agreed to the $306 million settlement plus Oracle's legal fees of $120 million; however, Oracle can now ask the appeals court judges to reinstate the $1.3 billion award. SAP stated the settlement is reasonable and the case has dragged on long enough.

RFG POV: HP suffers from its legacy product culture and continues to struggle to integrate services into a cohesive sales strategy. The company does well with the low-level technical services such as outsourcing but has not been able to shift to the higher margin, strategic consulting services. While the asset write-down was for the EDS acquisition, HP had its own consulting services organization (C&I) that it merged with EDS and atrophied. It took IBM Corp. more than 10 years to effectively bring its products and services sales groups together (it is still a work in progress). RFG therefore thinks it will take HP even longer before it can remake its culture to bring Enterprise Services to the level Meg Whitman desires. The HP Itanium win over Oracle should remove a dark cloud from the Integrity server line but a lot of damage has already been done. HP now has an uphill battle to restore trust and build revenues. IT executives interested in HP's Unix line combined with Oracle software should ensure that the desired software has been or will be ported by the time the enterprise needs it installed. The Oracle SAP saga just will not go away, as it is likely CEO Larry Ellison enjoys applying legal pressure to SAP (especially since the fees will be paid by the other party). It is a distraction for SAP executives but does not impair ongoing business strategies or plans. Nor will the outcome prevent other third parties from legally offering maintenance services. IT executives should not feel bound to use Oracle for maintenance of its products but should make sure the selected party is capable of providing a quality level of service and is financially sound.  

Unnecessary Catastrophic Risk Events

Aug 24, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Knight Capital Group, a financial services firm engaged in market making and trading, lost $440 million when its systems accidentally bought too much stock that it then had to unload at a loss, nearly causing the collapse of the firm. The trading software had gone live without adequate testing. In other news, Wired reporter Mat Honan had his entire digital identity wiped out by hackers who took advantage of security flaws at Amazon.com Inc. and Apple Inc.

Focal Points:

  • Knight Capital – which has handled 11 percent of all U.S. stock trading so far this year – lost $440 million when its newly upgraded systems accidentally bought too much stock that it had to unload at a loss. The system went live without adequate testing. Unfortunately, Knight Capital is not alone in the financial services sector with such a problem. NASDAQ was ill-prepared for the Facebook Inc. IPO, causing losses far in excess of $100 million. UBS alone lost more than $350 million when its systems resent buy orders. In March, BATS, an electronic exchange, pulled its IPO because of problems with its own trading systems.
  • According to a blog post by Mat Honan "in the space of one hour, my entire digital life was destroyed. First my Google account was taken over, then deleted. Next my Twitter account was compromised, and used as a platform to broadcast racist and homophobic messages. And worst of all, my AppleID account was broken into, and my hackers used it to remotely erase all of the data on my iPhone, iPad, and MacBook." His accounts were daisy-chained together and once they got into his Amazon account, it was easy for them to get into his AppleID account and gain control of his Gmail and Twitter accounts. It turns out that the four digits that Amazon considers unimportant enough to display on the Web are precisely the same four digits that Apple considers secure enough to perform identity verification. The hackers used iCloud's "Find My" tool to remotely wipe his iPhone, iPad and then his MacBook within a span of six minutes. Then they deleted his Google account. Mat lost pictures and data he cannot replace but fortunately the hackers did not attempt to go into his financial accounts and rob him of funds.
  • All one initially needs to execute this hack is the individual's email address, billing address, and the last four digits of a credit card number to get into an iCloud account. Apple will then supply the individual who calls about losing his password a temporary password to get access into the account. In this case the hacker got the billing address by doing a "whois" search on his personal domain. One can also look up the information on Spokeo, WhitePages, and PeopleSmart. To get the credit card information the hacker first needed to get into the target's Amazon account. For this he only needed the name on the account, email address, and the billing address. Once in, he added a bogus credit card number that conforms to the industry's self-check algorithm. On a second call to Amazon the hacker claimed to have lost access to the account and used the bogus information in combination with the name and billing address to add a new email address to the account. This allowed the hacker to see all the credit cards on file in the account – but just the last four digits, which is all that is needed to hack into one's AppleID account. From there on, the hacker could do whatever he wanted. Wired determined that it was extremely easy to obtain the basic information and hack into accounts. It duplicated the exploit twice in a matter of minutes.
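The "self-check algorithm" referenced above is, in all likelihood, the Luhn checksum that payment card numbers carry (specified in ISO/IEC 7812). It is worth seeing how trivial it is: the check catches typos, not fraud, so any attacker can construct a passing number – which is exactly what the Amazon exploit relied on. A minimal sketch (the function name is illustrative):

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum
    that payment card numbers use to self-validate."""
    digits = [int(d) for d in card_number if d.isdigit()]
    # Walking right to left, double every second digit and
    # subtract 9 whenever the doubled value exceeds 9.
    for i in range(len(digits) - 2, -1, -2):
        doubled = digits[i] * 2
        digits[i] = doubled - 9 if doubled > 9 else doubled
    # Valid numbers sum to a multiple of 10.
    return sum(digits) % 10 == 0
```

Because the checksum is public and reversible, passing it proves nothing about whether a card actually exists – a point Amazon's account-recovery process evidently missed.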

RFG POV: The brokerage firm software failures were preventable, but executives chose to assume the high risk exposure in pursuit of rapid revenue and profit gains. Use of code that has not been fully tested is not uncommon in the trading community, whereas it is quite rare in the retail banking environment. Thus, the problem is not the software or an inability to validate the quality of the code; it is the management culture, governance, and processes in place that allow software that is not fully tested to be put into production. IT executives should recognize the impacts of moving non-vetted code to production and should pursue delivering a high quality of service. Even though the probability of failure may be small, if the risk is high (where you are betting the company or your job), it is time to take steps to reduce the exposure to acceptable levels. In the second case it is worth noting that, with more than 94 percent of data now in digital form, commercial, government, and personal data are greatly exposed to hacking attacks by corporate, criminal, individual, or state players. These players are getting more sophisticated over time while businesses trail in their abilities to shore up exposures. Boards of Directors and executives will have to live with the constant risk of exposure, but they can take steps to minimize risks to acceptable levels. Moreover, it is far easier to address the risk and security challenges in-house than in the cloud, where the cloud provider has control over the governance, procedures, and technologies used to manage risks. IT executives are correct to be concerned about security in cloud computing solutions, and it is highly likely that the full risk exposure cannot be known prior to adopting a vendor's solution. Nonetheless, Boards and executives need to vet these systems as best they can, as the fiduciary responsibility for risk remains with the user organization and not the vendor.

Progress – Slow Going

Aug 13, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

According to Uptime Institute's recently released 2012 Data Center Industry Survey, enterprises are lukewarm about sustainability whereas a report released by MeriTalk finds federal executives see IT as a cost and not as part of the solution. In other news, the latest IQNavigator Inc. temporary worker index shows temporary labor rates are slowly rising in the U.S.

Focal Points:

  • According to Uptime Institute's recently released 2012 Data Center Industry Survey, more than half of the enterprise respondents stated energy savings were important but few have financial incentives in place to drive change. Only 20 percent of the organizations' IT departments pay the data center power bill; corporate real estate or facilities is the primary payer. In Asia it is worse: only 10 percent of IT departments pay for power. When it comes to an interest in pursuing a green certification for current or future data centers, slightly less than 50 percent were interested. Twenty-nine percent of organizations do not measure power usage effectiveness (PUE); for environments with 500 servers or fewer, nearly half do not measure PUE. Of those that do, more precise measurement methods are being employed this year than last. The average global, self-reported PUE from the survey was between 1.8 and 1.89. Nine percent of the respondents reported a PUE of 2.5 or greater while 10 percent claimed a PUE of 1.39 or less. Precision cooling strategies are improving but there remains a long way to go. Almost one-third of respondents monitor temperatures at the room level while only 16 percent check them at the most relevant location: the server inlet. Only one-third of respondents cited that their firms have adopted tools to identify underutilized servers and devices.
  • A survey of 279 non-IT federal executives by MeriTalk, an online community and resource for government IT, finds more than half of the respondents said their top priorities include streamlining business processes. Nearly 40 percent of the executives cited cutting waste as their most important mission, and 32 percent said increasing accountability placed first on their to-do list. Moreover, less than half of the executives think of IT as an opportunity versus a cost while 56 percent stated IT helps support their daily operations. Even worse, less than 25 percent of the executives feel IT lends them a hand in providing analytics to support business decisions, saving money and increasing efficiency, or improving constituent processes or services. On the other hand, 95 percent of federal executives agree their agency could see substantial savings with IT modernization.
  • IQNavigator, a contingent workforce software and managed service provider, released its second quarter 2012 temporary worker rate change index for the U.S. Overall, the national rate trend for 2012 has been slowly rising and now sits five percentage points above the January 2008 baseline. However, the detailed breakdown shows no growth in the professional-management job sector but movement from negative to 1.2 percent positive in the technical-IT sector. Since the rate of increase over the past six months remains less than the inflation rate over the same period, the company feels it is unclear whether or not the trend implies upward pressure on labor rates. The firm also points out that U.S. Bureau of Labor Statistics (BLS) data underscore the importance of temporary labor, as new hires increasingly are being made through temporary employment agencies. In fact, although temporary agency employees constitute less than two percent of the total U.S. non-farm labor force, 15 percent of all new jobs created in the U.S. in 2012 have been through temp agency placements.
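For reference, the PUE metric cited in the Uptime survey above is defined by The Green Grid as total facility power divided by the power delivered to IT equipment, so a value of 1.0 would mean every watt drawn reaches the servers. A minimal sketch (the function name and sample figures are illustrative, not survey data):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power draw
    divided by the power delivered to IT equipment."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# A facility drawing 1,890 kW to support a 1,000 kW IT load scores 1.89,
# the top of the survey's self-reported global average range.
ratio = pue(1890.0, 1000.0)
```

The survey's point about measuring at the server inlet applies here too: the metric is only as good as where and how often the two power figures are sampled.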

RFG POV: Company executives may vocalize their support for sustainability, but most have not established financial incentives designed to drive a transformation of their data centers into best-of-breed "green IT" shops. Executives still fail to recognize that being green is not just good for the environment; it also mobilizes the company to optimize resources and pursue best practices. Businesses continue to waste up to 40 percent of their IT budgets because they fail to connect the dots. Furthermore, the MeriTalk federal study reveals how far behind the private sector the U.S. federal government is. While businesses are utilizing IT as a differentiator to attain their goals, drive revenues, and cut costs, the government perceives IT only as a cost center. Federal executives should modify their business processes, align and link their development projects to their operations, and fund their operations holistically. This will eliminate the sub-optimization and propel the transformation of U.S. government IT more rapidly. With the global and U.S. economies remaining weak over the mid- to long-term, the use of a contingent workforce will expand. Enterprises do not like to make long-term investments in personnel when the business and regulatory climate is not friendly to growth. Hence, the contingent workforce – domestic or overseas – will pick up the slack. IT executives should utilize a balanced approach with a broad range of workforce strategies to achieve agility and flexibility while ensuring business continuity, corporate knowledge, and management and technical control are properly addressed.

Surprises at IBM, Infosys and Microsoft

Aug 7, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

IBM Corp. announced second quarter financial results with lower revenues but improved profits while Infosys Ltd. had weaker than expected first quarter 2013 results. In other financial news, Microsoft Corp. reported mixed fourth quarter and fiscal year 2012 results.

Focal Points:

  • IBM reported second quarter revenues of $25.8 billion, a drop of three percent year-over-year. However, net income on a GAAP basis increased by six percent to $3.9 billion from the previous year's quarter. Asia Pacific and the BRIC countries showed single-digit growth while all other geographies declined. Europe/MidEast/Africa delivered the worst performance with a nine percent decline, although on a constant currency basis the revenues were flat. Similarly, the services sectors (GBS and GTS) were off four and two percent respectively from the same quarter last year. Global Financing and Software were flat while the Systems and Technology Group (STG) experienced a nine percent fall in revenues year-over-year. IBM's Smarter Planet initiative saw its revenues increase more than 20 percent in the quarter while its Power Systems gained market share through competitive displacements. Year to date, IBM states its growth market revenues were up nine percent year-over-year while business analytics revenues grew 13 percent and cloud revenues doubled year-over-year. The company also saw its gross profit margins climb by 1.5 percentage points.
  • Infosys had less-than-stellar results for its first quarter 2013. While revenues grew 4.8 percent to $1.75 billion and IFRS net income climbed eight percent to $416 million year-over-year, on a sequential quarter basis the company saw revenues drop by one percent and profits slide by more than 10 percent. Repeat business accounted for 99.1 percent of sales; the top 10 clients were responsible for 25.3 percent of the revenues. Utilization levels excluding trainees have been slowly dropping, from 77.8 percent over the 12 months ending June 2011 to 71.6 percent in the current quarter. The onsite/offshore split shifted slightly from 25.5/74.5 percent in the year-ago quarter to 24.7/75.3 percent in the first quarter of 2013. Attrition improved slightly to 14.9 percent. All geographic sector revenues declined with the exception of North America, which grew by 1.6 percent sequentially. As expected, Europe was the worst performer with a decline of 8.1 percent sequentially.
  • Microsoft announced fourth quarter 2012 revenues of $18.1 billion, an increase of four percent from the previous year's quarter. On a GAAP basis the company reported its first quarterly net loss, $492 million, due to writing off $6.2 billion for its 2007 aQuantive acquisition. For the full fiscal year Microsoft reported revenues of $73.7 billion, a five percent jump from its fiscal year 2011 revenues. On a GAAP basis net income was $17 billion, a 26 percent decrease from the prior year. The Server and Tools business revenue grew 13 percent for the fourth quarter and 12 percent for the full year, while the Business Division revenue increased seven percent for the fourth quarter and full year, reflecting continued momentum in Office 2010 sales. The Windows and Windows Live Division revenue declined 13 percent for the fourth quarter and three percent for the full year, whereas the Online Services Division revenue advanced eight percent for the fourth quarter and 10 percent for the full year, reflecting growth in its search business. The Entertainment and Devices Division revenue jumped 20 percent for the fourth quarter and eight percent for the full year, mostly due to the addition of Skype.
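The constant currency comparisons that recur in these results (e.g., IBM's EMEA figure above) simply strip exchange-rate movement out of the growth rate, so a steep reported decline can coexist with flat underlying sales. A hedged sketch with illustrative figures, not any vendor's actuals:

```python
def growth_pct(current: float, prior: float) -> float:
    """Simple year-over-year growth rate, in percent."""
    return (current / prior - 1.0) * 100.0

# Illustrative: EUR 100M of local revenue in both quarters, but the
# euro falls from 1.40 to 1.27 USD/EUR between them.
reported = growth_pct(100 * 1.27, 100 * 1.40)  # roughly -9% as reported in USD
constant = growth_pct(100, 100)                # 0% in constant currency
```

In other words, a nine percent reported decline paired with flat constant-currency revenue tells you the weakness was in the exchange rate, not in demand.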

RFG POV: Most vendors note the difficulties that lie ahead over the next few quarters due to Euro zone problems, a slowdown in China, and a weak economy in North America, as well as fears over oil prices and the Middle East crisis. How well enterprises will do depends upon the sector(s) they are in, the geographies they serve, and the agility and innovation of the firm. IBM, which has huge backlogs, is able to plow forward through the good times and bad. Its STG products continue to fluctuate depending upon the age of the systems, but overall IBM is on track to deliver against its five-year growth plan. On the other hand, Infosys is failing to keep up with some of its outsourcing competitors and may be running into a growth-management problem. The drop in its utilization levels is a further indication that backlog and revenue management is not mapping to usage at the desired mix. Thus, while this is an overall corporate issue, the company still maintains tremendous customer loyalty and a high repeat-business rate. Because the company is seeing weakness in most of its markets, IT executives should be more aggressive in negotiating blended rates and the overall deal. Microsoft marches on and continues to grow its enterprise businesses. The Windows business is impacted by the decline in PC sales (and growth in the Apple Inc. iPad market). There is a perception that enterprise business will improve when Windows 8 comes out later this year, but that is unlikely. While slightly more than 50 percent of enterprises are on Windows 7, the other half are on Vista and XP. It takes years before companies migrate to new releases, and because Windows 8 is designed more for the personal world and tablets than for the business world, the move most likely will not happen for most companies. RFG expects the majority of firms will wait for Windows 9.
However, RFG does expect Skype and Yammer to be leverageable in the enterprise space but it is unclear whether or not Microsoft can leverage these cloud services to get organizations to move to its other cloud offerings. IT executives will continue to have more and more business platform alternatives available to them and therefore should not feel locked into Microsoft. Given that, IT executives should carefully analyze their business software requirements and negotiate for the best deals. Since Microsoft pricing can be complex and expensive, IT executives should consider using outside assistance (from RFG or elsewhere) to simplify the experience and obtain the best contractual prices and terms.

EMC, Intel, SAP, and VMware on the Move

Aug 3, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

EMC Corp. announced preliminary second quarter financial results along with executive changes at EMC and its subsidiary, VMware Inc. In other financial news, Intel Corp. reported its second quarter results, which saw its earnings drop while SAP AG reported strong second quarter financials.

Focal Points:

  • EMC and VMware made surprise announcements when word leaked out that VMware CEO Paul Maritz was being replaced. Joe Tucci, EMC Chairman and CEO, stated that the IT industry is in the midst of an extraordinary transformation unlike anything seen before – a major shift to cloud computing, Big Data applications, and delivering IT-as-a-Service. To capitalize on this shift, Pat Gelsinger, EMC President and COO of Information Infrastructure Products, has been appointed CEO of VMware, while Paul Maritz is joining EMC as Chief Strategist, reporting to Tucci. Both changes are effective September 1st. David Goulden, Executive Vice President and CFO, will assume the additional roles of President and COO of EMC effective immediately. On the financial front, EMC announced preliminary second-quarter 2012 results with record second quarter consolidated revenues of approximately $5.31 billion, up 10 percent year-over-year. The company also had record second quarter non-GAAP earnings per weighted average diluted share (EPS) of $0.39, up 11 percent over the previous year's quarter. Meanwhile, VMware is projecting second quarter revenues of $1.123 billion, an increase of 22 percent from second quarter 2011.
  • Intel reported second quarter revenues of $13.5 billion, up 3.6 percent year-over-year. Net income was $2.83 billion, down 4.3 percent from $2.95 billion a year earlier, as operating expenses rose faster than revenues. Consumer demand in North America and Western Europe is not recovering as fast as Intel expected, according to CEO Paul Otellini. He also stated that growth in emerging markets such as China and Brazil is slowing down. For the full fiscal year, Intel now expects sales to grow three to five percent from last year, rather than the "high single digit" level the company predicted earlier. He also noted that Ultrabooks are still relatively expensive but prices are expected to drop to $699 this fall.
  • In the quarter just ended, SAP announced it had total revenues of €3.9 billion, an increase of 18 percent over the €3.3 billion booked in the second quarter of 2011. The company booked €1.06 billion in new license sales, up 26 percent compared to the year-ago period when it reported €0.84 billion. Software and support revenues for the quarter came to €3.12 billion, a jump of 21 percent. On an IFRS accounting basis, operating profits rose by only seven percent in the quarter, to €920 million. The company boasted of posting its tenth consecutive quarter of double-digit growth in non-IFRS software and software-related service revenues. The company also claimed it had stellar results with SAP HANA, mobile, and cloud computing in all regions.

RFG POV: The management teams at EMC and VMware continue to expand and execute their visions of the future of IT and deliver top-tier products and services in a timely manner. The removal of Paul Maritz at VMware was first thought to be a rare management error, but once the total set of announcements was made, the logic was compelling. With Pat Gelsinger at the helm of VMware and Maritz as EMC's chief strategist, the companies should be able to keep up the double-digit growth momentum that the firms have delivered over the past few years. IT executives with strategic relationships with either or both companies should get a strategic update by yearend so that they can understand the new vision and determine how it fits with the corporation's strategy and target architecture. Given the slowing demand and the decline of PC sales, it is not surprising that Intel did not perform as well as it has in the past. Until the company gets its Ultrabook and Atom product lines selling well, growth will be diminished, or revenues may even shrink. Apple Inc. is a formidable competitor and its products are expected to take market share from Intel for the next few years. The company has made some very significant advances in driving data center efficiency internally, and if it can get its customers to follow suit, it might be able to have data center product and services sales make up for the slack in PC revenues. IT executives should add Intel to the list of IT firms to talk to about slashing the cost of data center operations. SAP continues to plow on and remains a thorn in Oracle Corp.'s side. It has been able to revise its business model so that it can capture new revenue streams without doing much damage to its traditional revenue routes. The company is well poised to address the new hot areas of cloud, mobile, and high-performance in-memory computing for business intelligence and analytics.
IT executives should keep abreast of Oracle's and SAP's strategies and visions and, where appropriate, incorporate relevant components – and possibly products – into their future visions and target architectures.