
The Little Mainframe That Could

Aug 23, 2013   //   by admin   //   Blog

RFG Perspective: The just-launched IBM Corp. zEnterprise BC12 servers are very competitive mainframes that should be attractive to organizations with revenues in excess of, or expanding to, $100 million. The entry-level mainframes that replace last generation's z114 series can consolidate up to 40 virtual servers per core, or up to 520 in a single footprint, for as low as $1.00 per day per virtual server. RFG projects that the zBC12 ecosystem could be up to 50 percent less expensive than comparable all-x86 distributed environments. IT executives running Java or Linux applications, or eager to eliminate duplicative shared-nothing databases, should evaluate the zBC12 ecosystem to see if the platform can best meet business and technology requirements.

Contrary to public opinion (and that of competitive hardware vendors), the mainframe is not dead, nor is it dying. According to IBM, the zEnterprise mainframe servers have now posted growth for the tenth straight year. The installed MIPS (millions of instructions per second) base jumped 23 percent year-over-year and revenues jumped 10 percent. There have been 210 new accounts since the zEnterprise launch, as well as 195 zBX units shipped. More than 25 percent of all MIPS are IFLs, specialty engines that run Linux only, and three-fourths of the top 100 zEnterprise customers have IFLs installed. The ISV base continues to grow, with more than 7,400 applications available, and more than 1,000 schools in 67 countries participate in the IBM Academic Initiative for System z. This is not a dying platform but one gaining ground in an overall stagnant server market. The new zBC12 will enable the mainframe platform to grow further and expand into lower-end markets.

zBC12 Basics

The zBC12 is faster than the z114, using a 4.2 GHz 64-bit processor, and has twice the maximum memory of the z114 at 498 GB. The zBC12 can be leased starting at $1,965 a month, depending upon the enterprise's creditworthiness, or purchased starting at $75,000. RFG has done multiple TCO studies on zEnterprise Enterprise Class server ecosystems and estimates the zBC12 ecosystem could be 50 percent less expensive than x86 distributed environments of equivalent computing power.
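
As a rough sanity check on the hardware economics (our arithmetic, assuming a 30-day month; the $1.00-per-day-per-virtual-server figure cited above presumably also covers software, support, and administration beyond the bare hardware lease):

```python
# Back-of-the-envelope check of the per-virtual-server cost figures.
# Lease price and maximum virtual servers come from the article; the
# 30-day month is a simplifying assumption of ours.
MONTHLY_LEASE_USD = 1_965     # entry lease price for a zBC12 (article)
MAX_VIRTUAL_SERVERS = 520     # maximum virtual servers in one footprint (article)
DAYS_PER_MONTH = 30           # simplifying assumption

hardware_cost_per_vm_per_day = MONTHLY_LEASE_USD / (MAX_VIRTUAL_SERVERS * DAYS_PER_MONTH)
print(f"Hardware lease cost per VM per day: ${hardware_cost_per_vm_per_day:.2f}")
```

At full consolidation the bare lease works out to roughly $0.13 per virtual server per day, which leaves ample headroom under the $1.00-per-day ecosystem figure for software and operations costs.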

On the analytics side, the zBC12 offers the IBM DB2 Analytics Accelerator, which IBM says delivers significantly faster performance for workloads such as Cognos and SPSS analytics. The zBC12 also attaches to Netezza and PureData for Analytics appliances for integrated, real-time operational analytics.

Cloud, Linux and Other Plays

On the cloud front, IBM is a key contributor to OpenStack, an open and scalable operating system for private and public clouds. OpenStack was initially developed by RackSpace Holdings and currently has a community of more than 190 companies supporting it including Dell Inc., Hewlett-Packard Co. (HP), IBM, and Red Hat Inc. IBM has also added its z/VM Hypervisor and z/VM Operating System APIs for use with OpenStack. By using this framework, public cloud service providers and organizations building out their own private clouds can benefit from zEnterprise advantages such as availability, reliability, scalability, security and costs.

As stated above, Linux now accounts for more than 25 percent of all System z workloads, which can run on zEnterprise systems with IFLs or on a Linux-only system. The standalone Enterprise Linux Server (ELS) uses the z/VM virtualization hypervisor and has more than 3,000 tested Linux applications available. IBM provides a number of specially-priced zEnterprise Solution Editions, including the Cloud-Ready for Linux on System z, which turns the mainframe into an Infrastructure-as-a-Service (IaaS) platform. Additionally, the zBC12 comes with EAL5+ security certification, one of the highest levels of protection available on a commercial server.

The zBC12 is an ideal candidate to serve as the primary data-serving platform for mid-market companies. RFG believes organizations can save up to 50 percent of their IT ecosystem costs if the mainframe handles all the data serving, since it provides a shared-everything data storage environment. Distributed computing platforms are designed for shared-nothing data storage, which means a duplicate database must be created for each application running in parallel. Thus, if a dozen applications use the customer database, there are 12 copies of the customer file in use simultaneously, and these must be kept in sync as best as possible. The costs for all the additional storage and administration can make the distributed solution more costly than the zBC12 for companies with revenues in excess of $100 million. IT executives can architect the systems as ELS only, or with a mainframe central processor, IFLs, and a zBX for Microsoft Corp. Windows applications, depending on configuration needs.
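
The duplication penalty described above is easy to quantify. A minimal sketch, with a hypothetical database size and a hypothetical per-terabyte cost (only the 12-application count comes from the article):

```python
# Illustrative shared-nothing vs. shared-everything storage cost.
# APPS is from the article; the other two inputs are hypothetical.
APPS = 12                  # applications using the customer database (article)
DB_SIZE_TB = 2.0           # hypothetical size of the customer database
COST_PER_TB_USD = 1_000    # hypothetical annual cost per TB (storage + admin)

shared_nothing_cost = APPS * DB_SIZE_TB * COST_PER_TB_USD  # one full copy per application
shared_everything_cost = 1 * DB_SIZE_TB * COST_PER_TB_USD  # a single shared copy

print(shared_nothing_cost, shared_everything_cost)  # 24000.0 2000.0
```

Whatever the actual database size and unit cost, the shared-nothing bill scales linearly with the number of applications, and that is before counting the synchronization tooling and the risk of the copies drifting out of sync.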

Summary

The mainframe myths have misled business and IT executives into believing mainframes are expensive and outdated, and led to higher data center costs and sub-optimization for mid-market and larger companies. With the new zEnterprise BC12 IBM has an effective server platform that can counter the myths and provide IT executives with a solution that will help companies contain costs, become more competitive, and assist with a transformation to a consumption-based usage model.

RFG POV: Each server platform is architected to execute certain types of application workloads well. The BC12 is an excellent server solution for applications requiring high availability, reliability, resiliency, scalability, and security. The mainframe handles mixed workloads well, is best of breed at data serving, and can excel in cross-platform management and performance using its IFLs and zBX processors. IT executives should consider the BC12 when evaluating platform choices for analytics, data serving, packaged enterprise applications such as CRM and ERP systems, and Web serving environments.

CEOs, CIOs Not in Sync

May 7, 2013   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

According to a post on the Harvard Business Review blog, CEOs and CIOs are not in sync when it comes to the new challenges and issues CEOs are facing. The study findings point to CEOs' perception that their CIOs do not understand where the business needs to go and do not have a strategy to address business challenges or opportunities.

Focal Points:

  • Key findings from the research: almost half of the CEOs feel IT should be a commodity service purchased as needed, and almost half rate their CIOs negatively in terms of understanding the business and how to apply IT to it in new ways. Only 25 percent of executives felt their CIOs were performing above their peers. Moreover, 57 percent of CEOs expect their IT function to change significantly over the next three years, while 12 percent predict a "complete overhaul" of IT.
  • The above findings are attributed to four trends that are changing the CIO's role. First, many CEOs are moving away from ownership and return on assets or investment (ROA or ROI) analyses and are thinking about renting IT equipment for items not directly tied to value creation. Second, the shift from efficiency and scalability to agility and efficacy translates into a movement away from transactional systems toward new systems that provide agility, collaboration, and transparency. Third, the boundaries between contractors, channels, customers, partners, staff, suppliers, and even competitors are diminishing and in some cases disappearing, creating a whole new user community for enterprise IT systems. Finally, all of this changes how companies manage and organize work and resources, which suggests the need for more unique, niche applications with integration of information and systems across organizational and agent boundaries.
  • In summary, the post states that there are new systems, business and delivery models, types of information, technologies, and whole new roles for IT in the enterprise's ecosystem. These new business insights, tied to the emergence of new technologies, are creating an opportunity for IT to lead business transformation efforts: creating new business models, initiating new business processes, and making the enterprise agile in this challenging economic environment, the report concludes.

RFG POV: Business executives who think IT should be a commodity service purchased as needed do not perceive IT as a business differentiator. That is problematic for their businesses and for the IT executives who work for them. IT executives in those organizations need to enlighten the business executives on the flaws in their thinking. As to the four trends identified, RFG and other studies have also found these to be true, which is why RFG has been pushing for IT executives to transform their operations. Business and IT always exist in a state of change, including disruptive innovation, and the next decade will be no different. IT executives must work with business executives to help transform the business and expose them to new process possibilities that are available due to the emerging technologies. IT executives must believe (and pursue) their role is to sell the business – e.g., sell cereal if they work for Kellogg's – and not be a "tech head" if they want a seat at the business table.

Disruptive Changes

Apr 25, 2013   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

Amazon Inc. and Microsoft Corp. lowered their pricing for certain cloud offerings in attempts to maintain leadership and/or preserve customers. Similarly, Hewlett-Packard Co. (HP) launched its next-generation Moonshot hyperscale servers. Meanwhile, IDG Connect, the demand generation division of International Data Group (IDG), released survey findings showing there may be a skills shortage when it comes to the soft skills required for communicating beyond IT's walls.

Focal Points:

  • Earlier this month Amazon reduced the prices it charges for its Windows on-demand servers by up to 26 percent, bringing its pricing within pennies of Microsoft's Windows Azure cloud fees. The price reductions apply across Amazon's standard (m1), second-generation standard (m3), high-memory (m2), and high-CPU (c1) instance families. CEO Jeff Bezos stated in the Amazon annual report that the strategy of cutting prices before the company needs to, and developing technologies before there is a financially motivating factor, is what protects the company from unexpected market shifts. Microsoft has responded by aggressively cutting its prices by 21 to 33 percent for hosting and processing customer online data. To qualify for the cuts, customers must make monthly commitments to Azure for either six or 12 months. Microsoft is also making its Windows Azure Virtual Network technology (codenamed "Brooklyn") generally available effective April 16. Windows Azure Virtual Network is designed to allow companies to extend their networks by enabling secure site-to-site VPN connectivity between the enterprise and the Windows Azure cloud.
  • HP launched its initial Moonshot servers, which use Intel Corp. Atom low-cost, low-energy microprocessors. This next generation of servers is the first wave of hyperscale software-defined server computing models to be offered by HP. These particular servers are designed to be used in dedicated hosting and Web front-end environments. The company stated that two more "leaps" will be out this year, targeted at other specific workloads. HP claims its architecture can scale 10:1 over existing offerings while providing eight times the efficiency. The Moonshot 1500 uses Intel Atom S1200 microprocessors and a 4.3U (7.5 inch tall) chassis that hosts 45 "Gemini" server cartridges; up to 1,800 quad-core servers will fit into a 42U rack. Other x86 chips from Advanced Micro Devices Inc. (AMD), plus ARM processors from Calxeda Inc., Texas Instruments Inc., and Applied Micro Circuits Corp. (AMCC), are also expected to be available in the "Gemini" cartridge form factor. The first Moonshot servers support Linux but are compatible with Windows, VMware, and traditional enterprise applications. Pricing starts at $61,875 for the enclosure, 45 HP ProLiant Moonshot servers, and an integrated switch, according to HP officials. (For more on this topic see this week's Research Note "HP's Moonshot – the Launch.")
  • According to a new study by IDG Connect, 83 percent of European respondents believe there is no IT skills shortage, while 93 percent of U.S. respondents definitely feel there is a gap between the technical skills IT staff possess and the skills needed by the respondents' companies. IDG attributes this glaring difference to what are loosely defined as "hard" (true technical skills and competencies) and "soft" (business, behavioral, communications, and interpersonal) skills. The European respondents focused on hard skills while their American counterparts were more concerned about the soft skills, which will become more prevalent within IT as it goes through a transformation to support the next-generation data center environments and greater integration with the business. As IT becomes more integrated with the business and operational skill requirements shift, IDG concludes "companies can only be as good as the individuals that work within them. People … are capable of creative leaps of thinking and greatness that surpass all machines. This means that any discussion on IT skills, and any decision on the qualities required for future progression are fundamental to innovation. This is especially true in IT, where the role of the CIO is rapidly expanding within the enterprise and the department as a whole is becoming increasingly important to the entire business. It seems IT is forever teetering on the brink of bigger and better things - and it is up to the people within it to maximize this potential."
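
The Moonshot rack-density claim can be sanity-checked with simple arithmetic. The cartridges-per-chassis count comes from the announcement; the four-servers-per-cartridge and ten-chassis-per-rack figures are our assumptions, chosen as the whole-number combination that reproduces the 1,800-server claim (initial Atom cartridges hold fewer servers, so the ceiling presumably assumes denser future cartridges):

```python
# Sanity check of the Moonshot density claim (our arithmetic, not HP's
# published breakdown). Cartridges per chassis is from the article; the
# other two inputs are assumptions that reproduce the 1,800-server figure.
CARTRIDGES_PER_CHASSIS = 45   # "Gemini" cartridges per 4.3U chassis (article)
SERVERS_PER_CARTRIDGE = 4     # assumption: multi-server cartridges
CHASSIS_PER_RACK = 10         # assumption: roughly ten chassis per rack

servers_per_rack = CHASSIS_PER_RACK * CARTRIDGES_PER_CHASSIS * SERVERS_PER_CARTRIDGE
print(servers_per_rack)  # 1800
```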

RFG POV: IT always exists in a state of disruptive innovation and the next decade will be no different. Whether the shift is to the cloud, hyperscale computing, the software-defined data center, or something else, IT must be prepared to deal with the business and pricing models that arise. Jeff Bezos is correct in not resting on his laurels and constantly pushing the envelope on pricing and services. IT executives need to do the same and deliver comparable services at prices that appeal to the business while covering costs. This requires keeping current on technology and having the staff on board that can solve the business problems and deliver innovative solutions that enable the organization to remain competitive. RFG expects the staffing dilemma to emerge over the next few years as data centers transform to meet the next generation of business and IT needs. At that time most IT staff will not need the skills they use today but rather skills that allow them to work with the business, providers, and others to deliver solutions built on logical platforms (rather than physical infrastructure). Only a few staff will need to know the nuts and bolts of the hardware and physical layouts. This paradigm shift in staff capabilities and skills must be anticipated if IT executives do not want to be caught behind the curve and left to struggle with catching up with demand. IT executives should be developing their next-generation IT development and operations strategies, determining the skills needed and the gap, and then begin a career planning and weaning-out process so that IT will be able to provide the leadership and skills needed to support the business over the next decade of disruptive innovation. Additionally, IT executives should determine if Moonshot servers are applicable in their current or target environments, and if so, conduct a pilot when the time is right.

Service Delivery to Business Enablement: Data Center Edition

Apr 9, 2013   //   by admin   //   Blog

Lead Analyst: Adam Braunstein

I have never been a fan of alarmist claims. Never have I witnessed the sky falling or the oceans abruptly swallowing masses of land. Nonetheless, we have all seen the air become unsafe to breathe in many parts of the world, and rising water levels are certainly cause for concern. Such changes do not take place overnight, and recognizing them often requires a distanced perspective. Besides, being paranoid does not mean one is wrong.

Such is the case with the shifts occurring in the data center. Business needs and disruptive technologies are more complex, frequent, and enduring despite their seemingly iterative nature. The gap between the deceptively calm exterior and the true nature of internal data center changes threatens to leave IT executives unable to adapt to the seismic shifts taking place beneath the surface. Decisions made to address long-term needs are typically made using short-term metrics that mask the underlying movements and the enterprise's need to deal with these changes strategically. The failure to look at these issues as a whole will have a negative cascading effect on enterprise readiness in the future and is akin to France's Maginot Line of defense against Germany in World War II. While the fortifications prevented a direct attack, the tactic ignored other strategic threats, including an attack through Belgium.

Three-Legged Stool:  Business, Technology, and Operations

The line between business and technology has blurred such that there is very little difference between the two. The old approach of using technology as a business enabler is no longer valid, as IT no longer simply delivers the required business services. Business needs are now so dependent on technology that planning and execution must share the same game plan, analytic tools, and measurements. Changes in one directly impact the other, and continuous updates to strategic goals and tactical executions must be carefully weighed as the two move forward together. Business enablement is the new name of the game.

With business and technology successes and failures so closely fused together, it should be abundantly clear why shared goals and execution strategies are required. The new goalposts for efficient, flexible operations are defined in terms of software-defined data centers (SDDCs). Where disruptive technologies including automation, consolidation, orchestration and virtualization were previously the desired end state, SDDCs up the ante by providing logical views of platforms and infrastructures such that services can be spooled up, down, and changed dynamically without the limitations of physical constraints. While technology comprises the underpinnings here, the enablement of dynamic and changing business goals is the required outcome.

Operations practices and employee roles and skills will thus need to rapidly adapt. Metrics like data density, workload types and utilization will remain as baseline indicators but only as a means to more important measurements of agility, readiness, productivity, opportunity and revenue capture. Old technologies will need to be replaced to empower the necessary change, and those new technologies will need to be turned over at more rapid rates to continue to meet the heightened business pace as well as limited budgets. Budgeting and financial models will also need to follow suit.

The Aligned Business/IT Model of the Future: Asking the Right Questions

The fused business/IT future will need to be based around a holistic, evolving set of metrics that incorporate changing business dynamics, technology trends, and performance requirements. Hardware, software, storage, supporting infrastructure, processes, and people must all be evaluated to deliver the required views within and across data centers and into clouds. Moreover, IT executives should incorporate best-of-breed information from enterprise data centers in both similar and competing industries.

The set of delivered dashboards should provide a macro view of data center operations with both business and IT outlooks and trending. Analysis should provide the following:

  • Benchmark current data center performance with comparative data;
  • Demonstrate opportunities for productivity and cost cutting improvements;
  • Provide insight as to the best and most cost effective ways to align the data center to be less complex, more scalable, and able to meet future business and technology opportunities;
  • Offer facilities to compare different scenarios as customers determine which opportunities best meet their needs.

Even though the reality of SDDCs is years away, IT executives must begin the journey now. There are a number of intermediary milestones that must be achieved first, and delays in reaching them will negatively impact the business. Use of data center analytical tools as described above will be needed to help chart the course and monitor progress. (The GreenWay Collaborative develops and provides tools of this nature. RFG initiated and still contributes to this effort.)

RFG POV: IT executives require a three-to-five year outlook that balances technology trends, operational best practices, and business goals. Immediate and long-range needs must be plotted, tracked, and continuously measured so that neither is sacrificed for the other. While many of these truths are evergreen, it is essential to recognize that the majority of enterprise tools and practices inadequately capture and harmonize the contributing factors. Most enterprise dashboard views evaluate data center performance at a tactical, operational level and identify opportunities for immediate performance improvements. Strategic enterprise dashboard tools tend to build on the data gathered at the tactical level and fail to incorporate evolving strategic business and technology needs. IT executives should add strategic data center optimization planning tools that address the evolving business and technology needs to the mix, so that IT can provide the optimum set of services to the business at each milestone.

Tectonic Shifts

Mar 11, 2013   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

Bellwether Cisco Systems Inc.'s quarterly results beat expectations while CEO John Chambers opined global business was looking cautiously optimistic. In other system news, IBM Corp. made a series of hardware announcements, including new entry level Power Systems servers that offer better total cost of acquisition (TCA) and total cost of ownership (TCO) than comparable competitive Intel Corp. x86-based servers. Meanwhile, the new 2013 Dice Holdings Inc. Tech Salary Survey finds technology professionals enjoyed the biggest pay raise in a decade last year.

Focal Points:

  • Cisco reported its fiscal second quarter revenues rose five percent to $12.1 billion versus the previous year's quarter. Net income on a GAAP basis increased 6.2 percent to $2.7 billion. The company's data center business grew 65 percent compared with the previous year, while its wireless business and service provider video offerings gained 27 and 20 percent, respectively. However, Cisco's core router and switching business did not fare as well, with the router business shrinking six percent and switching revenues climbing only three percent. EMEA revenues shrank six percent year-over-year while the Americas and Asia Pacific climbed two and three percent, respectively. CEO Chambers warned the overall picture was mixed, with parts of Europe remaining very challenging. However, he stated there are early signs of stabilization in government spending and in probably a little over two-thirds of Europe. While there is cautious optimism, there is little tangible evidence that Cisco has turned the corner.
  • IBM's Systems and Technology Group launched a number of systems and solutions across its product lines, including new PureSystems solutions, on February 5. The announcement included more affordable, more powerful Power Systems servers designed to aggressively take on Dell Inc., Hewlett-Packard Co. (HP), and Oracle Corp. The upgraded servers are based upon the POWER7+ microprocessors and have a starting price as low as $5,947 for the Power Express 710. IBM stated the 710 and 730 are competitively priced against HP's Integrity servers and Oracle's Sparc servers, while the PowerLinux 7R1 and 7R2 servers are very aggressively priced to garner market share from x86 servers.
  • Dice, a job search site for engineering and technology professionals, recently released its 2013 Tech Salary Survey. Among its key findings: technology salaries saw the biggest year-over-year jump in over a decade, with the average salary increasing 5.3 percent. Additionally, 64 percent of the 15,049 professionals surveyed in late 2012 are confident they can find favorable new positions, if desired. Scot Melland, CEO of Dice Holdings, stated companies will now have to either pay to recruit or pay to retain, and today companies are doing both for IT professionals. The top reasons for changing jobs were greater compensation (67 percent), better working conditions (47 percent), and more responsibility (36 percent). David Foote, chief analyst at Foote Partners LLC, finds IT jobs have been on a "strong and sustained growth run" since February 2012. By Foote Partners' calculations, January IT employment showed its largest monthly increase in five years. Foote believes the momentum is so powerful that it is likely to continue barring a severe and deep falloff in the general economy or a catastrophic event. Based on Bureau of Labor Statistics (BLS) data, Foote estimates a gain of 22,100 jobs in January across four IT-related job sectors, whereas the average monthly employment gains from October to December 2012 were 9,700.

RFG POV: While the global economic outlook appears a little brighter than last year, indications are it may not last. Executives will have to carefully manage spending; however, with the need to increase salaries to retain talent this year, extra caution must be undertaken in other spending areas. IT executives should consider leasing IT equipment, software, and services for all new acquisitions. This will help to preserve capital while allowing IT to move forward aggressively on innovation, enhancement, and transformation projects. RFG studies find 36- to 40-month hardware and software leases are optimum and can be less expensive than purchasing or financing, even over a five-year period. Moreover, IBM's new entry-level Power Systems servers are another game-changer. An RFG study found that the three-year TCA for similarly configured x86 systems handling the same workload as the POWER7+ systems can be up to 75 percent more expensive, while the TCO of the x86 servers can be up to 65 percent more expensive. Furthermore, the cost advantage of the Power Systems could be even greater if one included the cost of development systems, application software, and downtime impacts. IT executives should reevaluate their standards for platform selection based upon cost, performance, service levels, and workload, and not automatically assume that x86 servers are the IT processing answer to all business needs.
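
To make the study's percentages concrete, consider a hypothetical baseline (the dollar figure below is ours, purely illustrative; the premiums are the article's):

```python
# Illustrative reading of the RFG study figures. The baseline dollar
# amount is hypothetical; the percentage premiums come from the article.
POWER_TCA_USD = 100_000   # hypothetical three-year TCA for a POWER7+ configuration
X86_TCA_PREMIUM = 0.75    # article: x86 TCA up to 75 percent more expensive
X86_TCO_PREMIUM = 0.65    # article: x86 TCO up to 65 percent more expensive

x86_tca_usd = POWER_TCA_USD * (1 + X86_TCA_PREMIUM)
print(x86_tca_usd)  # 175000.0
```

In other words, at the upper bound of the study, a $100,000 POWER7+ acquisition would correspond to roughly $175,000 for an equivalent x86 deployment over three years, before development systems, application software, and downtime are counted.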

Whither Dell?

Feb 12, 2013   //   by admin   //   Blog

Lead Analyst: Cal Braunstein

CEO Michael Dell is coordinating a buyout of Dell Inc. for $24.4 billion in the hope that the company can more effectively go through its transformation if it does not have to report quarterly results to fickle investors. Michael Dell's MSD Capital (his investment firm) has teamed with Silver Lake Partners to take the company private. Microsoft Corp. will assist in the buyout with a $2 billion loan. If the buyout is successful – which it should be at some price – what does it portend for IT executives and commercial accounts?

To understand where Dell needs to go, one needs to first see where it is. Dell started as a low-cost PC company in the consumer market. It gradually switched to a bifurcated model – PC for consumers and PC and servers for the commercial space, primarily the public, small and medium business (SMB), and large enterprise markets. Over the past six years the company acquired 22 companies – 10 in 2012 alone – and expanded into other hardware components, software and services, including cloud services. But the company has lost its momentum. It lost PC market share and sales in 2012 faster than most of its competitors, which is disastrous for a company that derives more than half of its revenues from end-user computing solutions.

Smartphones and tablets have curtailed the growth of the traditional PC market and Dell's commercial business has not made up for the loss in end-user revenues. In fact, in both businesses Dell is considered a low-cost commodity hardware provider and not a market or thought leader. The company has not fully integrated all of its acquisitions and is struggling to reach its strategic goal of becoming a one-stop shop. The buyout gives the company time to re-think and execute a long-term strategy, reorganize and change its culture. As CEO Meg Whitman at Hewlett-Packard Co. (HP) can attest, a turnaround is a multi-year effort and doing so in public when quarterly results can be volatile is not fun. Thus, the desire by Michael Dell to go private.

While there are a number of challenges that Dell must address, there are two that will make or break the success of the new corporate strategy. The vendor must either exit the end-user computing market or once again become a market leader. It is lacking products in the key current and future end-user markets and it cannot regain its position with just PC solutions to hawk. Secondly, Dell has not been able to transition from a culture of transaction selling to one of relationship sales. If the vendor is to become one of the top one-stop providers in the commercial space, it will have to invest in customer relationship management. This is a massive cultural change that goes to the core of the company. HP has struggled with the clash of this cultural divide since it acquired Compaq in 2002. IBM Corp. took more than 10 years to change its culture. The underlying question will be whether or not CEO Dell, by trade a transactional salesman, can lead the culture shift to succeed with its new corporate vision.

In addition to the above challenges, there are a number of other key issues to be resolved. IT executive relationships with Dell depend on how these shake out.

Assets.  Dell will need to decide which assets it has today are worth keeping and which are to be shed. In strong customer relationship management organizations, people are a primary asset. Will Dell address this? Additionally, once it has its strategic vision in place, what additional acquisitions are needed to complete the puzzle? Will the new Dell have the funds to acquire the companies it needs or will the buyout end up choking the firm's ability to compete effectively? Dell recently moved into the equipment leasing space. Will it have the wherewithal to remain?

Business Model. What will Dell's new business model be? It will have to compete with HP, IBM and Oracle Corp. – all of whom are innovators, bring more than commodity products and services to the table, and want to own the complete business relationship with their customers. Each has a different business model. Where will the new Dell position itself?

Business Partners and Channels. Dell will have to re-evaluate how it works with business partners and uses various sales and distribution channels. Dell does have a cloud presence but can it leverage it the way Apple Inc. or Google Inc. do? Can it be a full service provider and still utilize business partners and channels effectively? Without strong business partners and channels Dell will not be able to compete effectively.

Microsoft. Microsoft did not become an owner but a lender to Dell. This will cost Dell more than just money. Will the arrangement restrict the vendor from providing certain products or solutions?

Processes. Dell needs to revamp its development, operations, and sales processes to be fully integrated and customer relationship based. The customer must come first; not the products or services. This will be a long-term change, which may be agonizing at times.

Technology. Today Dell assembles some products and has the intellectual property (IP) for those products and services that the company acquired. Can it leverage the IP and become recognized as an innovator or will the IP assets wither and the talent depart? Over the past year Dell has been bringing on board the resources to take advantage of the assets. Will the new Dell continue down the same path? If Dell stays in the end-user computing space, will it be able to figure out how to do mobility and social (key components to staying competitive)? If not, will it bite the bullet and exit the business?

The company was at one time the leader in the PC arena. Then it became one of the top players. Now it wants to be a leader in the full-service enterprise space where it is not a top player and is losing momentum.

RFG POV: Dell has a long, tough transformation ahead. By going private it will no longer have to worry about the stock price, but it will still have to answer to investors. RFG does not expect the company to pull out of any markets in the near term – although the printing and peripherals business is exposed – but a number of the executives and employees whose visions are out of sync with the new direction will depart. In the full-service enterprise space Dell will have to be more than a low-cost provider. It must become a hardware, software, and services innovator, determine its positioning vis-à-vis competitors, make additional acquisitions to fill in the gaps, and spend time and resources building relationships that may not yield near-term revenues. Whether the stakeholders will allow the company to spend enough money and time to make the conversion is an open question. The fallback position may be to return to being a low-cost or custom commodity provider to the commercial market. Moreover, Dell will have to invest in a new end-user computing model, watch its market share shrivel, or quit the space. One thing is for sure – it cannot be all things to all players and must pick its battles carefully. Dell must articulate its strategy to business partners, customers, and employees over the next three to six months or loyalty may falter. In any event, IT executives should expect Dell to provide support and a smooth transition for businesses that are divested, restructured, or sold. IT executives desirous of using Dell as a strategic provider should continue to work closely with the company, keep abreast of its strategy and roadmaps, and factor that knowledge into the corporate decision-making process. Additionally, IT executives should not be surprised or concerned if the company fails to make the short-list of candidates. There are plenty of options these days.


HP Cloud Services, Cloud Pricing and SLAs

Jan 9, 2013   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Hewlett-Packard Co. (HP) announced that HP Cloud Compute became generally available in Dec. 2012, while HP Cloud Block Storage entered public beta at the same time. HP claims its Cloud Compute has an industry-leading availability service level agreement (SLA) of 99.95 percent. Amazon.com Inc.'s S3 and Microsoft Corp.'s Windows Azure clouds reduced their storage pricing.

Focal Points:

  • HP released word that HP Cloud Compute moved to general availability on Dec. 5, 2012 with a 99.95 percent monthly SLA (a maximum of about 22 minutes of downtime per month). The company extended its 50 percent pricing discount until January. The HP Compute cloud is designed to allow businesses of all sizes to move their production workloads to the cloud. There will be three separate availability zones (AZs) per region. It supports Linux and Windows operating systems and comes in six instance sizes, with prices starting at $0.04/hour. HP currently supports Fedora, Debian, CentOS, and Ubuntu Linux distributions, but not Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES). On the Windows side, HP is live with Windows Server 2008 SP2 and R2, while Windows Server 2012 support is in the works. There are sites today on the East and West coasts of the U.S., with a European facility becoming operational in 2013. Interestingly, HP built its cloud on ProLiant servers running OpenStack rather than its own CloudSystem servers. Meanwhile, HP Cloud Block Storage moved to public beta on Dec. 5, 2012; customers will not be charged until January, at which time pricing will be discounted by 50 percent. Users can create custom storage volumes from 1 GB to 2 TB. HP claims high availability for this service as well, with each storage volume automatically replicated within the same availability zone.
  • Amazon is dropping its S3 storage pricing by approximately 25 percent. The first TB/month goes from $0.125 per GB/month to $0.095 per GB/month, a 24 percent reduction. The next 49 TB falls from $0.110 to $0.080 per GB/month, while the next 450 TB drops from $0.095 to $0.070. This brings Amazon's pricing in line with Google Inc.'s storage pricing. According to an Amazon executive, S3 stores well over a trillion objects and serves 800,000 requests a second. Prices have been cut 23 times since the service launched in 2006.
  • In reaction to Amazon's actions Microsoft's Windows Azure storage pricing has again been reduced by up to 28 percent to remain competitive. In March 2012 Azure lowered its storage pricing by 12 percent. Geo-redundant storage has more than 400 miles of separation between replicas and is the default storage mode.

Cloud storage pricing, per GB per month:

 Google tier   | Google Storage | Amazon/Azure tier | Amazon S3 | Azure (geo-redundant) | Azure (local-redundant)
 First TB      | $0.095         | First TB          | $0.095    | $0.095                | $0.070
 Next 9 TB     | $0.085         | Next 49 TB        | $0.080    | $0.080                | $0.065
 Next 90 TB    | $0.075         |                   |           |                       |
 Next 400 TB   | $0.070         |                   |           |                       |

Source: The Register
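The tiered prices above are marginal: each rate applies only to the gigabytes that fall inside its band. A minimal Python sketch of the calculation, using Amazon's post-cut S3 tiers from the table (the function and tier names are illustrative, not an AWS API):

```python
# Tiered (marginal) pricing: each rate bills only the GB inside its band.
# Tiers are Amazon's post-cut S3 prices per GB/month (Dec. 2012 figures).
S3_TIERS = [
    (1 * 1024, 0.095),    # first 1 TB
    (49 * 1024, 0.080),   # next 49 TB
    (450 * 1024, 0.070),  # next 450 TB
]

def monthly_storage_cost(total_gb, tiers=S3_TIERS):
    """Return the monthly cost in dollars for total_gb of stored data."""
    cost, remaining = 0.0, total_gb
    for band_gb, price_per_gb in tiers:
        in_band = min(remaining, band_gb)  # GB billed at this tier's rate
        cost += in_band * price_per_gb
        remaining -= in_band
        if remaining <= 0:
            break
    return cost
```

At these rates, 2 TB costs about $179 a month rather than 2,048 GB x $0.095 (about $195), because only the first terabyte is billed at the top rate.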

RFG POV: HP's Cloud Compute offering for production systems is most notable for its 99.95 percent monthly SLA. Most cloud SLAs are hard to understand, vague, and contain a number of escape clauses for the provider. For example, Amazon's EC2 SLA guarantees 99.95 percent availability of the service within a region over a trailing 365-day period – i.e., downtime is not to exceed roughly 263 minutes (more than four hours) over the year. There is no greater granularity, which means one could encounter a four-hour outage in a single month and the vendor would still not violate the SLA. HP's appears to be stricter; however, a NetworkWorld article quotes Gartner analyst Lydia Leong as saying HP's SLA only applies if customers cannot access any AZs. That would mean customers have to architect their applications to span three or more AZs, each one imposing additional costs on the business. "Amazon's SLA gives enterprises heartburn. HP had the opportunity to do significantly better here, and hasn't. To me, it's a toss-up which SLA is worse," Leong writes. RFG spoke with HP and found its SLA is much better than portrayed in the article. The SLA, it seems, is poorly written, so Leong's interpretation is reasonable (and matches what Amazon requires). However, to obtain credit HP does not require users to run their application in multiple AZs – just one – but they must minimally attempt to run the application in another AZ in the region if the customer's instance becomes inaccessible. HP Cloud Compute is not a perfect match for mission-critical applications, but a number of business-critical applications could take advantage of the HP service. For the record, RFG notes Oracle Corp.'s cloud hosting SLAs are much worse than either Amazon's or HP's. Oracle only offers an SLA of 99.5 percent per calendar month – the equivalent of about 216 minutes, or more than 3.5 hours, of outage per month, NOT including planned downtime and certain other considerations.
IT executives should always scrutinize cloud providers' SLAs and ensure they are acceptable for the service for which they will be used. In RFG's opinion Oracle's SLAs are not acceptable at all and should be renegotiated, or the platform should be removed from consideration. On the cloud storage front, overall prices continue to drop 10 percent or more per year. The greater price decreases are due to the rapid growth of storage (greater than 30 percent per year) and the predominance of newer storage arrays versus older ones. IT executives should treat these prices as benchmarks and work to keep internal storage costs on a similar declining scale. This will require IT executives to retain storage arrays no more than four years and to employ tiering and thin provisioning. Those IT executives who believe keeping ancient spinning iron on the data center floor is the least-cost option will be unable to remain competitive against cloud offerings, which could impair the trust relationship with business and finance executives.
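The downtime allowances quoted above follow directly from the SLA percentage and the measurement window; a short sketch of the arithmetic (the function name is illustrative):

```python
# Allowed downtime implied by an availability SLA over a given window.
def allowed_downtime_minutes(availability_pct, window_minutes):
    """Minutes of downtime permitted before the SLA is breached."""
    return (1.0 - availability_pct / 100.0) * window_minutes

MONTH = 30 * 24 * 60   # 43,200 minutes in a 30-day month
YEAR = 365 * 24 * 60   # 525,600 minutes in a year

# 99.95% per month (HP): about 21.6 minutes allowed per month.
# 99.95% per trailing year (Amazon EC2): about 262.8 minutes (~4.4 hours).
# 99.5% per month (Oracle): about 216 minutes (3.6 hours) per month.
```

Note that the same percentage yields very different allowances depending on the window; a yearly window can permit a multi-hour outage in one month without breaching the SLA.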

Mainframe Survey – Future is Bright

Jan 9, 2013   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

According to the 2012 BMC Software Inc. survey of mainframe users, the mainframe continues to be their platform of choice due to its superior availability, security, centralized data serving and performance capabilities. It will continue to be a critical business tool that will grow driven by the velocity, volume, and variety of applications and data.

Focal Points:

  • According to 90 percent of the 1,243 survey respondents the mainframe is considered to be a long-term solution, and 50 percent of all respondents agreed it will attract new workloads. Asia-Pacific users reported the strongest outlook, as 57 percent expect to rely on the mainframe for new workloads. The top three IT priorities for respondents were keeping IT costs down, disaster recovery, and application modernization. The top priority, keeping costs down, was identified by 69 percent of those surveyed, up from 60 percent in 2011. Disaster recovery was unchanged at 34 percent, while application modernization was selected by 30 percent, virtually unchanged as well. Although availability is considered a top benefit of the mainframe, 39 percent of respondents reported an unplanned outage; however, only 10 percent of organizations stated they experienced any impact from an outage. The primary causes of outages were hardware failures (31 percent), system software failure (30 percent), in-house application failure (28 percent), and change process failure (22 percent).
  • 59 percent of respondents expect MIPS capacity to grow as they modernize and add applications to address business needs. The top four factors for continued investment in the mainframe were the platform's availability advantage (74 percent), security strengths (70 percent), superior centralized data serving (68 percent), and transaction throughput requirements best suited to a mainframe (65 percent). Only 29 percent felt that the costs of migration were too high or that alternative solutions did not offer a reasonable return on investment (ROI), up from 26 percent in the previous two years.
  • There remains continued concern about the shortage of skilled mainframe staff. Only about a third of respondents were very concerned about the skills issue, although at least 75 percent of those surveyed expressed some level of concern. The top methods used to address the shortage are internal training (53 percent), hiring experienced staff (40 percent), outsourcing (37 percent) and automation (29 percent). Additionally, more than half of the respondents stated the mainframe must be incorporated into enterprise-wide management processes. Enterprises are recognizing the growing complexity of the hybrid data center and the need for simple, cross-platform solutions.

RFG POV: Some things never change – mainframes still predominate in certain sectors and will continue to do so over the visible horizon, and yet the staffing challenges linger. Twenty years after mainframes were declared dinosaurs, they remain valuable platforms and are still growing. In fact, mainframes can be the best choice for certain applications and data serving, as they effectively and efficiently handle the variety, velocity, veracity, volume, and vulnerability of applications and data while reducing complexity and cost. RFG's latest study on System z as the lowest-cost database server (http://lnkd.in/ajiUrY) shows the use of the mainframe can cut the costs of IT operations by around 50 percent. However, with Baby Boomers becoming eligible for retirement, there is a greater need for IT executives to adopt more automated, self-learning software and implement better recruitment, training and outsourcing programs. IT executives should evaluate mainframes as the target server platform for clouds, secure data serving, and other environments where the zEnterprise's heterogeneous server ecosystem can be used to share data from a single source and optimize capacity and performance at low cost.

California – Gone Too Far Again

Dec 13, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

California Governor Jerry Brown signed into law Assembly Bill (AB) 1844, which restricts employers' access to employees' social media accounts, and Senate Bill (SB) 1349, which restricts schools' access to students' social media accounts. Because of the overbroad nature of the laws and their definition of social media, enterprises and schools may have difficulty complying while performing their fiduciary responsibilities.

Focal Points:

  • Although both laws expressly claim they regulate only "social media," the definitions used in the laws go well beyond true social media over the Internet. The statutes use the following definition: "social media" means an electronic service or account, or electronic content, including, but not limited to, videos, still photographs, blogs, video blogs, podcasts, instant and text messages, email, online services or accounts, or Internet Web site profiles or locations. In effect, the laws govern all digital content and activity, whether it travels over the Internet or is stored on local storage devices or in-house systems.
  • Additionally, AB 1844, which covers employer-employee relationships, restricts employers' access to "personal social media" while allowing business-related access. However, the law does not define what comprises business or personal social media. It assumes that these classifications are mutually exclusive, which is not always the case. There have been multiple lawsuits over the years that have resulted from disagreements between the parties as to the classification of certain emails, files, and other social media.
  • Many organizations inform employees that email and social media activity performed while using the organization's computer systems is open to access and review by the company. Furthermore, some entities have employees sign an annual agreement to such rights. However, the law makes it illegal for employers to ask for login credentials to "personal" accounts and the statute does not allow access to mixed accounts, which supposedly do not exist.

RFG POV: The new California statutes are reminiscent of CA Senate Bill 1386 (SB 1386), which requires any state agency or entity that holds personal information of customers living in the state to disclose any breach of databases that include personal information, regardless of the business' geographic location. The new laws do more harm than good and allow potential class action civil suits in addition to individual suits. This will make it more difficult for organizations to protect the entity, its image, enterprise data and client/student relationships, and to ensure appropriate conduct guidelines and privacy requirements are being met. In addition, the ambiguities in the wording of the laws leave them open to interpretation, which in turn will eventually lead to lawsuits. Business and IT executives can expect these new laws to extend beyond the borders of the state of California, as did SB 1386. IT executives should review the legislation, discuss with legal advisors all elements of the laws, including the definitions, and explore ways to be proactive with their governance, guidelines and processes to prevent worst-case scenarios from occurring.
