The SSD Revolution

May 4, 2015   //   by admin   //   Reports  //  No Comments

Predictions: Tech Trends – part 1 – 2014

Jan 20, 2014   //   by admin   //   Blog  //  No Comments

RFG Perspective: The global economic headwinds in 2014, which constrain IT budgets, will force IT executives to question certain basic assumptions and reexamine current and target technology solutions. New waves of next-generation technologies are emerging and maturing that challenge the status quo and deserve IT executive attention. These technologies will improve business outcomes as well as spark innovation and drive down the cost of IT services and solutions. IT executives will have to work with business executives to fund the next-generation technologies or find self-funding approaches to implementing them. IT executives will also have to provide the leadership needed to properly select and implement cloud solutions, or control will be assumed by business executives who usually lack the skills needed to manage outsourced IT solutions.

As mentioned in the RFG blog "IT and the Global Economy – 2014," the global economic environment may not be as strong as expected, keeping IT budgets contained or shrinking. IT executives will therefore need to invest in next-generation technology to contain costs, minimize risks, improve resource utilization, and deliver the desired business outcomes. Below are the key areas that RFG believes will be the major technology initiatives commanding the most attention.

Tech-driven Business Transformation

[Figure: Tech-driven Business Transformation. Source: RFG]
Analytics – In 2014, look for analytics service and solution providers to boost usability of their products to encompass the average non-technical knowledge worker by moving closer to a "Google-like" search and inquiry experience in order to broaden opportunities and increase market share.

Big Data – Big Data integration services and solutions will grab the spotlight this year as organizations continue to ratchet up the volume, variety and velocity of data while seeking increased visibility, veracity and insight from their Big Data sources.

Cloud – Infrastructure as a Service (IaaS) will continue to dominate as a cloud solution over Platform as a Service (PaaS), although the latter is expected to gain momentum and market share. Nonetheless, Software as a Service (SaaS) will remain the cloud revenue leader with Salesforce.com the dominant player. Amazon Web Services will retain its overall leadership of IaaS/PaaS providers with Google, IBM, and Microsoft Azure holding onto the next set of slots. Rackspace and Oracle have a struggle ahead to gain market share, even as OpenStack (an open cloud architecture) gains momentum.

Cloud Service Providers (CSPs) – CSPs will face stiffer competition and pricing pressures as larger players acquire or build new capabilities. In addition, innovative open-source based solutions enter the new year with momentum as large, influential organizations look to build and share their own private and public cloud standards and APIs to lower infrastructure costs.

Consolidation – Data center consolidation will continue as users move applications and services to the cloud and to standardized internal platforms that are intended to become cloud-like. Advancements in cloud offerings, along with a diminished concern for security (more of a false hope than reality), will lead more small and mid-sized businesses (SMBs) to shift processing to the cloud and operate fewer internal data center sites. Large enterprises will look to utilize clouds and colocation sites for development/test environments and for handling spikes in capacity rather than open or grow in-house sites.

Containerization – Containerization (or modularization) is gaining acceptance by many leading-edge companies, like Google and Microsoft, but overall adoption is slow, as IT executives have yet to figure out how to deal with the technology. It is worth noting that the power usage effectiveness (PUE) of these solutions is excellent and has been known to be as low as 1.05 (whereas the average remains around 1.90).
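
For context, PUE is simply total facility power divided by the power drawn by the IT equipment itself, so a PUE of 1.05 means only about 5 percent overhead for cooling and power distribution versus roughly 90 percent at a typical site. The short Python sketch below works through that comparison; the kilowatt figures are hypothetical illustrations, not RFG measurements.

```python
# Minimal sketch of the PUE arithmetic referenced above.
# The kW figures are hypothetical illustrations, not RFG measurements.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 500.0                                                        # hypothetical IT equipment draw
containerized = pue(total_facility_kw=525.0, it_equipment_kw=it_load_kw)  # ~1.05
typical       = pue(total_facility_kw=950.0, it_equipment_kw=it_load_kw)  # ~1.90

# Overhead (cooling, power distribution, lighting) per kW of IT load:
print(f"containerized overhead: {containerized - 1:.2f} kW per IT kW")    # ~0.05
print(f"typical site overhead:  {typical - 1:.2f} kW per IT kW")          # ~0.90
```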

Data center transformation – In order to achieve the levels of operational efficiency required, IT executives will have to increase their commitment to data center transformation. The productivity improvements will come through the shift from standalone vertical stack management to horizontal layer management, relationship management, and the use of cloud technologies. One of the biggest effects of this shift is an actual reduction in operations headcount and a reorientation of skills and talents to the new processes. IT executives should expect the transformation to be a minimum of a three-year process. However, IT operations executives should not expect smooth sailing, as development shops will push back to prevent loss of control of their application environments.

3-D printing – 2014 will see the beginning of 3-D printing taking hold. Over time the use of 3-D printing will revolutionize the way companies produce materials and provide support services. Leading-edge companies will be the first to apply the technology this year and thereby gain a competitive advantage.

Energy efficiency/sustainability – While this is not news in 2014, IT executives should make energy efficiency part of other initiatives and a procurement requirement. RFG studies find that the direct energy savings are just the tip of the iceberg (about 10 percent) of what can be achieved by taking advantage of newer technologies. RFG studies also show that the energy savings from retiring hardware kept more than 40 months can usually pay for new, better-utilized equipment. Similarly, an Intel study found that servers more than four years old accounted for four percent of relative performance capacity yet consumed 60 percent of the power.
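
The refresh economics can be sketched quickly. The Python example below estimates the annual energy savings from replacing a large pool of lightly utilized legacy servers with a much smaller set of newer machines; the server counts, wattages, electricity rate and PUE are hypothetical assumptions chosen only to illustrate the 40-month payback logic.

```python
# Back-of-the-envelope server refresh payback, using hypothetical figures to
# illustrate the 40-month rule of thumb cited above.

OLD_SERVERS = 100            # aging, lightly utilized servers to retire (hypothetical)
OLD_WATTS_EACH = 400.0       # average draw per legacy server (hypothetical)
NEW_SERVERS = 20             # newer, better-utilized replacements (hypothetical)
NEW_WATTS_EACH = 350.0       # average draw per new server (hypothetical)
ELECTRICITY_PER_KWH = 0.12   # $/kWh (hypothetical)
PUE = 1.9                    # facility overhead multiplier (typical figure cited above)

def annual_energy_cost(count: int, watts_each: float) -> float:
    kwh_per_year = count * watts_each * 24 * 365 / 1000.0
    return kwh_per_year * PUE * ELECTRICITY_PER_KWH

savings = (annual_energy_cost(OLD_SERVERS, OLD_WATTS_EACH)
           - annual_energy_cost(NEW_SERVERS, NEW_WATTS_EACH))
print(f"annual energy savings freed up for new equipment: ${savings:,.0f}")
```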

Hyperscale computing (HPC) – RFG views hyperscale computing as the next wave of computing that will replace the low end of the traditional x86 server market. The space is still in its infancy, with the primary players being Advanced Micro Devices' (AMD's) SeaMicro solutions and Hewlett-Packard's (HP's) Moonshot server line. While penetration will be low in 2014, the value proposition for these solutions should become evident.

Integrated systems – Integrated systems is a poorly defined computing category that encompasses converged architectures, expert systems, and partially integrated systems as well as expert integrated systems. The major players in this space are Cisco, EMC, Dell, HP, IBM, and Oracle. While these systems have been on the market for more than a year now, revenues are still limited (depending upon whom one talks to, revenues may now exceed $1 billion globally) and adoption is moving slowly. Truly integrated systems do result in productivity, time, and cost savings, and IT executives should be piloting them in 2014 to determine the role and value they can play in corporate data centers.

Internet of things – More and more sensors are being employed and embedded in appliances and other products, which will automate and improve life in IT and in the physical world. From a data center infrastructure management (DCIM) perspective, these sensors will enable IT operations staff to better monitor and manage system capacity and utilization. 2014 will see further advancements and inroads made in this area.

Linux/open source – The trend toward Linux and open source technologies continues with both picking up market share as IT shops find the costs are lower and they no longer need to be dependent upon vendor-provided support. Linux and other open technologies are now accepted because they provide agility, choice, and interoperability. According to a recent survey, a majority of users are now running Linux in their server environments, with more than 40 percent using Linux as either their primary server operating system or as one of their top server platforms. (Microsoft still has the advantage in the x86 platform space and will for some time to come.) OpenStack and the KVM hypervisor will continue to acquire supporting vendors and solutions as players look for solutions that do not lock them into proprietary offerings with limited ways forward. A Red Hat survey of 200 U.S. enterprise decision makers found that internal development of private cloud platforms has left organizations with numerous challenges such as application management, IT management, and resource management. To address these issues, organizations are moving or planning a move to OpenStack for private cloud initiatives, respondents claimed. Additionally, a recent OpenStack user survey indicated that 62 percent of OpenStack deployments use KVM as the hypervisor of choice.

Outsourcing – IT executives will be looking for more ways to improve outsourcing transparency and cost control in 2014. Outsourcers will have to step up to the SLA challenge (mentioned in the People and Process Trends 2014 blog) as well as provide better visibility into change management, incident management, projects, and project management. Correspondingly, with better visibility there will be a shift away from fixed-price engagements to ones with fixed and variable funding pools. Additionally, IT executives will be pushing for more contract flexibility, including payment terms. Application hosting displaced application development in 2013 as the most frequently outsourced function, and 2014 will see the trend continue. The outsourcing of ecommerce operations and disaster recovery will be seen as having strong value propositions when compared to performing the work in-house. However, one cannot assume outsourcing is less expensive than handling the tasks internally.

Software defined x – Software defined networks, storage, data centers, etc. are all the latest hype. The trouble with all new technologies of this type is that the initial hype will not match reality. The new software defined market is quite immature and all the needed functionality will not be out in the early releases. Therefore, one can expect 2014 to be a year of disappointments for software defined solutions. However, over the next three to five years it will mature and start to become a usable reality.

Storage - Flash SSD et al – Storage is once again going through revolutionary changes. Flash, solid state drives (SSD), thin provisioning, tiering, and virtualization are advancing at a rapid pace as are the densities and power consumption curves. Tier one to tier four storage has been expanded to a number of different tier zero options – from storage inside the computer to PCIe cards to all flash solutions. 2014 will see more of the same with adoption of the newer technologies gaining speed. Most data centers are heavily loaded with hard disk drives (HDDs), a good number of which are short stroked. IT executives need to experiment with the myriad of storage choices and understand the different rationales for each. RFG expects the tighter integration of storage and servers to begin to take hold in a number of organizations as executives find the closer placement of the two will improve performance at a reasonable cost point.

RFG POV: 2014 will likely be a less daunting year for IT executives, but keeping pace with technology advances will have to be part of any IT strategy if executives hope to achieve their goals for the year and keep their companies competitive. This will require IT to understand the rate of technology change and adopt a data center transformation plan that incorporates the new technologies at the appropriate pace. Additionally, IT executives will need to invest annually in new technologies to help contain costs, minimize risks, and improve resource utilization. IT executives should consider a turnover plan that upgrades (and transforms) a third of the data center each year. IT executives should collaborate with business and financial executives so that IT budgets and plans are integrated with the business and remain so throughout the year.

The Future of NAND Flash; the End of Hard Disk Drives?

Sep 20, 2013   //   by admin   //   Blog  //  No Comments

Lead Analyst: Gary MacFadden

RFG POV: In the relatively short and fast-paced history of data storage, the buzz around NAND Flash has never been louder, the product innovation from manufacturers and solution providers never more electric. Thanks to mega-computing trends, including analytics, big data, cloud and mobile computing, along with software-defined storage and the consumerization of IT, the demand for faster, cheaper, more reliable, manageable, higher capacity and more compact Flash has never been greater. But how long will the party last?

In this modern era of computing, the art of dispensing predictions, uncovering trends and revealing futures is de rigueur. To quote that well-known trendsetter and fashionista, Cher, "In this business, it takes time to be really good – and by that time, you're obsolete." While meant for another industry, Cher's ruminations seem just as appropriate for the data storage space.

At a time when industry pundits and Flash solution insiders are predicting the end of mass data storage as we have known it for more than 50 years, namely the mechanical hard disk drive (HDD), storage futurists, engineers and computer scientists are paving the way for the next generation of storage beyond NAND Flash – even before Flash has had a chance to become a mature, trusted, reliable, highly available and ubiquitous enterprise class solution. Perhaps we should take a breath before we trumpet the end of the HDD era or proclaim NAND Flash as the data storage savior of the moment.

Short History of Flash

Flash has been commercially available since its invention and introduction by Toshiba in the late 1980s. NAND Flash is known for being at least an order of magnitude faster than HDDs, and because it has no moving parts (it is non-volatile memory, or NVM) it requires far less power. NAND Flash is found in billions of personal devices – mobile phones, tablets, laptops, cameras, and even thumb drives (USB sticks) – and over the last decade it has become more powerful, compact and reliable as prices have dropped, making enterprise-class Flash deployments much more attractive.

At the same time, IOPS-hungry applications such as database queries, OLTP (online transaction processing) and analytics have pushed traditional HDDs to the limit of the technology. To maintain performance measured in IOPS or read/write speeds, enterprise IT shops have employed a number of HDD workarounds such as short stroking, thin provisioning and tiering. While HDDs can still meet the performance requirements of most enterprise-class applications, organizations pay a huge penalty in additional power consumption, data center real estate (it takes 10 or more high-performance HDDs to match the performance of the slowest enterprise-class Flash or solid-state drive (SSD)), and additional administrator, storage and associated equipment costs.
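
A rough IOPS calculation shows where the 10-plus-drives comparison comes from. In the sketch below, the per-device IOPS and wattage figures are assumptions in line with typical published ranges for 15K RPM drives and low-end enterprise SSDs, not measurements.

```python
# Rough arithmetic behind the "10 or more HDDs per entry-level SSD" comparison.
# Per-device figures are assumptions, not measurements.

import math

HDD_IOPS = 180        # 15K RPM enterprise HDD on random reads (assumed)
SSD_IOPS = 2_000      # "slowest" enterprise-class SSD (assumed, conservative)
HDD_WATTS = 10.0      # active power per HDD (assumed)
SSD_WATTS = 6.0       # active power per SSD (assumed)

hdds_needed = math.ceil(SSD_IOPS / HDD_IOPS)            # ~12 drives
print(f"HDDs needed to match one SSD: {hdds_needed}")
print(f"power at that IOPS level: {hdds_needed * HDD_WATTS:.0f} W of HDD "
      f"vs {SSD_WATTS:.0f} W of SSD")
```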

Flash is becoming pervasive throughout the compute cycle. It is now found on DIMM (dual inline memory module) memory cards to help solve the in-memory data persistence problem and improve latency. There are Flash cache appliances that sit between the server and a traditional storage pool to help boost access times to data residing on HDDs as well as server-side Flash or SSDs, and all-Flash arrays that fit into the SAN (storage area network) storage fabric or can even replace smaller, sub-petabyte, HDD-based SANs altogether.

There are at least three different grades of Flash drives, starting with the top-performing, longest-lasting – and most expensive – SLC (single level cell) Flash, followed by MLC (multi-level cell), which doubles the amount of data or electrical charges per cell, and even TLC for triple. As Flash manufacturers continue to push the envelope on Flash drive capacity, the individual cells have gotten smaller; now they are below 20 nm (one nanometer is a billionth of a meter) in width, or tinier than a human virus at roughly 30-50 nm.
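
One way to see the SLC/MLC/TLC trade-off: each additional bit stored per cell doubles the number of distinct charge levels the controller must reliably distinguish within the same tiny cell, which is why capacity rises while margin and endurance fall. A minimal sketch of that relationship:

```python
# Bits per cell vs. charge levels the controller must distinguish; more levels
# per cell means more capacity but tighter margins and lower endurance.
for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} distinct charge levels")
```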

Each cell can only hold a finite amount of charges or writes and erasures (measured in TBW, or total bytes written) before its performance starts to degrade. This program/erase, or P/E, cycle for SSDs and Flash causes the drives to wear out because the oxide layer that stores its binary data degrades with every electrical charge. However, Flash management software that utilizes striping across drives, garbage collection and wear-leveling to distribute data evenly across the drive increases longevity.
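
The endurance math follows directly from those P/E cycles. The sketch below estimates a drive's total-bytes-written budget and expected life under a given write workload; the capacity, rated cycle count, write-amplification factor and daily write volume are all illustrative assumptions.

```python
# Simple endurance estimate from the P/E-cycle discussion above.
# Cell endurance and write-amplification figures are illustrative assumptions.

CAPACITY_GB = 400            # drive capacity (hypothetical)
PE_CYCLES = 3_000            # rated program/erase cycles for MLC (assumed)
WRITE_AMPLIFICATION = 1.5    # overhead from garbage collection / wear leveling (assumed)

# Approximate total bytes written (TBW) the drive can absorb before wear-out:
tbw_tb = CAPACITY_GB * PE_CYCLES / WRITE_AMPLIFICATION / 1000.0

daily_writes_tb = 0.4        # hypothetical workload: 0.4 TB written per day
print(f"rated endurance: ~{tbw_tb:,.0f} TB written")
print(f"estimated life:  ~{tbw_tb / daily_writes_tb / 365:.1f} years")
```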

Honey, I Shrunk the Flash!

As the cells get thinner, below 20 nm, more bit errors occur. New 3D architectures announced and discussed at the Flash Memory Summit (FMS) by a number of vendors hold the promise of replacing the traditional NAND Flash floating gate architecture. Samsung, for instance, announced the availability of its 3D V-NAND, which leverages a Charge Trap Flash (CTF) technology that replaces the traditional floating gate architecture to help prevent interference between neighboring cells and improve performance, capacity and longevity.

Samsung claims the V-NAND offers an "increase of a minimum of 2X to a maximum 10X higher reliability, but also twice the write performance over conventional 10nm-class floating gate NAND flash memory." If 3D Flash proves successful, it is possible that the cells can be shrunk to the sub-2nm size, which would be equivalent to the width of a double-helix DNA strand.

Enterprise Flash Futures and Beyond

Flash appears headed for use in every part of the server and storage fabric, from DIMM to server cache and storage cache and as a replacement for HDD across the board – perhaps even as an alternative to tape backup. The advantages of Flash are many, including higher performance, smaller data center footprint and reduced power, admin and storage management software costs.

As Flash prices continue to drop concomitant with capacity increases, reliability improvements and drive longevity – which today already exceeds the longevity of mechanical-based HDD drives for the vast number of applications – the argument for Flash, or tiers of Flash (SLC, MLC, TLC), replacing HDD is compelling. The big question for NAND Flash is not: when will all Tier 1 apps be running Flash at the server and storage layers; but rather: when will Tier 2 and even archived data be stored on all-Flash solutions?

Much of the answer resides in the growing demands for speed and data accessibility as business use cases evolve to take advantage of higher compute performance capabilities. The old adage that 90%-plus of data that is more than two weeks old rarely, if ever, gets accessed no longer applies. In the healthcare ecosystem, for example, longitudinal or historical electronic patient records now go back decades, and pharmaceutical companies are required to keep clinical trial data for 50 years or more.

Pharmacological data scientists, clinical informatics specialists, hospital administrators, health insurance actuaries and a growing number of physicians regularly plumb the depths of healthcare-related Big Data that is both newly created and perhaps 30 years or more in the making. Other industries, including banking, energy, government, legal, manufacturing, retail and telecom are all deriving value from historical data mixed with other data sources, including real-time streaming data and sentiment data.

All data may not be useful or meaningful, but that hasn't stopped business users from including all potentially valuable data in their searches and queries. More data is apparently better, and faster is almost always preferred, especially for analytics, database and OLTP applications. Backup windows shrink, and recovery operations and other batch jobs often run much faster on Flash.

What Replaces DRAM and Flash?

Meanwhile, engineers and scientists are working hard on replacements for DRAM (dynamic random-access memory) and Flash, introducing MRAM (magneto resistive), PRAM (phase-change), SRAM (static) and RRAM – among others – to the compute lexicon. RRAM or ReRAM (resistive random-access memory) could replace DRAM and Flash, which both use electrical charges to store data. RRAM uses "resistance" to store each bit of information. According to wiseGEEK "The resistance is changed using voltage and, also being a non-volatile memory type, the data remain intact even when no energy is being applied. Each component involved in switching is located in between two electrodes and the features of the memory chip are sub-microscopic. Very small increments of power are needed to store data on RRAM."

According to Wikipedia, RRAM or ReRAM "has the potential to become the front runner among other non-volatile memories. Compared to PRAM, ReRAM operates at a faster timescale (switching time can be less than 10 ns), while compared to MRAM, it has a simpler, smaller cell structure (less than 8F² MIM stack). There is a type of vertical 1D1R (one diode, one resistive switching device) integration used for crossbar memory structure to reduce the unit cell size to 4F² (F is the feature dimension). Compared to flash memory and racetrack memory, a lower voltage is sufficient and hence it can be used in low power applications."

Then there is atomic storage, a nanotechnology that IBM scientists and others are working on today. The approach is to see whether it is possible to store a bit of data on a single atom. To put that in perspective, a single grain of sand contains billions of atoms. IBM is also working on Racetrack memory, a type of non-volatile memory that holds the promise of storing 100 times the capacity of current SSDs.

Flash Lives Everywhere! … for Now

Just as paper and computer tape drives continue to remain relevant and useful, HDD will remain in favor for certain applications, such as sequential processing workloads or when massive, multi-petabyte data capacity is required. And lest we forget, HDD manufacturers continue to improve the speed, density and cost equation for mechanical drives. Also, 90% of data storage manufactured today is still HDD, so it will take a while for Flash to outsell HDD and even for Flash management software to reach the level of sophistication found in traditional storage management solutions.

That said, there are Flash proponents who can't wait for the changeover to happen and don't want or need Flash to reach parity with HDD on features and functionality. One of the most talked-about keynote presentations at last August's FMS was given by Facebook's Jason Taylor, Ph.D., Director of Infrastructure and Capacity Engineering and Analysis. Facebook's and Dr. Taylor's point of view is: "We need WORM or Cold Flash. Make the worst Flash possible – just make it dense and cheap, long writes, low endurance and lower IOPS per TB are all ok."

Other presenters, including the CEO of Violin Memory, Don Basile, and CEO Scott Dietzen of Pure Storage, made relatively bold predictions about when Flash would take over the compute world. Basile showed a 2020 Predictions slide in his deck that stated: "All active data will be in memory." Basile anticipates "everything" (all data) will be in memory within 7 years (except for archive data on disk). Meanwhile, Dietzen is an articulate advocate for all-Flash storage solutions because "hybrids (arrays with Flash and HDD) don't disrupt performance. They run at about half the speed of all-Flash arrays on I/O-bound workloads." Dietzen also suggests that with compression and data deduplication capabilities, Flash has reached or dramatically improved on cost parity with spinning disk.
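
To see how data reduction underpins the cost-parity argument, consider the back-of-the-envelope calculation below; the per-gigabyte prices and the 5:1 reduction ratio are hypothetical assumptions, not vendor quotes.

```python
# Effective cost-per-gigabyte arithmetic behind the data-reduction argument.
# Prices and reduction ratios are hypothetical, not vendor figures.

RAW_FLASH_PER_GB = 2.50      # assumed raw all-flash array $/GB
RAW_HDD_PER_GB = 0.50        # assumed raw performance-HDD array $/GB
REDUCTION_RATIO = 5.0        # assumed combined compression + deduplication on flash

effective_flash = RAW_FLASH_PER_GB / REDUCTION_RATIO
print(f"effective flash cost: ${effective_flash:.2f}/GB vs HDD ${RAW_HDD_PER_GB:.2f}/GB")
```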

Bottom Line

NAND Flash has definitively demonstrated its value for mainstream enterprise performance-centric application workloads. When, how and if Flash replaces HDD as the dominant media in the data storage stack remains to be seen. Perhaps some new technology will leapfrog over Flash and signal its demise before it has time to really mature.

For now, HDD is not going anywhere, as it represents over $30 billion of new sales in the $50-billion-plus total storage market – not to mention the enormous investment that enterprises have in spinning storage media that will not be replaced overnight. But Flash is gaining, and users and their IOPS-intensive apps want faster, cheaper, more scalable and manageable alternatives to HDD.

RFG POV: At least for the next five to seven years, Flash and its adherents can celebrate the many benefits of Flash over HDD. IT executives should recognize that for the vast majority of performance-centric workloads, Flash is much faster, lasts longer and costs less than traditional spinning disk storage. And Flash vendors already have their sights set on Tier 2 apps, such as email, and Tier 3 archival applications. Fast, reliable and inexpensive is tough to beat and for now Flash is the future.

 

Blog: Data Center Optimization Planning

Dec 13, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Every organization should be performing a data center optimization planning effort at least annually. The rate of technology change and the exploding requirements for capacity demand that IT shops challenge their assumptions yearly and revisit best practices to see how they can further optimize their operations. Keeping up with storage capacity requirements on flat budgets is a challenge, given that capacity is growing between 20 and 40 percent annually. This phenomenon is occurring across the IT landscape. Thus, if IT executives want to shift their operations from spending 70-80 percent of their budgets on operations to spending more than half on development and innovation instead, they must invest in planning that enables such change.
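
The squeeze is easy to quantify. The sketch below compounds 20 to 40 percent annual growth over five years from a hypothetical one-petabyte starting point, showing how much more capacity must be managed on the same budget.

```python
# Compounding 20-40 percent annual capacity growth against a flat budget.
# The starting capacity is a hypothetical figure.

capacity_pb = 1.0                      # starting capacity in petabytes (hypothetical)
for year in range(1, 6):
    low = capacity_pb * 1.2 ** year    # 20 percent annual growth
    high = capacity_pb * 1.4 ** year   # 40 percent annual growth
    print(f"year {year}: {low:.1f} - {high:.1f} PB to manage on the same budget")
```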

Optimization planning needs to cover all areas of the data center:

  • facilities,
  • finance,
  • governance,
  • IT infrastructure and systems,
  • processes, and
  • staffing.

RFG finds most companies are greatly overspending due to the inefficiencies of continuing along non-optimized paths in each of these areas, which gives companies the opportunity to reduce operational expenses by more than 10 percent per year for the next decade. In fact, in some areas more than 20 percent could be shaved off.

Facilities.  At a high level, the three areas that IT executives should understand, evaluate, and monitor are facilities design and engineering, power usage effectiveness (PUE), and temperature. Most data center facilities were designed to handle the equipment of the previous century. Times and technologies have changed significantly since then, and the design and engineering assumptions and actual implementations need to be reevaluated. In a similar vein, the PUE for most data centers is far from optimized, which could mean overpaying energy bills by more than 40 percent. On the "easy to fix" front, companies can raise their data center temperatures to normal room temperature or higher, with temperatures in the 80° F range being possible. Just about all equipment built today is designed to operate at temperatures greater than 100° F. For every degree raised, organizations can expect to see power costs reduced by up to four percent. Additionally, facilities and IT executives can monitor their greenhouse gas (GHG) emissions, which are frequently tracked by chief sustainability officers and can be used as a measure of the savings achieved by IT operational efficiency gains.
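
The temperature rule of thumb translates into real money quickly. The sketch below applies the up-to-four-percent-per-degree figure to a hypothetical annual energy bill; both the baseline spend and the setpoint change are illustrative assumptions.

```python
# Illustration of the "up to four percent per degree" rule of thumb above.
# The baseline energy spend and setpoint change are hypothetical.

BASELINE_ANNUAL_POWER_COST = 1_000_000.0   # hypothetical annual data center energy spend ($)
SAVINGS_PER_DEGREE_F = 0.04                # upper-bound saving per degree, per the text
degrees_raised = 5                         # e.g., moving the setpoint from 72° F to 77° F

savings = BASELINE_ANNUAL_POWER_COST * SAVINGS_PER_DEGREE_F * degrees_raised
print(f"estimated upper-bound annual savings: ${savings:,.0f}")   # $200,000
```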

Finance.  IT costs can be reduced by focusing on four key areas: asset management, chargebacks, life cycle management, and procurement. RFG finds many companies are not handling asset management well, which results in an excess of hardware and software being paid for annually. Studies have found this excess cost can easily run to 20 percent of all expenses for end-user devices. The use of chargebacks better ensures IT costs are aligned with user requirements, especially when funding external and internal support services. When it comes to life cycle management, RFG finds too many companies are retaining hardware too long. The optimal life span for servers and storage is 36-40 months; companies that retain this equipment for longer periods can be driving up their overall costs by more than 20 percent. Moreover, the one area that IT consistently fails to understand and underperforms on is procurement. When proper procurement processes and procedures are not followed and standardized, IT can easily spend 50 percent more on hardware, software and services.

Governance.  Governance is a key area of focus because it assures that performance targets are established and tracked and that an ongoing continuous improvement program gets the attention it needs. Additionally, governance can ensure that reasonable risk exposure levels are maintained while the transformation is ongoing.

IT infrastructure and systems.  For each of the IT components – applications, networks, servers, and storage – IT executives should be able to monitor availability, utilization levels, and virtualization levels as well as the level of automation. The greater these levels, the fewer human resources are required to support operations, and the more staffing becomes an independent variable rather than one dependent upon the numbers and types of hardware and software used. Companies also frequently fail to match workload types to the infrastructure most optimized for those workloads, resulting in overspend that can reach 15-30 percent of operating costs for those systems.

Processes.  The major process areas that IT management should be tracking are application instances (especially CRM and ERP), capacity management, provisioning (and decommissioning) rates, storage tiers, and service levels. The better a company is at capacity planning (and the use of clouds), the lower the cost of operations. The faster the provisioning capability, the fewer human resources are required to support operational changes and the lower the likelihood of downtime due to human error. Additionally, RFG finds that the more storage tiers and the more automated the movement of data amongst tiers, the greater the savings. As a rule of thumb, organizations should expect savings of roughly 50 percent as data moves from tier n to tier n+1. In addition to tiering, compression and deduplication are other approaches to storage optimization.
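
The tiering rule of thumb can be applied to a capacity profile to estimate the payoff. In the sketch below, the tier 1 unit cost and the capacity distribution across tiers are hypothetical; only the halving-per-tier assumption comes from the rule of thumb above.

```python
# Sketch of the 50-percent-per-tier rule of thumb applied to a hypothetical
# capacity mix; the tier-1 unit cost and the distribution are assumptions.

TIER1_COST_PER_TB = 2_000.0                    # hypothetical $/TB/year for tier 1
capacity_tb = {0: 5, 1: 50, 2: 200, 3: 745}    # hypothetical TB per tier

def tier_cost_per_tb(tier: int) -> float:
    # Each step down from tier 1 halves the unit cost; tier 0 costs double tier 1.
    return TIER1_COST_PER_TB * (0.5 ** (tier - 1))

tiered = sum(tb * tier_cost_per_tb(t) for t, tb in capacity_tb.items())
flat = sum(capacity_tb.values()) * TIER1_COST_PER_TB
print(f"tiered: ${tiered:,.0f}/yr vs all-tier-1: ${flat:,.0f}/yr")
```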

Staffing.  For most companies today, staffing levels are directly proportional to the number of servers, storage arrays, network nodes, etc. The shift to virtualization and automated orchestration of activities breaks that bond. RFG finds it is now possible for hundreds of servers to be supported by a single administrator and tens to hundreds of terabytes to be handled by a single database administrator. IT executives should also be looking to cross-train staff so that an administrator can support any of the hardware and operating systems.

The above possibilities are what exist today. Technology is constantly improving. The gains will be even greater as time goes on, especially since the technical improvements are more exponential than linear. IT executives should be able to plug these concepts into development of a data center optimization plan and then monitor results on an ongoing basis.

RFG POV: There still remains tremendous waste in the way IT operations are run today. IT executives should be able to reduce costs by more than 40 percent, enabling them to invest more in enhancing current applications and innovation than in keeping the lights on. Moreover, IT executives should be able to cut annual costs by 10 percent per year and potentially keep 40 percent of the savings to invest in self-funding new solutions that can further improve operations.