
Predictions: Tech Trends – part 1 – 2014

Jan 20, 2014   //   by admin   //   Blog  //  No Comments

RFG Perspective: The global economic headwinds in 2014, which constrain IT budgets, will force IT executives to question certain basic assumptions and reexamine current and target technology solutions. There are new waves of next-generation technologies emerging and maturing that challenge the status quo and deserve IT executive attention. These technologies will improve business outcomes as well as spark innovation and drive down the cost of IT services and solutions. IT executives will have to work with business executives to fund the next-generation technologies or find self-funding approaches to implementing them. IT executives will also have to provide the leadership needed for properly selecting and implementing cloud solutions, or control will be assumed by business executives who usually lack the appropriate skills for tackling outsourced IT solutions.

As mentioned in the RFG blog "IT and the Global Economy – 2014," the global economic environment may not be as strong as expected, thereby keeping IT budgets contained or shrinking. Therefore, IT executives will need to invest in next-generation technology to contain costs, minimize risks, improve resource utilization, and deliver the desired business outcomes. Below are a few key areas that RFG believes will be the major technology initiatives that get the most attention.

Tech-driven Business Transformation

Source: RFG
Analytics – In 2014, look for analytics service and solution providers to boost usability of their products to encompass the average non-technical knowledge worker by moving closer to a "Google-like" search and inquiry experience in order to broaden opportunities and increase market share.

Big Data – Big Data integration services and solutions will grab the spotlight this year as organizations continue to ratchet up the volume, variety and velocity of data while seeking increased visibility, veracity and insight from their Big Data sources.

Cloud – Infrastructure as a Service (IaaS) will continue to dominate as a cloud solution over Platform as a Service (PaaS), although the latter is expected to gain momentum and market share. Nonetheless, Software as a Service (SaaS) will remain the cloud revenue leader with Salesforce.com the dominant player. Amazon Web Services will retain its overall leadership of IaaS/PaaS providers with Google, IBM, and Microsoft Azure holding onto the next set of slots. Rackspace and Oracle have a struggle ahead to gain market share, even as OpenStack (an open cloud architecture) gains momentum.

Cloud Service Providers (CSPs) – CSPs will face stiffer competition and pricing pressure as larger players acquire or build new capabilities, and as innovative open-source-based solutions enter the new year with momentum, driven by large, influential organizations looking to build and share their own private and public cloud standards and APIs to lower infrastructure costs.

Consolidation – Data center consolidation will continue as users move applications and services to the cloud and standardized internal platforms that are intended to become cloud-like. Advancements in cloud offerings along with a diminished concern for security (more of a false hope than reality) will lead more small and mid-sized businesses (SMBs) to shift processing to the cloud and operate fewer internal data center sites. Large enterprises will look to utilize clouds and colocation sites for development/test environments and handling spikes in capacity rather than open or grow in-house sites.

Containerization – Containerization (or modularization) is gaining acceptance by many leading-edge companies, like Google and Microsoft, but overall adoption is slow, as IT executives have yet to figure out how to deal with the technology. It is worth noting that the power usage effectiveness (PUE) of these solutions is excellent and has been known to be as low as 1.05 (whereas the average remains around 1.90).
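
The PUE figures above can be read as a simple ratio: PUE is total facility power divided by IT equipment power, so a lower value means less overhead. A minimal sketch of that arithmetic (the 1 MW IT load is a hypothetical figure, not one cited above):

```python
def total_power_kw(it_load_kw: float, pue: float) -> float:
    """Facility power implied by an IT load and a PUE ratio (PUE = total / IT)."""
    return it_load_kw * pue

it_load = 1000.0  # hypothetical 1 MW of IT equipment load

# A containerized module at PUE 1.05 needs ~1,050 kW of facility power;
# a typical data center at PUE 1.90 needs ~1,900 kW for the same IT load.
print(round(total_power_kw(it_load, 1.05)))  # 1050
print(round(total_power_kw(it_load, 1.90)))  # 1900
```

At the average PUE, nearly half the facility's power is overhead (cooling, power distribution) rather than compute.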

Data center transformation – In order to achieve the levels of operational efficiency required, IT executives will have to increase their commitment to data center transformation. The productivity improvements will be achieved through the shift from standalone vertical stack management to horizontal layer management, relationship management, and the use of cloud technologies. One of the biggest effects of this shift is an actual reduction in operations headcount and a reorientation of skills and talents to the new processes. IT executives should expect the transformation to take a minimum of three years. However, IT operations executives should not expect smooth sailing, as development shops will push back to prevent loss of control of their application environments.

3-D printing – 2014 will see the beginning of 3-D printing taking hold. Over time the use of 3-D printing will revolutionize the way companies produce materials and provide support services. Leading-edge companies will be the first to apply the technology this year and thereby gain a competitive advantage.

Energy efficiency/sustainability – While this is not new in 2014, IT executives should be making it a part of other initiatives and a procurement requirement. RFG studies find that energy savings are just the tip of the iceberg (about 10 percent of the savings) achievable when taking advantage of newer technologies. RFG studies also show that the energy savings from retiring hardware kept more than 40 months can usually pay for new, better-utilized equipment. Or, as an Intel study found, servers more than four years old accounted for four percent of the relative performance capacity yet consumed 60 percent of the power.
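
The Intel figures quoted above imply a striking efficiency gap, which a little arithmetic makes explicit (the fleet shares are taken from the study as quoted; the calculation itself is only illustrative):

```python
# Servers more than four years old: 4% of relative performance, 60% of power.
old_share_capacity = 0.04
old_share_power = 0.60

# The remaining, newer servers therefore supply 96% of performance on 40% of power.
new_share_capacity = 0.96
new_share_power = 0.40

# Performance delivered per unit of power, old vs. new fleet (arbitrary units):
old_perf_per_watt = old_share_capacity / old_share_power
new_perf_per_watt = new_share_capacity / new_share_power

print(round(new_perf_per_watt / old_perf_per_watt))  # 36: newer gear ~36x more efficient
```

On those numbers, the newer fleet delivers roughly 36 times more work per watt, which is the core of the refresh argument.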

Hyperscale computing (HPC) – RFG views hyperscale computing as the next wave of computing that will replace the low end of the traditional x86 server market. The space is still in its infancy, with the primary players being Advanced Micro Devices' (AMD's) SeaMicro solutions and Hewlett-Packard's (HP's) Moonshot server line. While penetration will be low in 2014, the value proposition for HPC solutions should become evident.

Integrated systems – Integrated systems are a poorly defined computing category that encompasses converged architecture, expert systems, and partially integrated systems as well as expert integrated systems. The major players in this space are Cisco, EMC, Dell, HP, IBM, and Oracle. While these systems have been on the market for more than a year now, revenues are still limited (depending upon whom one talks to, revenues may now exceed $1 billion globally) and adoption is moving slowly. Truly integrated systems do deliver productivity, time, and cost savings, and IT executives should be piloting them in 2014 to determine the role and value they can play in corporate data centers.

Internet of things – More and more sensors are being employed and embedded in appliances and other products, which will automate and improve life in IT and in the physical world. From a data center infrastructure management (DCIM) perspective, these sensors will enable IT operations staff to better monitor and manage system capacity and utilization. 2014 will see further advancements and inroads made in this area.

Linux/open source – The trend toward Linux and open source technologies continues with both picking up market share as IT shops find the costs are lower and they no longer need to be dependent upon vendor-provided support. Linux and other open technologies are now accepted because they provide agility, choice, and interoperability. According to a recent survey, a majority of users are now running Linux in their server environments, with more than 40 percent using Linux as either their primary server operating system or as one of their top server platforms. (Microsoft still has the advantage in the x86 platform space and will for some time to come.) OpenStack and the KVM hypervisor will continue to acquire supporting vendors and solutions as players look for solutions that do not lock them into proprietary offerings with limited ways forward. A Red Hat survey of 200 U.S. enterprise decision makers found that internal development of private cloud platforms has left organizations with numerous challenges such as application management, IT management, and resource management. To address these issues, organizations are moving or planning a move to OpenStack for private cloud initiatives, respondents claimed. Additionally, a recent OpenStack user survey indicated that 62 percent of OpenStack deployments use KVM as the hypervisor of choice.

Outsourcing – IT executives will be looking for more ways to improve outsourcing transparency and cost control in 2014. Outsourcers will have to step up to the SLA challenge (mentioned in the People and Process Trends 2014 blog) as well as provide better visibility into change management, incident management, projects, and project management. Correspondingly, with better visibility there will be a shift away from fixed-price engagements to ones with fixed and variable funding pools. Additionally, IT executives will be pushing for more contract flexibility, including payment terms. Application hosting displaced application development in 2013 as the most frequently outsourced function and 2014 will see the trend continue. The outsourcing of ecommerce operations and disaster recovery will be seen as having strong value propositions when compared to performing the work in-house. However, one cannot assume outsourcing is less expensive than handling the tasks internally.

Software defined x – Software defined networks, storage, data centers, etc. are all the latest hype. The trouble with all new technologies of this type is that the initial hype will not match reality. The new software defined market is quite immature and all the needed functionality will not be out in the early releases. Therefore, one can expect 2014 to be a year of disappointments for software defined solutions. However, over the next three to five years it will mature and start to become a usable reality.

Storage - Flash SSD et al – Storage is once again going through revolutionary changes. Flash, solid state drives (SSD), thin provisioning, tiering, and virtualization are advancing at a rapid pace as are the densities and power consumption curves. Tier one to tier four storage has been expanded to a number of different tier zero options – from storage inside the computer to PCIe cards to all flash solutions. 2014 will see more of the same with adoption of the newer technologies gaining speed. Most data centers are heavily loaded with hard disk drives (HDDs), a good number of which are short stroked. IT executives need to experiment with the myriad of storage choices and understand the different rationales for each. RFG expects the tighter integration of storage and servers to begin to take hold in a number of organizations as executives find the closer placement of the two will improve performance at a reasonable cost point.

RFG POV: 2014 will likely be a less daunting year for IT executives but keeping pace with technology advances will have to be part of any IT strategy if executives hope to achieve their goals for the year and keep their companies competitive. This will require IT to understand the rate of technology change and adopt a data center transformation plan that incorporates the new technologies at the appropriate pace. Additionally, IT executives will need to invest annually in new technologies to help contain costs, minimize risks, and improve resource utilization. IT executives should consider a turnover plan that upgrades (and transforms) a third of the data center each year. IT executives should collaborate with business and financial executives so that IT budgets and plans are integrated with the business and remain so throughout the year.

The Little Mainframe That Could

Aug 23, 2013   //   by admin   //   Blog  //  No Comments

RFG Perspective: The just-launched IBM Corp. zEnterprise BC12 servers are very competitive mainframes that should be attractive to organizations with revenues in excess of, or expanding to, $100 million. The entry level mainframes that replace last generation's z114 series can consolidate up to 40 virtual servers per core or up to 520 in a single footprint for as low as $1.00 per day per virtual server. RFG projects that the zBC12 ecosystem could be up to 50 percent less expensive than comparable all-x86 distributed environments. IT executives running Java or Linux applications or eager to eliminate duplicative shared-nothing databases should evaluate the zBC12 ecosystem to see if the platform can best meet business and technology requirements.

Contrary to public opinion (and that of competitive hardware vendors) the mainframe is not dead, nor is it dying. In the last 12 months the zEnterprise mainframe servers have extended their performance growth for the tenth straight year, according to IBM. The latest MIPS (millions of instructions per second) installed base jumped 23 percent year-over-year and revenues jumped 10 percent. There have been 210 new accounts since the zEnterprise launch as well as 195 zBX units shipped. More than 25 percent of all MIPS are IFLs, specialty engines that run Linux only, and three-fourths of the top 100 zEnterprise customers have IFLs installed. The ISV base continues to grow with more than 7,400 applications available and more than 1,000 schools in 67 countries participating in the IBM Academic Initiative for System z. This is not a dying platform but one gaining ground in an overall stagnant server market. The new zBC12 will enable the mainframe platform to grow further and expand into lower-end markets.

zBC12 Basics

The zBC12 is faster than the z114, using a 4.2 GHz 64-bit processor, and has twice the maximum memory of the z114 at 498 GB. The zBC12 can be leased starting at $1,965 a month, depending upon the enterprise's creditworthiness, or it can be purchased starting at $75,000. RFG has done multiple TCO studies on the zEnterprise Enterprise Class server ecosystems and estimates the zBC12 ecosystem could be 50 percent less expensive than distributed x86 environments having the equivalent computing power.
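
As a rough sanity check on the acquisition options quoted above, dividing the purchase price by the monthly lease rate gives a naive break-even point (financing costs, maintenance, and residual value are deliberately ignored here):

```python
# Quoted zBC12 entry figures: lease from $1,965/month or purchase from $75,000.
lease_per_month = 1965.0
purchase_price = 75000.0

# Months of leasing before cumulative lease payments exceed the purchase price.
break_even_months = purchase_price / lease_per_month
print(round(break_even_months, 1))  # 38.2
```

On these figures alone, leasing is cheaper for horizons under roughly three years, which is also about the refresh cadence RFG recommends elsewhere in this piece.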

On the analytics side, the zBC12 offers the IBM DB2 Analytics Accelerator that IBM says offers significantly faster performance for workloads such as Cognos and SPSS analytics. The zBC12 also attaches to Netezza and PureData for Analytics appliances for integrated, real-time operational analytics.

Cloud, Linux and Other Plays

On the cloud front, IBM is a key contributor to OpenStack, an open and scalable operating system for private and public clouds. OpenStack was initially developed by RackSpace Holdings and currently has a community of more than 190 companies supporting it including Dell Inc., Hewlett-Packard Co. (HP), IBM, and Red Hat Inc. IBM has also added its z/VM Hypervisor and z/VM Operating System APIs for use with OpenStack. By using this framework, public cloud service providers and organizations building out their own private clouds can benefit from zEnterprise advantages such as availability, reliability, scalability, security and costs.

As stated above, Linux now accounts for more than 25 percent of all System z workloads, which can run on zEnterprise systems with IFLs or on a Linux-only system. The standalone Enterprise Linux Server (ELS) uses the z/VM virtualization hypervisor and has more than 3,000 tested Linux applications available. IBM provides a number of specially-priced zEnterprise Solution Editions, including Cloud-Ready for Linux on System z, which turns the mainframe into an Infrastructure-as-a-Service (IaaS) platform. Additionally, the zBC12 comes with EAL5+ security certification, one of the highest levels of protection available on a commercial server.

The zBC12 is an ideal candidate for mid-market companies to act as the primary data server platform. RFG believes organizations will save up to 50 percent of their IT ecosystem costs if the mainframe handles all the data serving, since it provides a shared-everything data storage environment. Distributed computing platforms are designed for shared-nothing data storage, which means duplicate databases must be created for each application running in parallel. Thus, if there are a dozen applications using the customer database, then there are 12 copies of the customer file in use simultaneously. These must be kept in sync as well as possible. The costs for all the additional storage and administration can make the distributed solution more costly than the zBC12 for companies with revenues in excess of $100 million. IT executives can architect the systems as ELS only or with a mainframe central processor, IFLs and zBX for Microsoft Corp. Windows applications, depending on the configuration needs.
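
The duplicate-database cost described above can be illustrated with hypothetical numbers (the database size, application count, and per-TB cost below are assumptions for illustration, not RFG figures):

```python
# Shared-nothing: each of the 12 applications keeps its own copy of the
# customer database. Shared-everything: one copy serves them all.
db_size_tb = 5.0           # hypothetical customer database size
apps = 12                  # applications using the database (from the text)
cost_per_tb_year = 2000.0  # hypothetical fully loaded storage cost, $/TB/year

shared_nothing_cost = db_size_tb * apps * cost_per_tb_year
shared_everything_cost = db_size_tb * 1 * cost_per_tb_year

print(shared_nothing_cost - shared_everything_cost)  # 110000.0
```

Even before counting the administration and synchronization labor, the duplicated copies add $110,000 a year in this sketch; the gap scales linearly with database size and application count.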

Summary

The mainframe myths have misled business and IT executives into believing mainframes are expensive and outdated, and led to higher data center costs and sub-optimization for mid-market and larger companies. With the new zEnterprise BC12 IBM has an effective server platform that can counter the myths and provide IT executives with a solution that will help companies contain costs, become more competitive, and assist with a transformation to a consumption-based usage model.

RFG POV: Each server platform is architected to execute certain types of application workloads well. The BC12 is an excellent server solution for applications requiring high availability, reliability, resiliency, scalability, and security. The mainframe handles mixed workloads well, is best of breed at data serving, and can excel in cross-platform management and performance using its IFLs and zBX processors. IT executives should consider the BC12 when evaluating platform choices for analytics, data serving, packaged enterprise applications such as CRM and ERP systems, and Web serving environments.

Disruptive Changes

Apr 25, 2013   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Amazon Inc. and Microsoft Corp. lowered their pricing for certain cloud offerings in attempts to maintain leadership and/or preserve customers. Similarly, Hewlett-Packard Co. (HP) launched its next-generation Moonshot hyperscale servers. Meanwhile, IDG Connect, the demand generation division of International Data Group (IDG), released its survey findings that show there may be a skills shortage when it comes to the soft skills required when communicating beyond the IT walls.

Focal Points:

  • Earlier this month Amazon reduced the prices it charges for its Windows on-demand servers by up to 26 percent. This brought its pricing within pennies of Microsoft's Windows Azure cloud fees. The price reductions apply across Amazon's standard (m1), second-generation standard (m3), high-memory (m2), and high-CPU (c1) instance families. CEO Jeff Bezos stated in the Amazon annual report that the strategy of cutting prices before the company needs to, and developing technologies before there is a financially motivating factor, is what protects the company from unexpected market shifts. Microsoft has responded by aggressively cutting its prices by 21 to 33 percent for hosting and processing customer online data. In order for customers to qualify for the cuts they must make monthly commitments to Azure for either six or 12 months. Microsoft also is making its Windows Azure Virtual Network technology (codenamed "Brooklyn") generally available effective April 16. Windows Azure Virtual Network is designed to allow companies to extend their networks by enabling secure site-to-site VPN connectivity between the enterprise and the Windows Azure cloud.
  • HP launched its initial Moonshot servers, which use Intel Corp. Atom low-cost, low-energy microprocessors. This next generation of servers is the first wave of hyperscale software-defined server computing models to be offered by HP. These particular servers are designed to be used in dedicated hosting and Web front-end environments. The company stated that two more "leaps" will be out this year that will be targeted to handle other specific workloads. HP claims its architecture can scale 10:1 over existing offerings while providing eight times the efficiency. The Moonshot 1500 uses Intel Atom S1200 microprocessors and a 4.3U (7.5-inch-tall) chassis that hosts 45 "Gemini" server cartridges; up to 1,800 quad-core servers will fit into a 42U rack. Other x86 chips from Advanced Micro Devices Inc. (AMD), plus ARM processors from Calxeda Inc., Texas Instruments Inc., and Applied Micro Circuits Corp. (AMCC), are also expected to be available in the "Gemini" cartridge form factor. The first Moonshot servers support Linux, but are compatible with Windows, VMware and traditional enterprise applications. Pricing starts at $61,875 for the enclosure, 45 HP ProLiant Moonshot servers and an integrated switch, according to HP officials. (For more on this topic see this week's Research Note "HP's Moonshot – the Launch.")
  • According to a new study by IDG Connect, 83 percent of European respondents believe there is no IT skills shortage while 93 percent of U.S. respondents definitely feel there is a gap between the technical skills IT staff possess and the skills needed by the respondents' companies. IDG attributes this glaring differentiation to what are loosely defined as "hard" (true technical skills and competencies) and "soft" (business, behavioral, communications, and interpersonal) skills. The European respondents focused on hard skills while their American counterparts were more concerned about the soft skills, which will become more prevalent within IT as it goes through a transformation to support the next-generation data center environments and greater integration with the business. As IT becomes more integrated with the business and operational skill requirements shift, IDG concludes "companies can only be as good as the individuals that work within them. People … are capable of creative leaps of thinking and greatness that surpass all machines. This means that any discussion on IT skills, and any decision on the qualities required for future progression are fundamental to innovation. This is especially true in IT, where the role of the CIO is rapidly expanding within the enterprise and the department as a whole is becoming increasingly important to the entire business. It seems IT is forever teetering on the brink of bigger and better things - and it is up to the people within it to maximize this potential."
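
The Moonshot entry pricing quoted in the second bullet works out to a simple per-server unit cost, which is the number worth comparing against traditional 1U servers:

```python
# Quoted Moonshot bundle: $61,875 for the enclosure, 45 ProLiant Moonshot
# server cartridges, and an integrated switch.
bundle_price = 61875.0
servers_per_bundle = 45

# Effective price per server, with chassis and switch amortized in.
print(bundle_price / servers_per_bundle)  # 1375.0
```

At $1,375 per server including shared infrastructure, the pitch is density and energy efficiency rather than per-node performance.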

RFG POV: IT always exists in a state of disruptive innovation and the next decade will be no different. Whether it is a shift to the cloud, hyperscale computing, software-defined data center or other technological shifts, IT must be prepared to deal with the business and pricing models that arise. Jeff Bezos is right not to rest on his laurels, constantly pushing the envelope in pricing and services. IT executives need to do the same and deliver comparable services at prices that appeal to the business while covering costs. This requires keeping current on technology and having the staff on board that can solve the business problems and deliver innovative solutions that enable the organization to remain competitive. RFG expects the staffing dilemma to emerge over the next few years as data centers transform to meet the next generation of business and IT needs. At that time most IT staff will need not their current skills but ones that allow them to work with the business, providers and others to deliver solutions built on logical platforms (rather than physical infrastructure). Only a few staff will need to know the nuts and bolts of the hardware and physical layouts. This paradigm shift in staff capabilities and skills must be anticipated if IT executives do not want to be caught behind the curve and left to struggle with catching up with demand. IT executives should be developing their next-generation IT development and operations strategies, determining skills needed and the gap, and then begin a career planning and weaning-out process so that IT will be able to provide the leadership and skills needed to support the business over the next decade of disruptive innovation. Additionally, IT executives should determine if Moonshot servers are applicable in their current or target environments, and if so, conduct a pilot when the time is right.

HP Cloud Services, Cloud Pricing and SLAs

Jan 9, 2013   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Hewlett-Packard Co. (HP) announced that HP Cloud Compute became generally available in December 2012, while HP Cloud Block Storage entered beta at the same time. HP claims its Cloud Compute has an industry-leading availability service level agreement (SLA) of 99.95 percent. Amazon Inc.'s S3 and Microsoft Corp.'s Windows Azure clouds reduced their storage pricing.

Focal Points:

  • HP released word that HP Cloud Compute moved to general availability on Dec. 5, 2012 and will offer a 99.95 percent monthly SLA (a maximum of about 22 minutes of downtime per month). The company extended the 50 percent discount on pricing until January. The HP Compute cloud is designed to allow businesses of all sizes to move their production workloads to the cloud. There will be three separate availability zones (AZs) per region. It supports Linux and Windows operating systems and comes in six different instance sizes, with prices starting at $0.04/hour. HP is currently supporting the Fedora, Debian, CentOS, and Ubuntu Linux distributions, but not Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES). On the Windows side, HP is live with Windows Server 2008 SP2 and R2 while Windows Server 2012 is in the works. There are sites today on the East and West coasts of the U.S., with a European facility to become operational in 2013. Interestingly, HP built its cloud using ProLiant servers running OpenStack and not CloudSystem servers. Meanwhile, HP's Cloud Block Storage moved to public beta on Dec. 5, 2012; customers will not be charged until January, at which time pricing will be discounted by 50 percent. Users can create custom storage volumes from 1 GB to 2 TB. HP claims high availability for this service as well, and says each storage volume is automatically replicated within the same availability zone.
  • Amazon is dropping its S3 storage pricing by approximately 25 percent. The first TB/month goes from $0.125 per GB/month to $0.095 per GB/month, a 24 percent reduction. The price for the next 49 TB falls from $0.110 to $0.080 per GB/month, while the next 450 TB drops from $0.095 to $0.070. This brings Amazon's pricing in line with Google Inc.'s storage pricing. According to an Amazon executive, S3 stores well over a trillion objects and handles 800,000 requests a second. Prices have been cut 23 times since the service was launched in 2006.
  • In reaction to Amazon's actions Microsoft's Windows Azure storage pricing has again been reduced by up to 28 percent to remain competitive. In March 2012 Azure lowered its storage pricing by 12 percent. Geo-redundant storage has more than 400 miles of separation between replicas and is the default storage mode.
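
The SLA figure in the first bullet follows from straightforward downtime arithmetic, which can be sketched as:

```python
def allowed_downtime_minutes(availability: float, window_minutes: float) -> float:
    """Maximum downtime permitted by an availability percentage over a window."""
    return (1.0 - availability) * window_minutes

# A 99.95% monthly SLA over a 30-day month (43,200 minutes):
month_minutes = 30 * 24 * 60
print(round(allowed_downtime_minutes(0.9995, month_minutes), 1))  # 21.6
```

That 21.6 minutes is the "maximum of about 22 minutes of downtime per month" cited above.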

Cloud storage pricing (per GB/month):

  Tier (Google)    Google    Tier (Amazon)    Amazon S3    Azure (geo-redundant)    Azure (local-redundant)
  First TB         $0.095    First TB         $0.095       $0.095                   $0.070
  Next 9 TB        $0.085    Next 49 TB       $0.080       $0.080                   $0.065
  Next 90 TB       $0.075    –                –            –                        –
  Next 400 TB      $0.070    –                –            –                        –

Source: The Register
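
Tiered per-GB pricing like the figures above is applied marginally: each tier's rate covers only the gigabytes that fall inside that tier. A minimal sketch using the post-cut Amazon S3 rates quoted in the text:

```python
# (tier size in GB, $ per GB/month) — post-cut S3 rates from the text.
S3_TIERS = [
    (1 * 1024, 0.095),    # first TB
    (49 * 1024, 0.080),   # next 49 TB
    (450 * 1024, 0.070),  # next 450 TB
]

def monthly_cost(gb: float, tiers=S3_TIERS) -> float:
    """Monthly storage bill for `gb` gigabytes under marginal tiered pricing."""
    cost, remaining = 0.0, gb
    for size, rate in tiers:
        used = min(remaining, size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(round(monthly_cost(10 * 1024), 2))  # 834.56 — 10 TB stored
```

Note the blended rate for 10 TB (about $0.0815/GB) sits between the first two tier prices; simply multiplying total volume by the top-tier rate would overstate the bill.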

RFG POV: HP's Cloud Compute offering for production systems is most notable for its 99.95 percent monthly SLA. Most cloud SLAs are hard to understand, vague and contain a number of escape clauses for the provider. For example, Amazon's EC2 SLA guarantees 99.95 percent availability of the service within a region over a trailing 365 day period – i.e., downtime is not to exceed roughly 260 minutes (more than four hours) over the year period. There is no greater granularity, which means one could encounter a four-hour outage in a month and the vendor would still not violate the SLA. HP's appears to be stricter; however, in a NetworkWorld article, HP's SLA only applies if customers cannot access any AZs, according to Gartner analyst Lydia Leong. That means customers would potentially have to architect their applications to span three or more AZs, each one imposing additional costs on the business. "Amazon's SLA gives enterprises heartburn. HP had the opportunity to do significantly better here, and hasn't. To me, it's a toss-up which SLA is worse," Leong writes. RFG spoke with HP and found its SLA is much better than portrayed in the article. The SLA, it seems, is poorly written, so Leong's interpretation is reasonable (and matches what Amazon requires). However, to obtain credit HP does not require users to run their application in multiple AZs – just one, but they must minimally try to run the application in another AZ in the region if the customer's instance becomes inaccessible. The HP Cloud Compute is not a perfect match for mission-critical applications but there are a number of business-critical applications that could take advantage of the HP service. For the record, RFG notes Oracle Corp.'s cloud hosting SLAs are much worse than either Amazon's or HP's. Oracle only offers an SLA of 99.5 percent per calendar month – the equivalent of roughly 2,600 minutes, or more than 43 hours, of outage per year, NOT including planned downtime and certain other considerations.
IT executives should always scrutinize the cloud provider's SLAs and ensure they are acceptable for the service for which they will be used. In RFG's opinion Oracle's SLAs are not acceptable at all and should be renegotiated, or the platform should be removed from consideration. On the cloud storage front, overall prices continue to drop 10 percent or more per year. The greater price decreases are due to the rapid growth of storage (greater than 30 percent per year) and the predominance of newer storage arrays versus older ones. IT executives should be using these prices as benchmarks and working to keep internal storage costs on a similar declining scale. This will require IT executives to retain storage arrays four years or less, and to employ tiering and thin provisioning. Those IT executives who believe keeping ancient spinning iron on the data center floor is the least-cost option will be unable to remain competitive against cloud offerings, which could impair the trust relationship with business and finance executives.
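
The measurement-window point made above can be sketched directly: the same 99.95 percent figure tolerates a single four-hour outage when judged over a trailing year, but not when judged month by month:

```python
def breaches_sla(outage_minutes: float, availability: float,
                 window_minutes: float) -> bool:
    """True if a single outage exceeds the SLA's downtime budget for the window."""
    allowed = (1.0 - availability) * window_minutes
    return outage_minutes > allowed

outage = 4 * 60  # one four-hour outage

# HP-style 99.95% judged monthly: budget is ~21.6 minutes, so this breaches.
print(breaches_sla(outage, 0.9995, 30 * 24 * 60))   # True

# Amazon-style 99.95% judged over a trailing year: budget is ~263 minutes,
# so the same outage does NOT breach the SLA.
print(breaches_sla(outage, 0.9995, 365 * 24 * 60))  # False
```

The availability percentage alone is therefore not comparable across providers; the measurement window determines how painful a single outage can be without triggering credits.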

More Risk Exposures

Nov 30, 2012   //   by admin   //   Blog  //  No Comments

Lead Analyst: Cal Braunstein

Hackers leaked more than one million user account records from over 100 websites, including those of banks and government agencies. Moreover, critical zero-day flaws were found in recently-patched Java code, and a SCADA software vendor was found shipping insecure defaults, including a hidden factory account with a hard-coded password. Meanwhile, millions of websites hosted by the world's largest domain registrar, GoDaddy.com LLC, were knocked offline for a day.

Focal Points:

  • The hacker group, Team GhostShell, raided more than 100 websites and leaked a cache of more than one million user account records. Although the numbers claimed have not been verified, security firm Imperva noted that some breached databases contained more than 30,000 records. Victims of the attack included banks, consulting firms, government agencies, and manufacturing firms. Prominent amongst the data stolen from the banks were personal credit histories and current standing. A large portion of the pilfered files comes from content management systems (CMS), which likely indicates that the hackers exploited the same CMS flaw at multiple websites. Also taken were usernames and passwords. Per Imperva "the passwords show the usual "123456" problem.  However, one law firm implemented an interesting password system where the root password, "law321" was pre-pended with your initials.  So if your name is Mickey Mouse, your password is "mmlaw321".   Worse, the law firm didn't require users to change the password.  Jeenyus!" The group threatened to carry out further attacks and leak more sensitive data.
  • A critical Java security vulnerability that surfaced at the end of August leverages two zero-day flaws, and the revelation comes with news that Oracle knew about the holes as early as April 2012. Microsoft Corp. Windows, Apple Inc. Mac OS X, and Linux desktops running multiple browser platforms are all vulnerable to attack. The exploit code first uses one vulnerability to gain access to the restricted sun.awt.SunToolkit class; a second bug is then used to disable the SecurityManager and ultimately break out of the Java sandbox. All versions of Java 7 are vulnerable to the so-called Gondvv exploit, which was introduced in the July 2011 Java 7.0 release, so anyone who has left it unpatched is at risk; notably, older Java 6 versions appear to be immune. Oracle Corp. has yet to issue an advisory on the problem but is studying it; for now the best protection is to disable or uninstall Java in Web browsers.
  • Separately, SafeNet Inc. has tagged a SCADA maker for default insecurity. The firm uncovered a hidden factory account, complete with a hard-coded password, in switch management software made by Belden-owned GarrettCom Inc. The Department of Homeland Security's (DHS) ICS-CERT advisory states that the vendor's Magnum MNS-6K management application allows an attacker to gain administrative privileges over the application, and thereby access to the SCADA switches it manages. The DHS advisory also notes that a patch issued in May removes the vulnerability; however, the patch notice did not document the change. The vendor claims 75 of the top 100 power companies as customers.
  • GoDaddy has stated that the daylong DNS outage that downed many of its customers' websites was not caused by a hacker, as the supposed perpetrator claimed; in fact, the service interruption was not the result of a DDoS attack at all. Instead the provider says the downtime was caused by "a series of network events that corrupted router tables." The firm says it has since corrected the elements that triggered the outage and has implemented measures to prevent a similar event from happening again. Customer websites were inaccessible for six hours. GoDaddy claims to have as many as 52 million websites registered but has not disclosed how many were affected by the outage.
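The law firm's password scheme described in the first focal point can be reproduced in a couple of lines, which is precisely the problem: anyone with a staff directory can derive every password. The name used here is the example from the Imperva quote; the function name is ours.

```python
# Sketch of the weak scheme Imperva described: a fixed root password
# ("law321") prefixed with the user's initials. Because nothing here is
# secret or user-chosen, knowing one employee's name yields their password.

def derived_password(full_name: str, root: str = "law321") -> str:
    initials = "".join(part[0].lower() for part in full_name.split())
    return initials + root

print(derived_password("Mickey Mouse"))  # -> mmlaw321
```

A scheme like this fails the most basic test: the "password" is a pure function of public information, so compromising one account (or simply reading the policy) compromises all of them.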

RFG POV: Risk management must be a mandatory part of the process for Web and operational technology (OT) appliances and portals. User requirements come from more places than the user department that requested the functionality; they also come from areas such as audit, legal, risk, and security. IT should always ensure those inputs and requirements are met. Unfortunately this "flaw" has been an IT shortfall for decades, and it seems new generations keep perpetuating the shortcomings of the past. As to the SCADA bugs, RFG notes that not all utilities are current with the Federal Energy Regulatory Commission (FERC) cyber security requirements or updates, which is a major U.S. exposure. IT executives should look to automate the update process so that utility risk exposures are minimized. The GoDaddy outage is one of those unfortunate human errors that will occur regardless of the quality of the processes in place. But it is a reminder that cloud computing brings its own risks, which must be probed and evaluated before making a final decision. Unlike internal outages, where IT has control and the ability to fix the problem, users are at the mercy of outsourced sites and the terms and conditions of the contracts they signed. In this case GoDaddy not only apologized to its users but offered customers a 30 percent across-the-board discount as part of its apology. Not many providers are so generous. IT executives and procurement staff should look into how vendors have responded to past failures, and ensure the contracts protect them, before committing to such services.