
IaaS and PaaS: Mature Enough for Financial Services Firms?

Aug 1, 2014   //   by admin   //   Reports

RFG Perspective: Since 2008 financial services firms have been under constant pressure to grow revenues and contain costs, which is driving IT executives to invest in cloud computing. Business executives do not value infrastructure per se; consequently, there is a push towards cloud computing to drive down costs as well as to enhance agility. Moreover, IT executives want to be able to implement Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) cloud solutions that are vendor-independent, first-of-a-kind implementations that provide portability. These solutions must also satisfy regulators, which, when it comes to compliance and security, is no simple task. However, since cloud computing standards are still nascent, with many conflicting APIs and standards in the works, IT executives must deal with an opportunity/cost challenge: wait for the technology to mature, or implement cloud solutions now and risk the need for change.

Business Imperatives:

  • IaaS APIs, solutions and standards are still immature and in a state of flux. Even OpenStack, which claims to offer a robust infrastructure solution, is still initiating new projects (e.g., Murano and Solum) to address additional infrastructure requirements. Unfortunately, each IaaS software methodology is different, causing user software to become solution-dependent, which hinders elasticity and freedom of movement. Amazon AWS currently provides the best offering but gaps remain. IT executives must develop common standards, taxonomy, and use cases that can be used to push vendors into delivering solutions that meet industry requirements.
  • Which PaaS offering – and whether or not to use it – is heavily dependent upon application characteristics, workload portability, compliance, disaster recovery, and security requirements. The migration to PaaS means IT executives need to address DevOps and lifecycle management, which are not just technology challenges but also culture, people and process paradigm shifts. IT executives must re-evaluate their development lifecycle based on the new PaaS technologies, including defining their baseline requirements and policies for automation, continuous build, and version control.
  • Compliance and security are not the same thing but are frequently intermixed when discussed. Much has been done to address security but compliance gaps exist that must be addressed and the methodologies standardized so that buy-in across the industry can be obtained from the regulators. IT executives must work on a common set of standards that regulators will sign off on and vendors can agree to support.

 

RFG has held a number of cloud forums for IT executives and senior architects of the major financial institutions over the past few months in New York City and London. This research note summarizes the discussions, findings, and desired actions required for cloud computing to become an operating standard and penetrate the financial firms in a cohesive, coordinated way.

CTO Panels

CEOs, CTOs and other top executives from IaaS vendors Canonical, CloudScaling, IBM/SoftLayer, SolidFire, SunGard Availability Services, and SwiftStack provided their views on the status and trends for IaaS. They all agreed there is a lot of work to be done before there are common APIs and standards that would give users portability and facilitate agility and scalability. One vendor executive postulated that in five years it may be possible for hybrid clouds to reach the point where cloud environments can be seamless, with common APIs and security policies. One challenge for user executives is that some applications are infrastructure-aware while others are not. For true independence and flexibility, this awareness must be eliminated or be resolvable through dependency or policy mappings.

Development teams, especially DevOps staff, should not need to know the infrastructure; they need only address orchestration and policies. In response to the question on compliance and security, it was agreed that the responsibility for policy enforcement, security and governance belongs in each component of the stack. A need for software-defined compliance was also addressed, with the consensus that it needs to be built in at the start, not after the fact. IT executives were advised to contemplate two kinds of clouds: a virtualization cloud for legacy applications with the aim of improving cost efficiencies, and an innovation cloud designed to help developers get new applications to market faster. Cloud architects must be able to stitch these clouds together.

A second panel consisting of CEOs, CTOs and other PaaS vendor executives from ActiveState, Citrix, GigaSpaces, Mirantis, and MuleSoft offered their opinions on PaaS status and trends. All agreed that the PaaS and IaaS layers are not blurring, but the application and platform layers are, and the distinction will blur further as vendors add more layers and build more higher-order services. Nonetheless, all PaaS frameworks should run on any IaaS layer. The issue of DevOps arose again, with executives pointing out that DevOps is not just a technology issue; it must also address policies (like security), processes, and cultural change. Developers need to rethink their roles and focus more on orchestration of services than on purely writing code.

Vendors conceded IaaS and PaaS solutions are still immature and suggested IT executives view the use of clouds as an opportunity/cost analysis problem. IT executives and their firms can wait until the technology matures or can invest now, shoot for first mover advantage, and risk the need for change when standards emerge that are not consistent with their implementations. The rewrite risk was postulated to be less expensive than the risk from market losses to competitors.

The panel discussed the requirement for common APIs and workload affinity and portability. While there was agreement on the need for common APIs, there was disagreement on the right level of abstraction for the APIs. All agreed workload affinity will apply to PaaS platforms, which means IT executives will need to determine which workloads apply to what PaaS offerings before attempting to migrate workloads. Successful PaaS solutions will allow for application portability on- or off-premise. The movement towards use of composable elements will enable this capability. The challenge will be the mapping of application services across divisions or organizations, as even file movements look different across organizations. The panel voiced support for software defined solutions, including software defined operators.

IaaS

In the IaaS track IT executives and architects expressed that there is no winning solution out there yet. Amazon AWS, Docker, KVM, OpenStack, Rackspace, Ubuntu, VMware, and Xen are amongst the cloud solutions in use. An AWS architect voiced the opinion that more banks use AWS than other solutions because the company works more closely with customers to meet their unique banking requirements. For example, users expressed that Amazon got it right when it bolted down certain components, like the hypervisor, while showing more flexibility elsewhere. However, it was clear from forum discussions that AWS’s early dominance is no guarantee that it will remain the 800-pound cloud gorilla.

One IT executive expressed the view that use of IaaS could solve hygiene and maintenance issues while simultaneously driving down the cost of infrastructure maintenance and support. IT executives could view IaaS solutions as disposable – i.e., bugs are not fixed and upgrades not applied; instead, platforms are discarded and new ones provisioned. Smartphones use this concept today, and it could be a transformative approach to keeping infrastructure software current.

The movement of operations to the cloud represents a paradigm shift for the development cycle and developers. Business executives are not enthused about paying for infrastructure costs, as infrastructure impairs margins and does not drive revenue. Thus, it behooves organizations to standardize on cloud platforms that can provide agility, portability, scalability, security and cost containment rather than have each application locked into its own infrastructure. This is a 180-degree shift from how the process is done today. IT operations executives need to convince senior management and development executives to change the development culture. However, all agree that this is most likely an 80/20 rule: 80 percent of the time a few standard IaaS platforms will suffice, and 20 percent of the time uniquely modified platforms may be needed for the enterprise to differentiate itself, gain a competitive advantage, and make money.

Lastly, there was consensus amongst users and vendors that there is a need for, at minimum, de facto standards and a common taxonomy. The areas to cover are those currently found in the AWS implementation plus audit, federation, orchestration, and software distribution. The group wants to move forward with a focus on the areas of audit and compliance first, using use cases as the baseline for developing requirements.

PaaS

It became clear early on in the information exchange that PaaS means different things to different people – even within a company. There are PaaS offerings for analytics, databases, and disaster recovery as well as online transaction processing, for example, which can be self-service, pay-as-you-go, and on demand. The platforms may have different requirements for availability, compliance, orchestration, scalability, security, and support. Some PaaS solutions are designed for DevOps while others are architected for legacy processes and applications.

The executives chose to focus on business- and mission-critical applications and the solutions employed, such as AWS CloudFormation, Pivotal Software’s Cloud Foundry, GitHub, Heat, Jenkins, Murano, OpenShift, Puppet, Solum, and Trove. As can be noted, the discussion went beyond the PaaS platform itself to application life cycle management. One conclusion was that IT operations executives should keep in mind, when talking to their development counterparts, that development requirements lists are more flexible than most claim, or developers would not be able to use AWS. This bodes well for moving to standardized cloud platforms and away from development teams defining systems rather than requirements. However, in the near term, application dependencies will be a major problem that users and vendors will have to solve.

One of the executives warned that PaaS has a long way to mature and that one component not currently present but desirable will be graphics/visualization. He expects visualization tools to simplify the creation of workflow diagrams and the underlying processes. Since this parallels what has occurred in other areas of process automation, RFG believes it is highly likely that these types of tools will materialize over the long term.

When it became apparent that PaaS is more about the application life cycle than the platform itself, DevOps and life cycle management became the prime topic of discussion. Executives envisioned a PaaS solution that supports the development process from the PC development platform to production and on to future releases. However, implementation of standardized platforms and DevOps implies a transformational change in the development process. This does not imply eliminating choice, but choices should be limited without stifling innovation. Developers will need to be taught how to rapidly move applications through the development cycle using automated tools. There are platform tools that can watch a repository, see a commit, check it out, run Jenkins on it, go through the quality assurance cycle, and go live after it gets a “green light.” This automated process can shrink the development time from months to minutes.
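The commit-to-production flow described above can be sketched as a simple stage runner. This is an illustrative toy, not a real Jenkins integration: the stage names, the `run_stage()` helper, and the commit ID are all hypothetical.

```python
# Toy sketch of the automated pipeline: watch for a commit, run each
# stage in order, and go live only if every stage reports a "green
# light". run_stage() stands in for real work (a Jenkins job, a test
# suite); here every stage simply reports success.

def run_stage(name, commit):
    print(f"[{name}] commit {commit}: OK")
    return True

def on_commit(commit):
    """Run each stage in order, stopping at the first failure."""
    stages = ["checkout", "build", "unit-tests", "qa", "deploy"]
    for stage in stages:
        if not run_stage(stage, commit):
            return f"{commit}: failed at {stage}"
    return f"{commit}: live"  # every stage passed

result = on_commit("a1b2c3d")
```

A real implementation would be triggered by a repository webhook rather than called directly, but the shape (ordered stages gated on a green light) is the same idea the executives described.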

There was consensus on the need to redefine the development lifecycle based on the new PaaS technologies, including delineating the baseline requirements and policies for automation, continuous build, and version control.

Compliance and Security

Initially attendees did not think there was much value in discussing compliance and security for cloud computing. But comments from an IBM security CTO got them rethinking their positions. She stated IBM is completely rewriting its internal security policies to accommodate cloud computing. Everyone needs to start over and rethink security architecture and controls, especially the secondary and tertiary levels. The risks have changed and are changing more rapidly as time goes on. Therefore, auditable controls and security must be built in upfront, not as an afterthought. This needs to be done on a global basis to contain costs. All units in an enterprise need to be on the same page at each point in time and keep heading in the same direction for the effort to be successful.

Compliance is a different matter. While everyone is addressing security in some measure, compliance lacks common global standards for infrastructure, platforms and applications. FFIEC rules and ISO 27002 standards must be met along with NIST, FedRAMP, and non-U.S. standards in the countries in which the financial institutions operate. There may be 80 percent overlap, but the standards must be mapped and the differences addressed. One of the financial services firms has already mapped the FFIEC rules back to ISO 27002, but the remaining standards have not been addressed. Once the appropriate compliance requirements are mapped, commonalities determined and gaps addressed, users can go to the regulators to request approval and can ask cloud providers to include the de facto standards in their offerings. The group agreed that a working group should consolidate current standards and guidelines, creating a document that can be agreed upon and taken to regulators for acceptance.
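The mapping exercise reduces to set arithmetic: treat each regime as a set of control identifiers, then compute the intersection (commonalities) and the differences (gaps to address). The control names below are invented for illustration, not actual FFIEC or ISO control numbers.

```python
# Hypothetical control sets; a real mapping would use the regimes'
# actual control numbering.
ffiec = {"access-control", "encryption-at-rest", "incident-response",
         "vendor-management", "audit-logging"}
iso = {"access-control", "encryption-at-rest", "incident-response",
       "asset-inventory", "audit-logging"}

common = ffiec & iso       # controls both regimes require
ffiec_only = ffiec - iso   # FFIEC requirements with no ISO counterpart
iso_only = iso - ffiec     # ISO requirements with no FFIEC counterpart

overlap_pct = 100 * len(common) / len(ffiec | iso)
print(f"overlap: {overlap_pct:.0f}% of all controls")  # 4 of 6 → 67%
```

The overlap percentage gives the working group a quantitative starting point; the two difference sets are exactly the gap list to take to regulators.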

Additionally, IT executives need to ensure cloud service provider (CSP) contracts have provisions for certification and/or responsibility for controls. CSPs must take responsibility from a regulatory view if they expect financial firms to be comfortable using their services. The contracts must also clearly call out the roles and responsibilities of both parties and the process for handoffs.

Common Architecture

The purpose of a common architecture is to enable application portability across platforms within an enterprise as well as for bursting out to private or public clouds to handle peak loads. This is not to suggest all cloud platforms support all applications. But for those platforms where there is workload affinity for a certain application set, the ability to move from one instance to another should be a simple task. The goal should be “one click” portability that gives almost instantaneous movement to another instance anywhere in the cloud or expansion that adds instances and allows for hybrid cloud environments.

The IT executives and architects concurred that the common architecture vision is a concept that may become a reality in the long term but will require mature standards first. A discussion arose on the topic of whether Amazon AWS APIs and standards could be used as a baseline. However, Amazon pointed out that the APIs are copyrighted and approval would be required first; Amazon will look into the possibility of getting approval. In the meantime, the users agreed that the financial services firms will not wait for the availability of a common architecture but will invest now to meet their business needs. As a next step, the group agreed to start developing the commonalities for the architecture.

Summary

The IaaS and security groups agreed to a joint effort to review the many overlapping compliance standards for commonalities and reduce that list to a bare-essential set of requirements. Essential security elements will be added to the list. The PaaS group wants to use a declarative approach for specifying all the resources, the policies among those resources, and the PaaS platform itself. From a developer life cycle perspective, the group wants a declarative template to capture the various components of the application development life cycle. Amongst the items to address are the release levels, including continuous builds and testing.
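The declarative-template idea can be illustrated with plain data: resources, the policies among them, and life-cycle settings are described rather than scripted, and a platform engine acts on the description. Every key and value below is hypothetical, not taken from any specific PaaS product.

```python
# A declared application: resources, inter-resource policies, and
# development life-cycle settings, all expressed as data.
template = {
    "resources": {
        "web": {"type": "container", "replicas": 3},
        "db": {"type": "managed-database", "engine": "postgres"},
    },
    "policies": {
        # encrypt traffic on the web-to-db connection
        "web->db": {"encrypt_in_transit": True},
    },
    "lifecycle": {
        "continuous_build": True,
        "release_levels": ["dev", "qa", "prod"],
        "version_control": "git",
    },
}

# A minimal validation pass a platform engine might run: every policy
# must refer to resources that are actually declared.
for edge in template["policies"]:
    src, dst = edge.split("->")
    assert src in template["resources"] and dst in template["resources"]
```

The point of the structure is that release levels, continuous build, and version control live in the same template as the resources, so the life cycle is declared once rather than re-implemented per application.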

IT executives suggested including the OCC in the next session to provide regulators with guidance on financial institutions’ direction for APIs, de facto standards, reference architectures and frameworks, and to perhaps influence the regulators’ direction accordingly. Users also asked for the adoption of a mechanism for keeping track of workgroup progress and communicating that to other Forum members. Suggestions include tools used by other standards organizations and working groups.

In sum, the comments and conclusions of the IT executives and architects in the cloud forums are indicative of the challenges, requirements and directions of the top U.S. and global financial institutions. But the executives believe the time and resources spent in the development of requirements and standards will be worth it.

 

RFG POV: Financial services firms are committed to moving to cloud platforms, both on-premise and in private and public clouds. Since they will not be waiting for the IaaS and PaaS offerings to mature, there is a strong commitment to work together to create baseline APIs, requirements and standards that can serve as frameworks for the financial institutions, regulatory agencies, and cloud vendors. These frameworks should enable the firms to reduce costs, drive efficiencies, achieve a level of vendor independence, and simplify compliance with regulatory requirements. Moreover, the frameworks should be applicable to other industries and enable any large enterprise to more easily and rapidly take advantage of cloud computing. IT executives should approach their move to the cloud strategically by defining their policies, frameworks, guidelines, requirements and standards, and performing opportunity/cost analyses first before committing to one or more target cloud architectures and implementations.

Grape Escape Showcases Apparancy and SYSPRO

Jul 22, 2014   //   by admin   //   Blog

RFG Perspective: Cost efficiencies, elimination of redundancy, and delivery of timely, accurate information to users anywhere, anytime and on any device remain a top priority across the business landscape. In the manufacturing and distribution sectors, U.S. business executives in small- and medium-sized businesses (SMBs) have struggled like Sisyphus and the boulder to maintain their organizations; many have been snuffed out entirely. A new survey showing that manufacturing in the U.S. is on the rise should spur cautious optimism among business executives. However, now more than ever these businesses need business process management (BPM) and/or enterprise resource planning (ERP) solutions to remove cost and redundancy and deliver timely information to executives and their staff wherever, whenever and on whatever device. In the healthcare sector, the passage of the Affordable Care Act has been met with both criticism and praise. Its future is uncertain. What is certain, however, is that the Veterans Administration scandal has focused the lens on a persistent, growing problem: Veterans have to file a morass of forms to claim benefits they both need and deserve. The implementation of innovative technologies aimed at untangling and simplifying Veterans’ benefits claims and scheduling processes, as well as a cultural change that supports the technology, would be a giant leap forward for these praiseworthy and selfless individuals.

 

The JRocket Marketing Grape Escape® 2014 provided industry analysts with a rare insider’s peek at two of today’s innovative, nimble, and multi-faceted technology vendors. The three-day event was a tour de force that showcased Apparancy and SYSPRO, two disruptive leading-edge companies that are reshaping their industry sectors.

 

Apparancy

Apparancy is delivering on its Know. Do. Prove. value proposition with an automated business process platform that initially aims at helping healthcare organizations connect multiple existing systems and data sources to achieve specific goals. At this year’s event, Karen Watts, CEO of Apparancy, expounded on how Apparancy can help these organizations identify disjointed workflows, and eliminate redundancies and data overlap, by combining data and processes into single-purpose role-based views that eliminate the need to rip and replace.

The big news was Ms. Watts’ announcement – appropriately, on Memorial Day – that Apparancy had acquired usage rights to an earlier software product, “TurboVet,” which had already catalogued some five thousand plus Veterans Administration forms. Apparancy has begun work on updating and integrating these forms into its platform with the end game of launching VetApprove in Q4 2014. If necessity is the mother of invention, then Apparancy, powered by Corefino, is filling a market vacuum with its VetApprove Veterans benefit product. VetApprove will revolutionize the way Veterans will be able to access their entitled benefits.

VetApprove will enable 22 million Veterans to apply for entitled healthcare, employment, education, state compensation, and disability benefits, as well as for other entitlement programs such as funeral benefits extended to spouses of veterans. This service will be offered to veterans free of charge. Additionally, it can become the underlying workflow management platform that would enable the VA to efficiently process applications, schedule services, and monitor and manage its operations, which are still antediluvian and lack accurate measurement metrics.

 

SYSPRO

Joey Benadretti, President of SYSPRO USA, announced a turnabout in US manufacturing trends. Citing MAPI survey findings, Mr. Benadretti pointed to a potential upswing in the future of US manufacturing. The study, which covered the period from 2006-2012, showed 19 states experiencing double-digit growth above the national average, with the majority of those in the western states. He also pointed out that US manufacturing is moving from the Mexican border states (except Texas) to those states that are closer to the Canadian border. Output in two sectors is also accelerating, and to address these trends SYSPRO is expanding into the automotive and energy manufacturing sub-industries.

On the product side, the company’s new SYSPRO Espresso provides an enterprise mobile ecosystem that can be tailored to satisfy front-end and back-end requirements. Features of the highly anticipated SYSPRO Espresso will include new drag-and-drop technology and mass customization for any device. This ground-breaking technology supports single sign-on, is device-agnostic, and allows users to access multiple applications on multiple devices using a roaming profile so they can switch from one device to another and instantly connect. This creates an advantage for both the customer and SYSPRO, as the millennial generation will want to access ERP on their mobile devices. These tech-savvy users will also want to customize which apps they see on their mobile phones, tablets, etc., given the limited screen real estate of each device.

Not surprisingly, SYSPRO currently has one of the highest customer retention rates in the industry. Mr. Benadretti confidently remarked that his company will remain on the cutting edge of technology providing customers with product flexibility and low-cost solutions.

 

RFG POV: Apparancy and SYSPRO unveiled substantive, cutting edge, and innovatively disruptive technology solutions. Apparancy’s targeted focus will enable the beleaguered VA to begin to meet the urgent needs of its Veterans, while SYSPRO is enabling manufacturing executives to meet customer demands as the industry undergoes an uptick in growth and a geographic shift. Business, government and IT executives should proactively harness spot-on technology solutions to solve exigent business problems, respond expeditiously to clients, and manage change well into the future so that their organizations continue to satisfy customers and remain relevant as markets evolve.

Additional relevant research and consulting services are available. Interested readers should contact Client Services to arrange further discussion or an interview with Ms. Maria DeGiglio, MA, Principal Analyst.

Cybersecurity and the Cloud Multiplier Effect

Jul 11, 2014   //   by admin   //   Blog

RFG Perspective: While corporate boards grapple with cybersecurity issues and attempt to shore up their defenses, the inclusion of cloud computing models in the equation is increasing risk exposure levels. Business and IT executives should work together to aggressively establish processes, procedures, and technology that will minimize risk exposures to levels deemed acceptable. Additionally, senior executives and Boards of Directors need to play a more active role in the accountability and governance of cybersecurity by discussing and addressing challenges, issues and status at least quarterly.

An article on the front page of the Wall Street Journal on June 30, 2014 discussed corporate boards racing to shore up cybersecurity. It alluded to a number of corporate boards waking up to cyber threats and worrying that hackers would steal company know-how and intellectual property (IP). In the first half of 2014, 1,517 NYSE- or NASDAQ-traded companies referenced some form of cyber attack or data breach in their securities filings – almost a 20 percent increase from the previous year. In all of 2013, 1,288 such filing comments were made, whereas in 2012 only 879 companies reported cyber statements. This is good and bad news – good that cybersecurity is getting CEO and Board attention, and bad in that executives are belatedly waking up to an endemic problem.

Fiduciary Responsibility

The Board and CEO have a fiduciary responsibility to shareholders to protect the company’s assets from undue risks. It is not something that can be assigned and then ignored; yet that is what has happened at many companies over the years. Boards and CEOs must be involved in cybersecurity governance and decision-making on an ongoing basis and not shunt it off to Chief Risk Officers (CROs), Chief Security Officers (CSOs or CISOs) and/or IT executives. CEOs and other senior executives should also ensure privacy and security programs are aligned with each business unit’s requirements and that the risk probability and exposures are reasonably known and reduced to an acceptable level. It is important that all parties understand that zero security risk is no longer attainable (nor would the expense be worth it if it were); what is important is to agree upon what level of risk exposure is acceptable, budget for it, and implement initiatives to make it happen.

At the Board level there should be a risk committee that is responsible for all risk management, including cyber risk. Moreover, best practices suggest Boards should, at a minimum, address the following five areas:

  • regularly review and approve top-level policies on privacy and IT security risks
  • regularly review and approve the roles and responsibilities of lead personnel responsible for privacy and IT security
  • regularly review and approve annual budgets for privacy and IT security programs, separate from IT budgets
  • regularly review and approve cyber insurance coverage
  • regularly receive and act upon reports from senior management regarding privacy and IT security risk exposures.

These efforts can be done by the full Board or by a risk committee that reports to the Board. Some Boards may have assigned this role to the audit committee but, while it is good that it is addressed, it is not a perfect fit.

Cloud Multiplier Effect

In June the Ponemon Institute LLC published a report on the cloud multiplier effect. The firm surveyed 613 IT and IT security practitioners in the U.S. who are familiar with their companies’ usage of cloud services. The news is not good. Most respondents believe cloud security is an oxymoron and that certain cloud services can result in greater exposures and more costly breaches; they estimate that the use of cloud services multiplies breach costs by a factor of between 1.38 and 2.25. The top two impacts are from cloud breaches involving high-value IP and the backup and storage of sensitive or confidential information, respectively. Most respondents believe corporate IT organizations are not properly vetting cloud platforms for security, are not proactively assessing information to ensure sensitive or confidential information is not in the cloud, and are not vigilant on cloud audits or assessments.
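The multiplier is straightforward to apply as arithmetic: take a baseline breach-cost estimate and scale it by the survey's 1.38 to 2.25 range. The $5 million baseline below is an arbitrary illustrative figure, not a number from the Ponemon report.

```python
# Scale a hypothetical baseline breach cost by the survey's
# cloud multiplier range of 1.38x to 2.25x.
baseline_cost = 5_000_000          # illustrative baseline, in dollars
low_mult, high_mult = 1.38, 2.25   # range cited by the Ponemon survey

cloud_low = baseline_cost * low_mult    # 6,900,000
cloud_high = baseline_cost * high_mult  # 11,250,000
print(f"cloud breach cost range: ${cloud_low:,.0f} to ${cloud_high:,.0f}")
```

Even at the low end of the range, a cloud-involved breach adds nearly 40 percent to the expected cost, which is the planning implication of the survey's finding.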

Moreover, disturbingly, almost 75 percent of respondents believe their cloud services providers would not notify them immediately if they had a data breach involving the loss or theft of IP or business confidential information. Almost two-thirds of those surveyed expressed concern that their cloud service providers are not in full compliance with privacy and data protection laws – and this is in the U.S. where the rules are less strict than the EU. Furthermore, respondents feel there is a lack of visibility into the cloud as it relates to applications, data, devices, and usage.

 

Summary

 

Boards, CEOs and senior non-IT management need to become more aware of their cybersecurity exposures and actively participate in minimizing the risks. IT executives, on the other hand, need to present the challenges, status and trends in a more business, less technical manner, including recommendations, so that the other executives can appreciate the issues and authorize the appropriate actions. As the Ponemon study shows, the challenges go beyond the corporate four walls into clouds they have no control over. IT executives need to become involved in the selection and vetting of cloud services providers. Furthermore, business and IT executives must work together and build strong governance practices to minimize cybersecurity risks.

RFG POV: Cybersecurity risk exposures are increasing and collectively executives are falling short in their fiduciary responsibilities to protect company assets. Boards, CEOs and other senior executives must take their accountability seriously and play a more aggressive role in ensuring the risk exposures to corporate assets are known and within acceptable levels. For most organizations this will be a major cultural change and challenge and will require IT executives to proactively step forward to make it happen. IT executives should collaborate with board members, senior executives, and outside compliance services providers to establish a program that will enable executives to establish a governance methodology that monitors and reports on the risks and provides cost/benefit analyses of alternative corrective actions. Moreover, at a minimum, corporate executives must review the governance materials quarterly, and after critical risk events occur, and take appropriate actions.

 

Cloudify from GigaSpaces

Mar 31, 2014   //   by admin   //   Reports

Cloudify from GigaSpaces – 3-21-14

Cyber Security Targets

Mar 24, 2014   //   by admin   //   Blog

RFG Perspective: While the total cost of the cybersecurity breach at Target will not be known for quite a while, a reasonable estimate is that it could easily cost the company more than $500 million. The price tag includes bills associated with fines from credit card companies, other fines and lawsuits for non-compliance, services such as free credit report monitoring for its 70 to 110 million impacted customers, and discounts required to keep customers coming in the door. These costs far exceed the IT costs associated with better cybersecurity prevention. Target is not alone; it is just the latest in a long line of breaches that have taken major tolls on the attacked organization. Business and IT executives need to recognize that attackers and hackers will constantly change their multi-pronged, sophisticated attack strategies as they attempt to stay ahead of the protections installed in the enterprises. IT executives need to be constantly aware of the risk exposures and how they are changing, and continue to invest in measured, integrated cybersecurity solutions to close the gaps.

The Target cyber breach represents a new twist to the long-standing cybersecurity challenge. Unlike most other attacks that came through direct probes into the corporate network or through employee social-engineered emails, spear phishing, or multi-vectored malware aimed at IT software, the Target incident was an Operations Technology (OT) play. One reason for this may be that the vendor patch rate has improved and successes of zero-day exploits are dropping. Of course, it could also be that the misguided actors were clever enough to try a new attack vector.

IT vs OT

Most IT executives and staff give little thought to OT software, usually referred to as SCADA (supervisory control and data acquisition) software. These are industrial control systems that monitor and control things such as air conditioning, civil defense systems, heating, manufacturing lines, power generation, power usage, transmission lines, and water treatment. IT (outside of the utilities industry) tends to treat these systems and the associated software as outside of their purview. This is no longer true. Cyber attackers are constantly upping the ante and now they have begun going after OT software in addition to traditional attack vectors. IT executives and security personnel need to become actively engaged in ensuring the organization is protected against these types of threats.

Incident Attack Types

In 2013, according to the IBM X-Force Threat Intelligence Quarterly 1Q 2014, the top three disclosed attack types were distributed denial of service (DDoS), SQL injection, and malware. These three vectors accounted for 43 percent of 8,330 vulnerability disclosures, while the attack types for another 46 percent remain undisclosed. (See the chart below, from the IBM report.) The report also points out that Java vulnerabilities continue to rise year over year, tripling in the last year alone. Fully half of the exploited application vulnerabilities were Java based, with Adobe Reader and Internet browsers accounting for 22 and 13 percent respectively. Interestingly, mobile devices excluding laptops have yet to be a major threat attack point.

[Chart: most common attack types, from the IBM X-Force Threat Intelligence Quarterly 1Q 2014]

Currency

Another common pressure point on IT organizations is keeping current with all the security patches issued by software providers. The good news is that vendors and IT organizations are doing a better job applying patches: the rate of publicly disclosed vulnerabilities left unpatched dropped from 41 percent in 2012 to 26 percent in 2013. This is great progress, but much remains to be done, especially by enterprise IT. The number of patches to be applied on an ongoing basis can be overwhelming, and many IT organizations cannot keep up, especially with quick fixes. Thus, zero-day exploits remain major threats that IT needs to mitigate.

Playing Defense

The challenge for CISOs and security staff increases every year as the number and types of actors attempting to gain access to IT systems continue to grow, as do the types of attacks. Therefore, enterprises must reduce their risk exposure by using monitoring and blocking software that can rapidly detect problems almost as they occur and shut off attacks immediately, before the exposure becomes too large. Additionally, staff must fine-tune access controls and patch known vulnerabilities quickly so as to (virtually) eliminate the ability of criminals to exploit holes in infrastructures. Security executives and staff should work collaboratively with others in their field and share information about attacks, defenses, meaningful metrics, and trends. IT executives should ensure security personnel are continually trained, aware of the latest trends, and implementing the appropriate defenses as rapidly as possible. As people are one of the weakest links in the security chain, IT executives should also ensure all employees are aware of company privacy and security policies and procedures and are judiciously following them.

RFG POV: IT executives and cybersecurity staff remain behind the curve in preventing, discovering, and containing cybersecurity attacks, data breaches, and data exfiltration. There are some low-hanging initiatives IT can execute to close some of the major vulnerabilities, such as blocking troublesome IP addresses at the perimeter outside the firewall and employing enhanced software monitoring tools that can spot suspect software and alert security. Additionally, staff can improve password requirements, password change frequency, two-factor authentication, inclusion of OT software, and rapid deactivation of access (cyber and physical) for terminated employees. Encryption of data at rest and in transit should also be evaluated. However, IT is not alone on the line for corporate security – the board of directors and corporate executives share the fiduciary burden for protecting company assets. IT executives should get boards and corporate executives to understand the challenges, establish the acceptable risk parameters, and play an ongoing role in security governance. IT security executives should work with appropriate parties to collect, analyze, and share incident data so that defenses and detection can be enhanced. IT executives should also recognize that cybersecurity is not just about technology – the weakest links are the people and processes. These gaps should be aggressively pursued and the problems regularly communicated across the organization. The investment in these corrective actions will be far less than the cost of fixing the problem once the damage is done.
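As an illustration of the first low-hanging initiative above, perimeter IP blocking amounts to testing each inbound source address against a list of known-troublesome networks. The sketch below is hypothetical – the networks shown are reserved documentation ranges, not a real blocklist:

```python
import ipaddress

# Hypothetical blocklist of troublesome networks; real deployments would feed
# perimeter devices from shared threat-intelligence sources instead.
BLOCKED_NETWORKS = [ipaddress.ip_network(n)
                    for n in ("203.0.113.0/24", "198.51.100.0/24")]

def is_blocked(source_ip: str) -> bool:
    """Return True if the source address falls inside any blocked network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("203.0.113.57"))  # True: inside a blocked range
print(is_blocked("192.0.2.10"))    # False: not listed
```

In practice such checks run on firewalls or intrusion prevention devices rather than in application code, but the logic – membership testing against curated network ranges – is the same.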

Have You Been Robbed on the Last Mile of Sales?

Feb 16, 2014   //   by admin   //   Blog  //  Comments Off

It is a fair question, whether you are the seller or the customer. OK, so what is the last mile of sales? I didn’t find an official definition, so I’m borrowing the concept from the “last mile of finance” between the balance sheet and the 10-K, and the “last mile of telecommunications” that is the copper wire from the common carrier’s substation to your home or business. Let’s call the last mile of sales

that part of the sales funnel in which prospects are ready to become customers, or are already customers, ready for up-selling and cross-selling.

 

Survey Participants were robbed on the Last Mile of Sales!

This Tuesday morning, I looked at our “Poor Data Quality – Negative Business Outcomes” survey results and noticed a surprising agreement among participants in one sales-related area. 126 respondents, or over 90% of those responding to our question about poor data quality compromising up-selling and cross-selling, indicated they had such a problem. The following graph gives you a sense of how large a percentage of respondents had lost sales opportunities.

[Chart: percentage of respondents robbed on the last mile of sales]

This is a troubling statistic. Organizations spend huge sums on marketing programs designed to attract prospects and nurture them into customers. Beyond the direct monetary investment, ensuring a successful trip down the sales funnel takes time, effort, and ability. From the perspective of the seller, failing to sell more products and services to an existing (presumably happy) client is like being robbed on the last mile of sales. Your organization has already succeeded in making a first sale. Subsequent selling should be easier, not harder. From the perspective of the buyer, losing confidence in your chosen vendor because they fail to know you and your preferences, confuse you with similarly named customers, or display inept record-keeping about their last contact with you robs you of a relationship you had invested time and money in developing. Perhaps now your go-to vendor becomes your former vendor, and you must spend time seeking an alternate source. Once confidence has been shaken, it is difficult to rebuild.
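A quick back-of-envelope check of the survey arithmetic: if the 126 respondents reporting a problem are at least 90% of those who answered the up-sell/cross-sell question, the question drew at most about 140 responses.

```python
affected = 126        # respondents reporting lost up-sell/cross-sell opportunities
minimum_share = 0.90  # "over 90%" of those responding to the question

# Upper bound on the respondent pool implied by the two published figures:
max_pool = affected / minimum_share
print(round(max_pool))  # 140
```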

What did the survey say?

How is it possible that more than 90% of our respondents to this question lost an opportunity to up-sell or cross-sell? The next chart tells the story: it is poor data quality, plain and simple.

[Chart: poor data quality problems reported by survey respondents]

You can read the results yourself. As a sales prospect for a lead generation service, I had a recent experience with at least one of the top four poor data quality problems.

Oops, the status wasn’t updated after our last call

In the closing months of 2013, I was solicited by a lead generation firm. I asked them to contact me in the first quarter of 2014. Ten days into 2014, they called again. OK, perhaps a bit early in the quarter, but they were eager for my business. With no immediate need, I asked them to call me again in Q3 2014 to see how things were evolving. So, I was surprised when I received another call from that firm yesterday. Had we traveled through a time warp? Was it now mid-summer? A look out the window at the snowstorm in progress suggested it was still February 2014. The caller was the same person as last time, and began an identical spiel. I interrupted and mentioned we had only spoken a week earlier. The caller appeared to remember and agree, indicating that there was no status update about the previous call. Was this sloppy ball-handling by sales, an IT technology issue, an ill-timed database restore? Was this a 1:1,000,000 chance or an everyday occurrence? The answer to all of those questions is “I have no idea, but I don’t want to trust these folks with managing my lead generation campaign.” If they can’t handle their own sales process, how are they going to help me with mine? Whatever the cause of the gaffe, they robbed themselves of a prospect, and me of any confidence I might have had in them.

The Bottom Line

Being robbed on the last mile of sales by poor data quality is unnecessary, but all too common. Have you recently been robbed on the last mile of sales? Are you a seller, or a disappointed prospect or customer? Cal Braunstein of The Robert Frances Group and I would like to hear from you. Please do contact me to set up an appointment for a conversation. Whether you have already participated in our survey, are a member of the InfoGov community, or simply have an enlightening experience about how poor data quality caused you to have a negative business outcome, reach out and let us know.

Published by permission of Stuart Selip, Principal Consulting LLC

Predictions: Tech Trends – part 1 – 2014

Jan 20, 2014   //   by admin   //   Blog  //  Comments Off

RFG Perspective: The global economic headwinds in 2014, which constrain IT budgets, will force IT executives to question certain basic assumptions and reexamine current and target technology solutions. There are new waves of next-generation technologies emerging and maturing that challenge the status quo and deserve IT executive attention. These technologies will improve business outcomes as well as spark innovation and drive down the cost of IT services and solutions. IT executives will have to work with business executives to fund the next-generation technologies or find self-funding approaches to implementing them. IT executives will also have to provide the leadership needed for properly selecting and implementing cloud solutions, or control will be assumed by business executives who usually lack the appropriate skills for tackling outsourced IT solutions.

As mentioned in the RFG blog “IT and the Global Economy – 2014,” the global economic environment may not be as strong as expected, thereby keeping IT budgets contained or shrinking. Therefore, IT executives will need to invest in next-generation technology to contain costs, minimize risks, improve resource utilization, and deliver the desired business outcomes. Below are a few key areas that RFG believes will be the major technology initiatives getting the most attention.

Tech-driven Business Transformation

[Figure: tech-driven business transformation technology trends. Source: RFG]
Analytics – In 2014, look for analytics service and solution providers to boost usability of their products to encompass the average non-technical knowledge worker by moving closer to a “Google-like” search and inquiry experience in order to broaden opportunities and increase market share.

Big Data – Big Data integration services and solutions will grab the spotlight this year as organizations continue to ratchet up the volume, variety and velocity of data while seeking increased visibility, veracity and insight from their Big Data sources.

Cloud – Infrastructure as a Service (IaaS) will continue to dominate as a cloud solution over Platform as a Service (PaaS), although the latter is expected to gain momentum and market share. Nonetheless, Software as a Service (SaaS) will remain the cloud revenue leader with Salesforce.com the dominant player. Amazon Web Services will retain its overall leadership of IaaS/PaaS providers with Google, IBM, and Microsoft Azure holding onto the next set of slots. Rackspace and Oracle have a struggle ahead to gain market share, even as OpenStack (an open cloud architecture) gains momentum.

Cloud Service Providers (CSPs) – CSPs will face stiffer competition and pricing pressures as larger players acquire or build new capabilities and as innovative open-source based solutions enter the new year with momentum, with large, influential organizations looking to build and share their own private and public cloud standards and APIs to lower infrastructure costs.

Consolidation – Data center consolidation will continue as users move applications and services to the cloud and to standardized internal platforms that are intended to become cloud-like. Advancements in cloud offerings, along with a diminished concern for security (more a false hope than reality), will lead more small and mid-sized businesses (SMBs) to shift processing to the cloud and operate fewer internal data center sites. Large enterprises will look to utilize clouds and colocation sites for development/test environments and for handling spikes in capacity rather than open or grow in-house sites.

Containerization – Containerization (or modularization) is gaining acceptance by many leading-edge companies, like Google and Microsoft, but overall adoption is slow, as IT executives have yet to figure out how to deal with the technology. It is worth noting that the power usage effectiveness (PUE) of these solutions is excellent and has been known to be as low as 1.05 (whereas the average remains around 1.90).
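Power usage effectiveness is simply total facility power divided by the power delivered to IT equipment, so the cited figures translate directly. The kilowatt loads below are illustrative, not from the original study:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT equipment power."""
    return total_facility_kw / it_equipment_kw

# A containerized module drawing 1,050 kW overall to run a 1,000 kW IT load:
print(round(pue(1050, 1000), 2))  # 1.05
# A conventional site at the cited industry average:
print(round(pue(1900, 1000), 2))  # 1.9
```

Put another way, a PUE of 1.05 spends only 5 watts on cooling, power distribution, and other overhead for every 100 watts of IT load, versus 90 watts of overhead at a PUE of 1.90.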

Data center transformation – In order to achieve the levels of operational efficiency required, IT executives will have to increase their commitment to data center transformation. The productivity improvements will be achieved through the shift from standalone vertical stack management to horizontal layer management, relationship management, and the use of cloud technologies. One of the biggest effects of this shift is an actual reduction in operations headcount and a reorientation of skills and talents to the new processes. IT executives should expect the transformation to be a minimum of a three-year process. However, IT operations executives should not expect clear sailing, as development shops will push back to prevent loss of control of their application environments.

3-D printing – 2014 will see the beginning of 3-D printing taking hold. Over time the use of 3-D printing will revolutionize the way companies produce materials and provide support services. Leading-edge companies will be the first to apply the technology this year and thereby gain a competitive advantage.

Energy efficiency/sustainability – While this is not news in 2014, IT executives should be making it a part of other initiatives and a procurement requirement. RFG studies find that energy savings are just the tip of the iceberg (about 10 percent of the savings) achievable when taking advantage of newer technologies. RFG studies show that in many cases the energy savings from removing hardware kept more than 40 months can pay for new, better-utilized equipment. Or, as an Intel study found, servers more than four years old accounted for four percent of the relative performance capacity yet consumed 60 percent of the power.
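The Intel finding cited above can be restated as a power-per-unit-of-capacity comparison. The short sketch below works only from the two published percentages and makes no further assumptions:

```python
old_capacity_share = 0.04  # servers over four years old: share of performance capacity
old_power_share = 0.60     # same servers: share of total power draw

# Power consumed per unit of delivered capacity, old fleet vs. everything else:
old_ratio = old_power_share / old_capacity_share               # 15.0
rest_ratio = (1 - old_power_share) / (1 - old_capacity_share)  # ~0.417
print(round(old_ratio / rest_ratio))  # old servers ~36x worse per unit of capacity
```

A roughly 36-to-1 gap in power per unit of work is why refresh economics can favor retiring old gear even before maintenance and space savings are counted.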

Hyperscale computing – RFG views hyperscale computing as the next wave of computing that will replace the low end of the traditional x86 server market. The space is still in its infancy, with the primary players being Advanced Micro Devices’ (AMD’s) SeaMicro solutions and Hewlett-Packard’s (HP’s) Moonshot server line. While penetration will be low in 2014, the value proposition for hyperscale solutions should become evident.

Integrated systems – Integrated systems is a poorly defined computing category that encompasses converged architectures, expert systems, and partially integrated systems as well as expert integrated systems. The major players in this space are Cisco, EMC, Dell, HP, IBM, and Oracle. While these systems have been on the market for more than a year now, revenues are still limited (depending upon whom one talks to, revenues may now exceed $1 billion globally) and adoption is moving slowly. Truly integrated systems do result in productivity, time and cost savings, and IT executives should be piloting them in 2014 to determine the role and value they can play in corporate data centers.

Internet of things – More and more sensors are being employed and embedded in appliances and other products, which will automate and improve life in IT and in the physical world. From a data center infrastructure management (DCIM) perspective, these sensors will enable IT operations staff to better monitor and manage system capacity and utilization. 2014 will see further advancements and inroads made in this area.

Linux/open source – The trend toward Linux and open source technologies continues with both picking up market share as IT shops find the costs are lower and they no longer need to be dependent upon vendor-provided support. Linux and other open technologies are now accepted because they provide agility, choice, and interoperability. According to a recent survey, a majority of users are now running Linux in their server environments, with more than 40 percent using Linux as either their primary server operating system or as one of their top server platforms. (Microsoft still has the advantage in the x86 platform space and will for some time to come.) OpenStack and the KVM hypervisor will continue to acquire supporting vendors and solutions as players look for solutions that do not lock them into proprietary offerings with limited ways forward. A Red Hat survey of 200 U.S. enterprise decision makers found that internal development of private cloud platforms has left organizations with numerous challenges such as application management, IT management, and resource management. To address these issues, organizations are moving or planning a move to OpenStack for private cloud initiatives, respondents claimed. Additionally, a recent OpenStack user survey indicated that 62 percent of OpenStack deployments use KVM as the hypervisor of choice.

Outsourcing – IT executives will be looking for more ways to improve outsourcing transparency and cost control in 2014. Outsourcers will have to step up to the SLA challenge (mentioned in the People and Process Trends 2014 blog) as well as provide better visibility into change management, incident management, projects, and project management. Correspondingly, with better visibility there will be a shift away from fixed-price engagements to ones with fixed and variable funding pools. Additionally, IT executives will be pushing for more contract flexibility, including payment terms. Application hosting displaced application development in 2013 as the most frequently outsourced function, and 2014 will see the trend continue. The outsourcing of ecommerce operations and disaster recovery will be seen as having strong value propositions when compared to performing the work in-house. However, one cannot assume outsourcing is less expensive than handling the tasks internally.

Software defined x – Software defined networks, storage, data centers, etc. are all the latest hype. The trouble with all new technologies of this type is that the initial hype will not match reality. The new software defined market is quite immature and all the needed functionality will not be out in the early releases. Therefore, one can expect 2014 to be a year of disappointments for software defined solutions. However, over the next three to five years it will mature and start to become a usable reality.

Storage – Flash, SSD et al – Storage is once again going through revolutionary changes. Flash, solid state drives (SSDs), thin provisioning, tiering, and virtualization are advancing at a rapid pace, as are the densities and power consumption curves. Tier one to tier four storage has been expanded with a number of different tier zero options – from storage inside the computer to PCIe cards to all-flash solutions. 2014 will see more of the same, with adoption of the newer technologies gaining speed. Most data centers are heavily loaded with hard disk drives (HDDs), a good number of which are short-stroked. IT executives need to experiment with the myriad storage choices and understand the different rationales for each. RFG expects the tighter integration of storage and servers to begin to take hold in a number of organizations as executives find the closer placement of the two improves performance at a reasonable cost point.

RFG POV: 2014 will likely be a less daunting year for IT executives but keeping pace with technology advances will have to be part of any IT strategy if executives hope to achieve their goals for the year and keep their companies competitive. This will require IT to understand the rate of technology change and adapt a data center transformation plan that incorporates the new technologies at the appropriate pace. Additionally, IT executives will need to invest annually in new technologies to help contain costs, minimize risks, and improve resource utilization. IT executives should consider a turnover plan that upgrades (and transforms) a third of the data center each year. IT executives should collaborate with business and financial executives so that IT budgets and plans are integrated with the business and remain so throughout the year.

Predictions: People & Process Trends – 2014

Jan 20, 2014   //   by admin   //   Blog  //  Comments Off

RFG Perspective: The global economic headwinds in 2014, which constrain IT budgets, will force business and IT executives to more closely examine people and process issues for productivity improvements. Externally, IT executives will have to work with non-IT teams to improve and restructure processes to meet the new mobile/social environments that demand more collaborative and interactive real-time information. Simultaneously, IT executives will have to address the data quality and service level concerns that impact business outcomes, productivity and revenues so that there is more confidence in IT. Internally, IT executives will need to increase their focus on automation, operations simplicity, and security so that IT can deliver more (again) at lower cost while better protecting the organization from cybercrime.

As mentioned in the RFG blog “IT and the Global Economy – 2014,” the global economic environment may not be as strong as expected, thereby keeping IT budgets contained or shrinking. Therefore, IT executives will need to invest in process improvements to help contain costs, enhance compliance, minimize risks, and improve resource utilization. Below are a few key areas that RFG believes will be the major people and process improvement initiatives getting the most attention.

Automation/simplicity – Productivity in IT operations is a requirement for data center transformation. To achieve this, IT executives will be pushing vendors to deliver more automation tools and easier-to-use products and services. Over the past decade some IT departments have been able to improve productivity tenfold, but many lag behind. In support of this, staff must switch from a vertical, highly technical model to a horizontal one in which they manage service layers and relationships. New learning management techniques and systems will be needed to deliver content that can be grasped intuitively. Furthermore, the demand for increased IT services without commensurate budget increases will force IT executives to pursue productivity solutions to satisfy the business side of the house. Thus, automation software, virtualization techniques, and integrated solutions that simplify operations will be attractive initiatives for many IT executives.

Business Process Management (BPM) – BPM will gain more traction as companies continue to slice costs and demand more productivity from staff. Executives will look for BPM solutions that automate redundant processes, enable them to get to the data they require, and/or allow them to respond to rapid-fire business changes within (and external to) their organizations. In healthcare in particular this will become a major thrust, as the industry needs to move toward a “pay for outcomes” and away from a “pay for service” mentality.

Chargebacks – The movement to cloud computing is creating an environment that is conducive to implementation of chargebacks. The financial losers in this game will continue to resist but the momentum is turning against them. RFG expects more IT executives to be able to implement financially-meaningful chargebacks that enable business executives to better understand what the funds pay for and therefore better allocate IT resources, thereby optimizing expenditures. However, while chargebacks are gaining momentum across all industries, there is still a long way to go, especially for in-house clouds, systems and solutions.
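A financially meaningful chargeback can be as simple as allocating a shared infrastructure bill in proportion to metered usage, so business executives can see exactly what drives their share. The unit names and figures below are hypothetical:

```python
def chargeback(total_cost: float, usage_by_unit: dict) -> dict:
    """Allocate a shared cost across business units in proportion to metered usage."""
    total_usage = sum(usage_by_unit.values())
    return {unit: round(total_cost * used / total_usage, 2)
            for unit, used in usage_by_unit.items()}

# Hypothetical monthly cloud bill split by metered VM-hours per business unit:
bills = chargeback(100_000.00, {"retail": 500, "wholesale": 300, "treasury": 200})
print(bills)  # {'retail': 50000.0, 'wholesale': 30000.0, 'treasury': 20000.0}
```

Real chargeback models layer in fixed-cost recovery, reserved capacity, and tiered rates, but proportional allocation of metered usage is the core mechanism that cloud metering makes practical.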

Compliance – Thousands of new regulations took effect on January 1, as happens every year, making compliance even tougher. In 2014 the Affordable Care Act (aka Obamacare) kicked in for some companies but not others; compounding this, the U.S. President and his Health and Human Services (HHS) department keep issuing modifications to the law, which impact compliance and compliance reporting. IT executives will be hard pressed to keep up with compliance requirements globally and to improve users’ support for compliance.

Data quality – A recent study by RFG and Principal Consulting on the negative business outcomes of poor data quality finds that a majority of users consider data quality suspect. Most respondents believed inaccurate, unreliable, ambiguously defined, and disorganized data were the leading problems to be corrected. Some users will partially address this in 2014 by looking at data confidence levels in association with the type and use of the data. IT must fix this problem if it is to regain trust. But it is not just an IT problem, as it is costing companies dearly, in some cases more than 10 percent of revenues. Some IT executives will begin to capture the metrics required to build a business case for fixing this, while others will implement data quality solutions aimed at select problems that have proved troublesome.

Operations efficiency – This will be an overriding theme for many IT operations units. As has been the case over the years the factors driving improvement will be automation, standardization, and consolidation along with virtualization. However, for this to become mainstream, IT executives will need to know and monitor the key data center metrics, which for many will remain a challenge despite all the tools on the market. Look for minor advances in usage but major double-digit gains for those addressing operations efficiency.

Procurement – With the requirement for agility and the move towards cloud computing, more attention will be paid to the procurement process and supplier relationship management in 2014. Business and IT executives that emphasize a focus on these areas can reduce acquisition costs by double digits and improve flexibility and outcomes.

Security – The use of big data analytics and more collaboration will help improve real-time analysis but security issues will still be evident in 2014. RFG expects the fallout from the Target and probable Obamacare breaches will fuel the fears of identity theft exposures and impair ecommerce growth. Furthermore, electronic health and medical records in the cloud will require considerable security protections to minimize medical ID theft and payment of HIPAA and other penalties by SaaS and other providers. Not all providers will succeed and major breaches will occur.

Staffing – IT executives will do limited hiring again this year and will rely more on cloud services, consulting, and outsourcing services. There will be some shifts on suppliers and resource country-pool usage as advanced cloud offerings, geopolitical changes and economic factors drive IT executives to select alternative solutions.

Standardization – More and more IT executives recognize the need for standardization, but advancement will require continued executive push and involvement. Because this will become political, most new initiatives will be the result of the desire for cloud computing rather than internal leadership.

SLAs – Most IT executives and cloud providers have yet to provide the service levels businesses are demanding. More and better SLAs, especially for cloud platforms, are required. IT executives should push providers (and themselves) for SLAs covering availability, accountability, compliance, performance, resiliency, and security. Companies that address these issues will be the winners in 2014.
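When pushing providers on availability SLAs, it helps to translate the percentage into the downtime it actually permits per year; a minimal sketch:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def allowed_downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year permitted by a given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% availability allows {allowed_downtime_hours(sla):.2f} hours/year down")
```

At 99.0% a provider may be down more than three and a half days a year, while 99.99% allows under an hour, which is why each added "nine" is worth negotiating for explicitly rather than accepting a vague uptime promise.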

Watson – The IBM Watson cognitive system is still at the beginning of the acceptance curve, but IBM is opening up Watson for developers to create their own applications. 2014 might be a breakout year, starting a new wave of cognitive systems that will transform how people and organizations think, act, and operate.

RFG POV: 2014 will likely be a less daunting year for IT executives but people and process issues will have to be addressed if IT executives hope to achieve their goals for the year. This will require IT to integrate itself with the business and work collaboratively to enhance operations and innovate new, simpler approaches to doing business. Additionally, IT executives will need to invest in process improvements to help contain costs, enhance compliance, minimize risks, and improve resource utilization. IT executives should collaborate with business and financial executives so that IT budgets and plans are integrated with the business and remain so throughout the year.
