Browsing articles tagged with "security"

IaaS and PaaS: Mature Enough for Financial Services Firms?

Aug 1, 2014   //   by admin   //   Reports

RFG Perspective: Since 2008 financial services firms have been under constant pressure to grow revenues and contain costs, which is driving IT executives to invest in cloud computing. Business executives do not value infrastructure per se; consequently, there is a push toward cloud computing to drive down costs as well as to enhance agility. Moreover, IT executives want to implement Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) cloud solutions that are vendor-independent, first-of-a-kind implementations that provide portability. These solutions must also satisfy regulators, which, when it comes to compliance and security, is no simple task. However, since cloud computing standards are still nascent, with many conflicting APIs and standards in the works, IT executives face an opportunity/cost trade-off: wait for the technology to mature, or implement cloud solutions now and risk the need for change.

Business Imperatives:

  • IaaS APIs, solutions and standards are still immature and in a state of flux. Even OpenStack, which claims to offer a robust infrastructure solution, is still initiating new projects (e.g., Murano and Solum) to address additional infrastructure requirements. Unfortunately, each IaaS software methodology is different, causing user software to become solution-dependent and hindering elasticity and freedom of movement. Amazon AWS currently provides the best offering, but gaps remain. IT executives must develop common standards, a common taxonomy, and use cases that can be used to push vendors into delivering solutions that meet industry requirements.
  • Which PaaS offering – and whether or not to use it – is heavily dependent upon application characteristics, workload portability, compliance, disaster recovery, and security requirements. The migration to PaaS means IT executives need to address DevOps and lifecycle management, which are not just technology challenges but also culture, people and process paradigm shifts. IT executives must re-evaluate their development lifecycle based on the new PaaS technologies, including defining their baseline requirements and policies for automation, continuous build, and version control.
  • Compliance and security are not the same thing but are frequently intermixed when discussed. Much has been done to address security, but compliance gaps remain that must be closed and the methodologies standardized so that buy-in can be obtained across the industry from the regulators. IT executives must work on a common set of standards that regulators will sign off on and vendors can agree to support.


RFG has held a number of cloud forums for IT executives and senior architects of the major financial institutions over the past few months in New York City and London. This research note summarizes the discussions, findings, and desired actions required for cloud computing to become an operating standard and penetrate the financial firms in a cohesive, coordinated way.

CTO Panels

CEOs, CTOs, and other top executives from IaaS vendors Canonical, CloudScaling, IBM/SoftLayer, SolidFire, SunGard Availability Services, and SwiftStack provided their views on the status of and trends in IaaS. All agreed there is a lot of work to be done before there are common APIs and standards that would give users portability and facilitate agility and scalability. One vendor executive postulated that in five years hybrid clouds may reach the point where cloud environments are seamless, with common APIs and security policies. One challenge for user executives is that some applications are infrastructure-aware while others are not. For true independence and flexibility, this awareness must be eliminated or be resolvable through dependency or policy mappings.

Development teams, especially DevOps staff, should not need to know the underlying infrastructure; they should only have to address orchestration and policies. On the question of compliance and security, the panel agreed that responsibility for policy enforcement, security, and governance belongs in each component of the stack. The need for software-defined compliance was also addressed, with the consensus that it must be built in at the start, not bolted on after the fact. IT executives were advised to contemplate two kinds of clouds: a virtualization cloud for legacy applications, aimed at improving cost efficiency; and an innovation cloud designed to help developers get new applications to market faster. Cloud architects must be able to stitch these clouds together.
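
The "built in at the start" point is easiest to see as policy-as-code. The sketch below is purely illustrative and assumes hypothetical component names and checks, not any product discussed at the forum: each stack component carries its own compliance checks, and provisioning is blocked the moment a check fails.

```python
# Illustrative policy-as-code sketch: each stack component carries its own
# compliance checks, evaluated before provisioning rather than after the fact.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    config: dict
    checks: list = field(default_factory=list)  # (label, predicate) pairs

    def failed_checks(self) -> list:
        """Labels of compliance checks this component's config violates."""
        return [label for label, check in self.checks if not check(self.config)]

storage = Component(
    name="block-storage",
    config={"encrypted": True, "region": "us-east"},
    checks=[("encryption-at-rest", lambda c: c.get("encrypted") is True)],
)
network = Component(
    name="virtual-network",
    config={"public_ingress": True},
    checks=[("no-public-ingress", lambda c: not c.get("public_ingress"))],
)

# Evaluate policies up front; block provisioning on any failure.
for component in (storage, network):
    failures = component.failed_checks()
    print(component.name, "OK" if not failures else f"BLOCKED {failures}")
```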

A second panel consisting of CEOs, CTOs, and other executives from PaaS vendors ActiveState, Citrix, GigaSpaces, Mirantis, and MuleSoft offered their opinions on PaaS status and trends. All agreed that the PaaS and IaaS layers are not blurring, but the application and platform layers are, and the distinction will blur further as vendors add layers and build higher-order services. Nonetheless, all PaaS frameworks should run on any IaaS layer. The issue of DevOps arose again, with executives pointing out that DevOps is not just a technology issue; it must also address policies (such as security), processes, and cultural change. Developers need to rethink their roles and focus more on orchestrating services than on purely writing code.

Vendors conceded that IaaS and PaaS solutions are still immature and suggested IT executives view the use of clouds as an opportunity/cost analysis. IT executives and their firms can wait until the technology matures, or they can invest now, shoot for first-mover advantage, and risk rework when standards emerge that are inconsistent with their implementations. The rewrite risk was postulated to be less expensive than the risk of market losses to competitors.

The panel discussed the requirement for common APIs and for workload affinity and portability. While there was agreement on the need for common APIs, there was disagreement on the right level of abstraction for them. All agreed workload affinity will apply to PaaS platforms, which means IT executives will need to determine which workloads fit which PaaS offerings before attempting to migrate them. Successful PaaS solutions will allow for application portability on- or off-premises. The movement toward composable elements will enable this capability. The challenge will be mapping application services across divisions or organizations, as even file movements look different across organizations. The panel voiced support for software-defined solutions, including software-defined operators.

IaaS

In the IaaS track, IT executives and architects agreed that there is no winning solution yet. Amazon AWS, Docker, KVM, OpenStack, Rackspace, Ubuntu, VMware, and Xen are amongst the cloud solutions in use. An AWS architect voiced the opinion that more banks use AWS than other solutions because the company works more closely with customers to meet their unique banking requirements than its competitors do. For example, users said Amazon got it right when it bolted down certain components, like the hypervisor, while showing more flexibility elsewhere. However, it was clear from forum discussions that AWS's early dominance is no guarantee that it will remain the 800-pound gorilla of the cloud.

One IT executive suggested that IaaS could solve hygiene and maintenance issues while simultaneously driving down the cost of infrastructure maintenance and support. IT executives could view IaaS platforms as disposable – i.e., rather than fixing bugs and applying upgrades, platforms are discarded and new ones provisioned. Smartphones use this concept today, and it could be a transformative approach to keeping infrastructure software current.
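
The disposable-platform idea corresponds to what is often called immutable infrastructure: never patch a running instance; build a replacement from a newer image and retire the old one. A minimal sketch, with provision() and retire() as hypothetical stand-ins for a real IaaS provider's API:

```python
# Immutable-infrastructure sketch: an "upgrade" replaces instances rather than
# patching them in place. provision() and retire() are hypothetical stand-ins
# for a real IaaS provider's API.
import itertools

_ids = itertools.count(1)

def provision(image_version: str) -> dict:
    instance = {"id": next(_ids), "image": image_version}
    print(f"provisioned instance {instance['id']} from image {image_version}")
    return instance

def retire(instance: dict) -> None:
    print(f"retired instance {instance['id']}")

def roll_forward(fleet: list, new_image: str) -> list:
    """Replace every instance with a freshly built one; never patch in place."""
    replacements = [provision(new_image) for _ in fleet]
    for old in fleet:
        retire(old)
    return replacements

fleet = [provision("web-2014.07")]
fleet = roll_forward(fleet, "web-2014.08")  # the "upgrade" is a replacement
```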

Moving operations to the cloud represents a paradigm shift for the development cycle and for developers. Business executives are not enthusiastic about paying for infrastructure, as it impairs margins and does not drive revenue. Thus, it behooves organizations to standardize on cloud platforms that provide agility, portability, scalability, security, and cost containment rather than have each application locked into its own infrastructure. This is a 180-degree shift from how the process works today, and IT operations executives need to convince senior management and development executives to change the development culture. However, all agreed that an 80/20 rule most likely applies: 80 percent of the time a few standard IaaS platforms suffice, and 20 percent of the time uniquely modified platforms may be needed for the enterprise to differentiate itself, gain a competitive advantage, and make money.

Lastly, there was consensus amongst users and vendors that there is a need for, at minimum, de facto standards and a common taxonomy. The areas to cover are those currently found in the AWS implementation plus audit, federation, orchestration, and software distribution. The group wants to move forward with a focus on audit and compliance first, using use cases as the baseline for developing requirements.

PaaS

It became clear early in the information exchange that PaaS means different things to different people – even within a single company. There are PaaS offerings for analytics, databases, and disaster recovery as well as for online transaction processing, for example, and they can be self-service, pay-as-you-go, and on-demand. The platforms may have different requirements for availability, compliance, orchestration, scalability, security, and support. Some PaaS solutions are designed for DevOps while others are architected for legacy processes and applications.

The executives chose to focus on business- and mission-critical applications and the solutions employed, such as AWS CloudFormation, Pivotal Software's Cloud Foundry, GitHub, Heat, Jenkins, Murano, OpenShift, Puppet, Solum, and Trove. As the list suggests, the discussion went beyond the PaaS platform itself to application life cycle management. One conclusion was that IT operations executives should keep in mind, when talking to their development counterparts, that development requirements lists are more flexible than most claim – otherwise developers would not be able to use AWS. This bodes well for moving to standardized cloud platforms and away from development teams specifying systems rather than requirements. In the near term, however, application dependencies will be a major problem that users and vendors must solve.

One executive warned that PaaS has a long way to mature and that one component not currently present but desirable is graphics/visualization. He expects visualization tools to simplify the creation of workflow diagrams and the underlying processes. Since this parallels what has occurred in other areas of process automation, RFG believes it is highly likely that such tools will materialize over the long term.

Once it became apparent that PaaS is more about the application life cycle than about the platform itself, DevOps and life cycle management became the prime topic of discussion. Executives envisioned a PaaS solution that supports the development process from the PC development platform through production and on to future releases. However, implementing standardized platforms and DevOps implies a transformational change in the development process. This does not mean eliminating choice, but choices should be limited without stifling innovation. Developers will need to be taught how to move applications rapidly through the development cycle using automated tools. There are platform tools that can watch a repository, see a commit, check it out, run Jenkins on it, take it through the quality assurance cycle, and go live once it gets a "green light." This automated process can shrink development time from months to minutes.
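
That repository-to-production flow can be sketched as a simple polling loop. All helper functions below are hypothetical stand-ins for the real tools named above (an SCM query, a Jenkins build, a QA suite, a release mechanism):

```python
# Sketch of the described flow: watch a repository, and on each new commit
# build, run QA, and deploy only on a "green light". Every helper is a
# hypothetical stand-in for a real SCM, CI, or release tool.
import time

def fetch_head(repo: str) -> str:    # stand-in for e.g. `git ls-remote <repo> HEAD`
    return "abc123"

def run_build(commit: str) -> bool:  # stand-in for triggering a Jenkins build
    return True

def run_qa(commit: str) -> bool:     # stand-in for the automated QA cycle
    return True

def deploy(commit: str) -> None:
    print(f"commit {commit}: green light, deploying to production")

def watch(repo: str, iterations: int = 1, poll_seconds: int = 0) -> None:
    """Poll the repository; build, test, and deploy each new commit."""
    last_seen = None
    for _ in range(iterations):
        head = fetch_head(repo)
        if head != last_seen:
            last_seen = head
            if run_build(head) and run_qa(head):
                deploy(head)
            else:
                print(f"commit {head}: build or QA failed, holding release")
        time.sleep(poll_seconds)

watch("git://example.org/app.git")
```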

There was consensus on the need to redefine the development lifecycle based on the new PaaS technologies, including delineating the baseline requirements and policies for automation, continuous build, and version control.

Compliance and Security

Initially, attendees did not think there was much value in discussing compliance and security for cloud computing, but comments from an IBM security CTO got them rethinking their positions. She stated that IBM is completely rewriting its internal security policies to accommodate cloud computing. Everyone needs to start over and rethink security architecture and controls, especially at the secondary and tertiary levels. The risks have changed and are changing more rapidly as time goes on. Therefore, auditable controls and security must be built in upfront, not as an afterthought, and done on a global basis to contain costs. For the effort to succeed, all units in an enterprise need to be on the same page at each point in time and keep heading in the same direction.

Compliance is a different matter. While everyone is addressing security in some measure, compliance lacks common global standards for infrastructure, platforms, and applications. FFIEC rules and ISO 27002 standards must be met, along with NIST, FedRAMP, and non-U.S. standards in the countries in which the financial institutions operate. There may be 80 percent overlap, but the standards must be mapped and the differences addressed. One of the financial services firms has already mapped the FFIEC rules back to ISO 27002, but the rest have not been addressed. Once the appropriate compliance requirements are mapped, commonalities determined, and gaps addressed, users can go to the regulators to request approval and can ask cloud providers to include the de facto standards in their offerings. The group agreed that a working group should consolidate current standards and guidelines into a document that can be agreed upon and taken to regulators for acceptance.
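
Once each framework's controls are catalogued against a common taxonomy, computing the overlap and the gaps is mechanical. A toy sketch; the control IDs below are invented and are not actual FFIEC or ISO clause numbers:

```python
# Toy compliance-mapping sketch: catalogue each framework's controls against a
# common taxonomy, then compute overlap and gaps. Control IDs are invented.
common_taxonomy = {
    "access-control":     {"ffiec": "FF-1", "iso27002": "9.1"},
    "encryption-at-rest": {"ffiec": "FF-2", "iso27002": "10.1"},
    "audit-logging":      {"ffiec": "FF-3", "iso27002": None},   # gap in ISO mapping
    "vendor-oversight":   {"ffiec": None,   "iso27002": "15.1"}, # gap in FFIEC mapping
}

def coverage(framework: str) -> set:
    """Requirements in the common taxonomy that a framework addresses."""
    return {req for req, m in common_taxonomy.items() if m[framework]}

overlap = coverage("ffiec") & coverage("iso27002")
gaps = {fw: set(common_taxonomy) - coverage(fw) for fw in ("ffiec", "iso27002")}

print("overlap:", sorted(overlap))  # requirements both frameworks address
print("gaps:", gaps)                # requirements each framework misses
```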

Additionally, IT executives need to ensure cloud service provider (CSP) contracts have provisions for certification and/or responsibility for controls. CSPs must take responsibility from a regulatory view if they expect financial firms to be comfortable using their services. The contracts must also clearly call out the roles and responsibilities of both parties and the process for handoffs.

Common Architecture

The purpose of a common architecture is to enable application portability across platforms within an enterprise as well as bursting out to private or public clouds to handle peak loads. This is not to suggest all cloud platforms support all applications. But for those platforms with workload affinity for a certain application set, moving from one instance to another should be a simple task. The goal should be "one click" portability that gives almost instantaneous movement to another instance anywhere in the cloud, or expansion that adds instances and allows for hybrid cloud environments.
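
"One click" portability presupposes a provider-neutral interface for placing workloads. A minimal sketch of such an abstraction, with invented provider classes; real portability would also require image, network, security, and policy mappings:

```python
# Hypothetical provider-neutral interface: the workload is described once and
# can be placed on any provider, including bursting to a public cloud at peak.
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    @abstractmethod
    def launch(self, workload: dict) -> str: ...

class PrivateCloud(CloudProvider):
    def launch(self, workload: dict) -> str:
        return f"private:{workload['name']}"

class PublicCloud(CloudProvider):
    def launch(self, workload: dict) -> str:
        return f"public:{workload['name']}"

def place(workload: dict, load: float, threshold: float = 0.8) -> str:
    """Run on-premise normally; burst to the public cloud at peak load."""
    provider = PublicCloud() if load > threshold else PrivateCloud()
    return provider.launch(workload)

workload = {"name": "risk-calc", "cpus": 8}
print(place(workload, load=0.5))  # private:risk-calc
print(place(workload, load=0.9))  # public:risk-calc (burst)
```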

The IT executives and architects concurred that the common-architecture vision may become a reality in the long term but will require mature standards first. A discussion arose on whether Amazon AWS APIs and standards could be used as a baseline; however, Amazon pointed out that the APIs are copyrighted and approval would be required first. Amazon will look into the possibility of getting approval. In the meantime, the users agreed that the financial services firms will not wait for a common architecture but will invest now to meet their business needs. As a next step, the group agreed to start developing the commonalities for the architecture.

Summary

The IaaS and security groups agreed to a joint effort to review the many overlapping compliance standards for commonalities and reduce them to a bare-essential set of requirements, to which essential security elements will be added. The PaaS group wants a declarative approach to specifying all the resources, the policies among those resources, and the policies for the PaaS platform itself. From a developer life cycle perspective, the group wants a declarative template to cover the various components of the application development life cycle, among them release levels, continuous builds, and testing.
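
What such a declarative template might look like is sketched below; the structure and field names are invented for illustration, covering the three things the groups called out: resources, policies among the resources, and the development life cycle stages.

```python
# Toy declarative template covering resources, inter-resource policies, and
# application life cycle stages; all names are invented for illustration.
template = {
    "resources": {
        "web": {"type": "app-server", "instances": 2},
        "db":  {"type": "database", "engine": "postgres"},
    },
    "policies": [
        {"from": "web", "to": "db", "allow": ["sql"], "encrypt": True},
    ],
    "lifecycle": {
        "build":   {"trigger": "commit", "continuous": True},
        "test":    {"suites": ["unit", "integration"]},
        "release": {"approval": "automatic-on-green", "versioning": "semver"},
    },
}

def validate(tmpl: dict) -> None:
    """Check that every policy references declared resources."""
    names = set(tmpl["resources"])
    for policy in tmpl["policies"]:
        missing = {policy["from"], policy["to"]} - names
        if missing:
            raise ValueError(f"policy references undeclared resources: {missing}")

validate(template)
print("template valid; stages:", ", ".join(template["lifecycle"]))
```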

IT executives suggested including the OCC in the next session to give regulators guidance on financial institutions' direction for APIs, de facto standards, reference architectures, and frameworks, and perhaps to influence the regulators' direction accordingly. Users also asked for a mechanism for tracking workgroup progress and communicating it to other Forum members; suggestions included tools used by other standards organizations and working groups.

In sum, the comments and conclusions of the IT executives and architects in the cloud forums are indicative of the challenges, requirements, and directions of the top U.S. and global financial institutions. The executives believe the time and resources spent developing requirements and standards will be worth it.


RFG POV: Financial services firms are committed to moving to cloud platforms, both on-premise and in private and public clouds. Since they will not wait for IaaS and PaaS offerings to mature, there is a strong commitment to work together to create baseline APIs, requirements, and standards that can serve as frameworks for the financial institutions, regulatory agencies, and cloud vendors. These frameworks should enable the firms to reduce costs, drive cost efficiencies, achieve a level of vendor independence, and simplify compliance with regulatory requirements. Moreover, the frameworks should be applicable to other industries and enable any large enterprise to take advantage of cloud computing more easily and rapidly. IT executives should approach their move to the cloud strategically by defining their policies, frameworks, guidelines, requirements, and standards, and by performing opportunity/cost analyses, before committing to one or more target cloud architectures and implementations.

Predictions: People & Process Trends – 2014

Jan 20, 2014   //   by admin   //   Blog

RFG Perspective: The global economic headwinds of 2014, which constrain IT budgets, will force business and IT executives to examine people and process issues more closely for productivity improvements. Externally, IT executives will have to work with non-IT teams to improve and restructure processes to meet the new mobile/social environments that demand more collaborative and interactive real-time information. Simultaneously, IT executives will have to address the data quality and service level concerns that impact business outcomes, productivity, and revenues so that there is more confidence in IT. Internally, IT executives will need to increase their focus on automation, operational simplicity, and security so that IT can deliver more (again) at lower cost while better protecting the organization from cybercrime.

As mentioned in the RFG blog "IT and the Global Economy – 2014," the global economic environment may not be as strong as expected, thereby keeping IT budgets contained or shrinking. Therefore, IT executives will need to invest in process improvements to help contain costs, enhance compliance, minimize risks, and improve resource utilization. Below are the key areas that RFG believes will be the major people and process improvement initiatives attracting the most attention.

Automation/simplicity – Productivity in IT operations is a requirement for data center transformation. To achieve it, IT executives will push vendors to deliver more automation tools and easier-to-use products and services. Over the past decade some IT departments have improved productivity tenfold, but many lag behind. In support of this, staff must switch from a vertical, highly technical model to a horizontal one in which they manage service layers and relationships. New learning management techniques and systems will be needed to deliver content that can be grasped intuitively. Furthermore, the demand for increased IT services without commensurate budget increases will force IT executives to pursue productivity solutions to satisfy the business side of the house. Thus, automation software, virtualization techniques, and integrated solutions that simplify operations will be attractive initiatives for many IT executives.

Business Process Management (BPM) – BPM will gain more traction as companies continue to cut costs and demand more productivity from staff. Executives will look for BPM solutions that automate redundant processes, get them to the data they require, and/or allow them to respond to rapid-fire business changes within (and external to) their organizations. In healthcare in particular this will become a major thrust, as the industry needs to move toward a "pay for outcomes" and away from a "pay for service" mentality.

Chargebacks – The movement to cloud computing is creating an environment conducive to chargebacks. The financial losers in this game will continue to resist, but the momentum is turning against them. RFG expects more IT executives to implement financially meaningful chargebacks that let business executives better understand what the funds pay for and better allocate IT resources, thereby optimizing expenditures. However, while chargebacks are gaining momentum across all industries, there is still a long way to go, especially for in-house clouds, systems, and solutions.
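
At its simplest, a financially meaningful chargeback allocates metered cost by consumption. An illustrative calculation with invented rates and usage:

```python
# Illustrative chargeback: allocate IT costs to business units by metered
# usage. Rates and usage figures are made up for the example.
rates = {"vm_hours": 0.12, "gb_stored": 0.05}  # $ per unit per month

usage = {
    "retail-banking": {"vm_hours": 12_000, "gb_stored": 8_000},
    "trading":        {"vm_hours": 30_000, "gb_stored": 2_500},
}

for unit, metered in usage.items():
    bill = sum(rates[m] * qty for m, qty in metered.items())
    print(f"{unit}: ${bill:,.2f}/month")
# retail-banking: $1,840.00/month
# trading: $3,725.00/month
```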

Compliance – Thousands of new regulations took effect on January 1, as happens every year, making compliance even tougher. In 2014 the Affordable Care Act (aka Obamacare) kicked in for some companies but not others; compounding this, the U.S. President and the Department of Health and Human Services (HHS) keep issuing modifications to the law, which affect compliance and compliance reporting. IT executives will be hard pressed to keep up with compliance requirements globally and to improve users' support for compliance.

Data quality – A recent study by RFG and Principal Consulting on the negative business outcomes of poor data quality found that a majority of users consider data quality suspect. Most respondents cited inaccurate, unreliable, ambiguously defined, and disorganized data as the leading problems to be corrected. In 2014 some users will partially address this by assigning data confidence levels tied to the type and use of the data. IT must fix this problem if it is to regain trust, but it is not just an IT problem: it is costing companies dearly, in some cases more than 10 percent of revenues. Some IT executives will begin to capture the metrics required to build a business case for a fix, while others will implement data quality solutions aimed at select problems that have proven troublesome.
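
A data confidence level can be approximated by combining simple per-dataset quality scores. A toy sketch with invented records, rules, and weights:

```python
# Toy data-confidence score: rate a dataset on completeness and validity and
# combine the two into one confidence level. Records and rules are invented.
records = [
    {"customer_id": "C1", "balance": 1250.0, "country": "US"},
    {"customer_id": "C2", "balance": None,   "country": "US"},
    {"customer_id": None, "balance": -50.0,  "country": "UK"},
]

def completeness(rows: list) -> float:
    """Fraction of cells that are populated."""
    cells = [v for r in rows for v in r.values()]
    return sum(v is not None for v in cells) / len(cells)

def validity(rows: list) -> float:
    """Fraction of rows passing a sample rule: balance present and non-negative."""
    ok = sum(1 for r in rows if r["balance"] is not None and r["balance"] >= 0)
    return ok / len(rows)

confidence = 0.5 * completeness(records) + 0.5 * validity(records)
print(f"completeness={completeness(records):.2f} "
      f"validity={validity(records):.2f} confidence={confidence:.2f}")
```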

Operations efficiency – This will be an overriding theme for many IT operations units. As has been the case over the years, the factors driving improvement will be automation, standardization, and consolidation, along with virtualization. However, for this to become mainstream, IT executives will need to know and monitor the key data center metrics, which for many will remain a challenge despite all the tools on the market. Look for minor advances in usage but major double-digit gains for those addressing operations efficiency.
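
The note does not name specific metrics, but power usage effectiveness (PUE) is one widely tracked example: total facility energy divided by IT equipment energy, with 1.0 as the ideal.

```python
# Power usage effectiveness (PUE), one commonly tracked data center metric:
# total facility energy divided by IT equipment energy (1.0 is the ideal).
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

print(pue(1_800_000, 1_000_000))  # 1.8: 0.8 kWh of overhead per IT kWh
```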

Procurement – With the requirement for agility and the move toward cloud computing, more attention will be paid to the procurement process and supplier relationship management in 2014. Business and IT executives that focus on these areas can reduce acquisition costs by double digits and improve flexibility and outcomes.

Security – The use of big data analytics and more collaboration will help improve real-time analysis, but security issues will still be evident in 2014. RFG expects the fallout from the Target breach and probable Obamacare breaches to fuel fears of identity theft and impair e-commerce growth. Furthermore, electronic health and medical records in the cloud will require considerable security protections to minimize medical ID theft and the payment of HIPAA and other penalties by SaaS and other providers. Not all providers will succeed, and major breaches will occur.

Staffing – IT executives will do limited hiring again this year and will rely more on cloud services, consulting, and outsourcing. There will be some shifts in suppliers and in country resource-pool usage as advanced cloud offerings, geopolitical changes, and economic factors drive IT executives to alternative solutions.

Standardization – More and more IT executives recognize the need for standardization, but advancement will require continued executive push and involvement. Because the topic becomes political, most new initiatives will result from the desire for cloud computing rather than from internal leadership.

SLAs – Most IT executives and cloud providers have yet to deliver the service levels businesses are demanding. More and better SLAs, especially for cloud platforms, are required. IT executives should push providers (and themselves) for SLAs covering availability, accountability, compliance, performance, resiliency, and security. Companies that address these issues will be the winners in 2014.
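
For the availability element in particular, it helps to translate an SLA's "nines" into a concrete annual downtime budget:

```python
# Translate an availability SLA into an annual downtime budget.
HOURS_PER_YEAR = 24 * 365

for availability in (0.99, 0.999, 0.9999):
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.2%} -> {downtime_hours:.1f} hours/year allowed downtime")
# 99.00% -> 87.6 hours/year
# 99.90% -> 8.8 hours/year
# 99.99% -> 0.9 hours/year
```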

Watson – The IBM Watson cognitive system is still at the beginning of the acceptance curve, but IBM is opening up Watson for developers to create their own applications. 2014 might be a breakout year, starting a new wave of cognitive systems that will transform how people and organizations think, act, and operate.

RFG POV: 2014 will likely be a less daunting year for IT executives but people and process issues will have to be addressed if IT executives hope to achieve their goals for the year. This will require IT to integrate itself with the business and work collaboratively to enhance operations and innovate new, simpler approaches to doing business. Additionally, IT executives will need to invest in process improvements to help contain costs, enhance compliance, minimize risks, and improve resource utilization. IT executives should collaborate with business and financial executives so that IT budgets and plans are integrated with the business and remain so throughout the year.

The Little Mainframe That Could

Aug 23, 2013   //   by admin   //   Blog

RFG Perspective: The just-launched IBM Corp. zEnterprise BC12 servers are very competitive mainframes that should be attractive to organizations with revenues in excess of, or expanding to, $100 million. The entry-level mainframes, which replace last generation's z114 series, can consolidate up to 40 virtual servers per core, or up to 520 in a single footprint, for as low as $1.00 per day per virtual server. RFG projects that the zBC12 ecosystem could be up to 50 percent less expensive than comparable all-x86 distributed environments. IT executives running Java or Linux applications, or eager to eliminate duplicative shared-nothing databases, should evaluate the zBC12 ecosystem to see if the platform can best meet business and technology requirements.

Contrary to public opinion (and that of competing hardware vendors), the mainframe is not dead, nor is it dying. In the last 12 months the zEnterprise mainframe servers extended their growth streak to a tenth straight year, according to IBM. The installed MIPS (millions of instructions per second) base jumped 23 percent year over year, and revenues jumped 10 percent. There have been 210 new accounts since the zEnterprise launch, as well as 195 zBX units shipped. More than 25 percent of all installed MIPS run on IFLs, specialty engines that run Linux only, and three-fourths of the top 100 zEnterprise customers have IFLs installed. The ISV base continues to grow, with more than 7,400 applications available, and more than 1,000 schools in 67 countries participate in the IBM Academic Initiative for System z. This is not a dying platform but one gaining ground in an overall stagnant server market. The new zBC12 will enable the mainframe platform to grow further and expand into lower-end markets.

zBC12 Basics

The zBC12 is faster than the z114, using a 4.2 GHz 64-bit processor, and has twice the maximum memory of the z114 at 496 GB. The zBC12 can be leased starting at $1,965 a month, depending upon the enterprise's creditworthiness, or purchased starting at $75,000. RFG has done multiple TCO studies on zEnterprise Enterprise Class server ecosystems and estimates the zBC12 ecosystem could be 50 percent less expensive than an x86 distributed environment of equivalent computing power.
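
As a back-of-the-envelope check using only the figures quoted above (hardware lease only; software, storage, power, and staffing would add to a real TCO figure):

```python
# Back-of-the-envelope check on the quoted entry pricing: hardware lease cost
# per virtual server per day at full consolidation. Lease only; software,
# storage, power, and staffing would add to a real TCO figure.
lease_per_month = 1_965  # quoted starting lease, $/month
virtual_servers = 520    # quoted maximum per footprint

per_server_per_day = lease_per_month * 12 / 365 / virtual_servers
print(f"${per_server_per_day:.3f} per virtual server per day")  # ~$0.124
```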

On the analytics side, the zBC12 offers the IBM DB2 Analytics Accelerator, which IBM says delivers significantly faster performance for workloads such as Cognos and SPSS analytics. The zBC12 also attaches to Netezza and PureData System for Analytics appliances for integrated, real-time operational analytics.

Cloud, Linux and Other Plays

On the cloud front, IBM is a key contributor to OpenStack, an open and scalable operating system for private and public clouds. OpenStack was initially developed by Rackspace Hosting and NASA, and it currently has a community of more than 190 companies supporting it, including Dell Inc., Hewlett-Packard Co. (HP), IBM, and Red Hat Inc. IBM has also added its z/VM hypervisor and z/VM operating system APIs for use with OpenStack. By using this framework, public cloud service providers and organizations building out their own private clouds can benefit from zEnterprise advantages such as availability, reliability, scalability, security, and cost.

As stated above, Linux now accounts for more than 25 percent of all System z workloads, which can run on zEnterprise systems with IFLs or on a Linux-only system. The standalone Enterprise Linux Server (ELS) uses the z/VM virtualization hypervisor and has more than 3,000 tested Linux applications available. IBM provides a number of specially priced zEnterprise Solution Editions, including Cloud-Ready for Linux on System z, which turns the mainframe into an Infrastructure-as-a-Service (IaaS) platform. Additionally, the zBC12 comes with EAL5+ certification, one of the highest levels of security protection available on a commercial server.

The zBC12 is an ideal candidate to act as the primary data server platform for mid-market companies. RFG believes organizations can save up to 50 percent of their IT ecosystem costs if the mainframe handles all the data serving, since it provides a shared-everything data storage environment. Distributed computing platforms are designed for shared-nothing data storage, which means duplicate databases must be created for each application running in parallel. Thus, if a dozen applications use the customer database, there are 12 copies of the customer file in use simultaneously, which must be kept in sync as well as possible. The costs of the additional storage and administration can make the distributed solution more costly than the zBC12 for companies with revenues in excess of $100 million. IT executives can architect the systems as ELS-only or with a mainframe central processor, IFLs, and a zBX for Microsoft Corp. Windows applications, depending on configuration needs.
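
The duplication penalty is easy to quantify. With invented figures for database size and storage cost, storage spend scales linearly with the number of shared-nothing copies:

```python
# Shared-nothing vs. shared-everything storage: with N applications each
# keeping its own copy of a database, storage cost scales with N. The
# database size and $/TB figures are invented; sync and administration
# costs would come on top of the shared-nothing number.
db_size_tb = 5
cost_per_tb_month = 30  # $ per TB per month, invented
applications = 12

shared_nothing = applications * db_size_tb * cost_per_tb_month
shared_everything = db_size_tb * cost_per_tb_month

print(f"shared-nothing:    ${shared_nothing:,}/month")    # $1,800
print(f"shared-everything: ${shared_everything:,}/month")  # $150
```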

Summary

The mainframe myths have misled business and IT executives into believing mainframes are expensive and outdated, and have led to higher data center costs and sub-optimization for mid-market and larger companies. With the new zEnterprise BC12, IBM has an effective server platform that can counter the myths and provide IT executives with a solution that helps companies contain costs, become more competitive, and transform to a consumption-based usage model.

RFG POV: Each server platform is architected to execute certain types of application workloads well. The BC12 is an excellent server solution for applications requiring high availability, reliability, resiliency, scalability, and security. The mainframe handles mixed workloads well, is best of breed at data serving, and can excel in cross-platform management and performance using its IFLs and zBX processors. IT executives should consider the BC12 when evaluating platform choices for analytics, data serving, packaged enterprise applications such as CRM and ERP systems, and Web serving environments.