Cloud: The Wasted Decade?

‘Cloud Computing’ remains the darling of the technology industry. ‘Cloud’, we are told, will reduce operational cost while increasing business agility. ‘Cloud’, we are promised, will re-shape the way we think about IT services.

The problem is, I don’t understand how these transformational effects are achieved. Current 1st-generation ‘Compute Clouds’ result from the synthesis of two underlying technology enablers:
  1. The adoption of coarse-grained Service Orientation (WS-* or REST) allowed interconnected business services to be decoupled from one another. This is good.
  2. The use of the virtual machine image as the deployment artifact allowed applications to be deployed ‘unchanged’ into alien runtime environments.
And therein lies the problem.

How can applications be more agile when their internal composition remains unchanged? When they remain monolithic entities with tightly coupled constituent components? Given this, how are spiralling application maintenance costs countered? Exactly how is environmental complexity addressed? Are not 1st-generation public Compute Cloud offerings simply approaches to resource outsourcing: more flexible than previous approaches from a resource perspective, but potentially far more restrictive, and ultimately dangerous, as business logic becomes rigidly locked to sets of third-party Cloud middleware APIs? Meanwhile, are 1st-generation private ‘Compute Clouds’ not just the latest marketing tag-line for traditional enterprise software deployment products?

After a decade of ‘industry innovation’, I fear Cloud Computing and resource Virtualization have collectively made the world more complex, our business systems more brittle, and ultimately more costly to support.

In 2004 Paremus started a project to address what we considered to be the fundamental problem: how to build adaptive and robust application and runtime environments. Despite the ongoing Cambrian explosion of programming languages, Paremus did not believe the answer lay in the adoption of any particular programming language, new or old; e.g. Scala or Erlang! We also felt that the fashion to ‘Virtualize’ was, at best, a distraction; a position which placed Paremus at odds with the rest of the industry.

Having a predominance of physicists, Paremus turned to an area of scientific research known as ‘Complex Adaptive Systems’, hoping this would provide the architectural guidance we needed. It did, and we distilled the following principles:
  • Change: Whether you like it or not, it’s going to happen. Change must not be viewed as an inconvenient edge case or ‘Black Swan’ event.
  • Necessary Complexity: Systems that do interesting things are by their nature Complex.
  • Loose Coupling: While Complexity isn’t the issue, rigid structural coupling within Complex systems is fatal. Constituent parts must be loosely coupled.
  • Stigmergy: Don’t manage the details; rather, set the overall objective and enable the System’s constituent components to respond appropriately: i.e. Think Globally, Act Locally.
  • Structural Modularity: In complex systems structural modularity always exists in a natural hierarchy.
  • Self Describing: Modular systems composed of self-describing units (each unit describes its requirements and capabilities) may be dynamically assembled; a manifest sketch follows this list.
  • Diversity: Structural modularity enables diversity, which enables agility and evolvability.
  • Accidental Complexity: Structural modularity enables accidental complexity to be reduced over time.
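
To make the ‘Self Describing’ principle concrete, here is a minimal sketch of an OSGi bundle manifest; the bundle and package names are invented for illustration. The bundle declares the capabilities it offers and the requirements it depends upon, which is exactly the metadata a resolver needs:

    Bundle-SymbolicName: com.example.pricing
    Bundle-Version: 1.2.0
    Export-Package: com.example.pricing.api;version="1.2.0"
    Import-Package: com.example.marketdata.api;version="[2.0,3.0)"

Given a repository of such self-describing bundles, a resolver can compute a complete, consistent deployment from the declared requirements and capabilities alone; no hand-maintained deployment scripts are required.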

Creating the illusion of a static environment, via ‘virtualization’, for rigid and inflexible monolithic applications is a mistake: an easily consumed pain killer that merely masks the underlying illness.

The body of research was both compelling and conclusive.

Modularity is the keystone. Modularity enables systems to be dynamically assembled and, if required, re-assembled from self-describing components.

Of all the languages available, only one (Java) had a sufficiently sophisticated modularity framework (OSGi) to meet our requirements. A set of open industry specifications created by the OSGi Alliance, OSGi provided many of the characteristics we required:
  • Modules (known as OSGi Bundles) are self-describing entities whose capabilities and requirements are well defined.
  • A powerful resolver capability allows OSGi-based applications to be dynamically assembled during development or at runtime. This covers not only bundle dependencies, but also life-cycle and, looking ahead, service and environmental resource dependencies.
  • OSGi provides a well defined and industry standard approach to life-cycle and dynamic configuration.
  • Finally, OSGi provides a powerful micro-Services layer, completing a natural structural hierarchy (Classes ➔ Packages ➔ Bundles ➔ microServices ➔ traditional SOA); a short service example follows this list.
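
To make the micro-Services layer concrete, here is a minimal sketch in plain OSGi API; the Greeter contract and all names are invented for illustration, and the provider and consumer would normally live in separate bundles. A service is published into, and discovered from, the framework’s service registry at runtime:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;

    // Hypothetical service contract, exported by a shared API bundle.
    interface Greeter {
        String greet(String name);
    }

    // Provider bundle: publishes an implementation into the service registry.
    class GreeterProvider implements BundleActivator {
        @Override
        public void start(BundleContext context) {
            context.registerService(Greeter.class, name -> "Hello, " + name, null);
        }

        @Override
        public void stop(BundleContext context) {
            // Services registered by this bundle are unregistered automatically.
        }
    }

    // Consumer bundle: discovers the service via the registry at runtime.
    class GreeterConsumer implements BundleActivator {
        @Override
        public void start(BundleContext context) {
            ServiceReference<Greeter> ref = context.getServiceReference(Greeter.class);
            if (ref != null) {
                Greeter greeter = context.getService(ref);
                System.out.println(greeter.greet("OSGi"));
                context.ungetService(ref);
            }
        }

        @Override
        public void stop(BundleContext context) {
        }
    }

Because both sides interact only through the registry and the shared contract, either may be installed, updated or removed independently; this is the essence of the micro-Services layer.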
For these reasons Paremus adopted OSGi as a cornerstone of our product strategy in 2005. The result of these ongoing efforts is the Service Fabric: the industry’s first distributed OSGi runtime, which enables:
  • Business applications to be dynamically assembled from re-usable components.
  • The dynamic assembly and configuration of these applications with respect to the runtime environment within which they find themselves.
  • Middleware services to be dynamically installed as required; no Cloud / PaaS API lock-in.
  • Dynamic reconfiguration of all runtime elements (business logic, middleware and the platform itself) in response to unforeseen cascading resource failure; see the sketch following this list.
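
The Service Fabric’s internals are Paremus-specific, but the dynamic re-binding behaviour described above can be illustrated with a standard OSGi mechanism: the ServiceTracker. The MessageBroker contract below is invented for illustration; this is a sketch, not the Service Fabric API. A component tracks a dependency as service instances come and go, re-binding rather than failing when its environment changes:

    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;
    import org.osgi.util.tracker.ServiceTracker;

    // Hypothetical middleware dependency used by some business component.
    interface MessageBroker {
        void publish(String topic, String payload);
    }

    // Tracks MessageBroker services, binding and un-binding as instances
    // appear and disappear, rather than assuming a fixed static dependency.
    class BrokerTracker extends ServiceTracker<MessageBroker, MessageBroker> {
        BrokerTracker(BundleContext context) {
            super(context, MessageBroker.class, null);
        }

        @Override
        public MessageBroker addingService(ServiceReference<MessageBroker> ref) {
            MessageBroker broker = super.addingService(ref); // bind to the new arrival
            System.out.println("MessageBroker arrived: re-binding");
            return broker;
        }

        @Override
        public void removedService(ServiceReference<MessageBroker> ref, MessageBroker broker) {
            System.out.println("MessageBroker departed: awaiting a replacement");
            super.removedService(ref, broker); // un-bind; tracking continues
        }
    }

Once opened with new BrokerTracker(context).open(), the tracker’s getService() always answers with a currently live MessageBroker (or null), so a dependent component can degrade and recover gracefully as resources fail and return.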

Why now?
Whether a financial market crash or a paradigm shift in the IT industry, it is relatively easy to forecast that a change is inevitable, but notoriously difficult to predict precisely when it will occur. It is inevitable that next-generation cloud environments, public or private, will support the dynamic assembly of highly modular applications, constructed from re-usable, industry-standard components: meaning OSGi. It is also inevitable that these environments will themselves be OSGi from the ground up. OSGi remains the only open industry standard, and so will be highly influential for those organisations that want to leverage a decade of expertise and avoid proprietary vendor lock-in. The OSGi Alliance has recognised this trend and is extending its specification activities to include Cloud Computing and generic Modularity and Life-Cycle management.

But is that time now?

For some organisations the ‘Boiling Frog’ analogy is, unfortunately, apt. Such organisations will continue to endure increasing OPEX costs, avoiding the upfront expense and perceived risk of re-orientating their IT strategy to focus on optimising maintainability, agility and resilience. For these organisations, Cloud and PaaS strategies will consist of virtual machine management layered over the same old application software stack, and they will see little real business benefit.

Increased interest in DevOps tooling (e.g. Puppet, Chef) demonstrates that other organisations have started to look beyond virtual-machine-centric strategies. While such DevOps tools start to address dynamic deployment and configuration, they still do so with respect to monolithic applications, and so are in essence no different from the IBM Tivoli and HP management tools of the 1990s.

Meanwhile, organisations that view technology as an enabler rather than a cost are starting to look seriously at development and runtime modularity, in part as a way to realise continuous release and agile development. For Java-centric organisations this is driving the adoption of OSGi. One simple metric I can provide: in the last 12 months Paremus has provided OSGi training to Government, Financial Services and Web companies in the US, Europe and Asia. In most of these organisations the relationship between OSGi, Agile Development and Cloud was a major interest.

Despite all our economic woes, is the time now? Quite possibly!