Zen and the Art of Cloud Computing

Change is inevitable. Change is constant.

Benjamin Disraeli

I used the enclosed “Cloud Computing” slide set to summarize the Paremus position with respect to Cloud at the OSGi Cloud workshop (EclipseCon 2010 – organised by Peter Kriens).

The slides attempted to communicate the following fundamentals:

  • The Inevitability of Change
  • The strong fundamental relationship between Agility and Robustness
  • The need to Simplify through Abstraction

The implications being that:

  • Clouds need to be reactive runtimes, able to dynamically assemble and maintain themselves, as well as the composite business services which run upon them.
  • Modularity through OSGi is the key enabler.

To explore these concepts further I will be publishing a series of short blog articles on the ‘Zen and the Art of Cloud Computing’ theme, each concerned with a specific idea and how it is realized within the context of the Paremus Service Fabric.

Stay tuned….

Happy New Year

2009 was an interesting year for Paremus. Despite the bitter economic climate, OSGi finally captured the imagination of the software industry; with this, in conjunction with the intensifying ‘Cloud Computing’ drum-beat, came an increased appreciation of the capabilities and benefits brought by the Paremus Service Fabric.

In the closing months of 2009, Paremus released Nimble as a stand-alone product. A state-of-the-art dependency resolution engine whose mantra is ‘Making Modularity Manageable’, Nimble struck a chord with many of you, and I’d like to thank you all for the extremely positive reception it received.

For those interested in private, public and hybrid ‘Clouds’, the year also closed with an interesting series of Christmas posts (or lectures 🙂 ) by Chris Swan charting his successful attempt to create a hybrid-cloud ‘platform as a service’ by combining the Paremus Service Fabric with Amazon EC2.

So what can we expect in 2010? From a software industry perspective, be prepared for some fundamental shifts as the industry really starts to grapple with modularisation, cloud computing and, when used in combination, what they really mean! As usual, Kirk Knoernschild captures the moment in his latest post ‘A New Year’s Declaration’.

Whilst not wanting to give the game away, I can say that Paremus will be making a number of interesting announcements over the coming months. Those of you who are interested in OSGi should keep a close eye on Nimble via the Nimble blog. Meanwhile, those interested in low-latency computing, private Cloud Computing and innovative implementations of OSGi EEG standards work should keep a watching brief on the Paremus Service Fabric.

Enough said.

Announced in 2005, the Newton project was, by several years, the industry’s first distributed SCA / OSGi runtime platform! Since then, Newton has indirectly influenced industry standards and many of our competitors’ roadmaps. However, since early 2008, our commercial Service Fabric has rapidly evolved beyond the initial concepts encapsulated within Newton. Given this, we start 2010 by announcing that Newton will be archived and the CodeCauldron open source community closed.

In due course, a number of new OSS projects will be announced on the new Paremus community site. The CodeCauldron OSS experience and these future initiatives will be the subject of subsequent posts.

In the meantime, I wish Paremus customers, partners and friends all the very best for 2010.

Richard

Cloud Computing – finally, FINALLY, someone gets it!

I’ve been really busy these last few months, so have had neither the time nor the inclination to post. Yet after reading Simon Crosby’s recent article ‘Whither the Venerable OS?’ I felt compelled to put pen to paper – or rather, fingers to keyboard.

Whilst the whole article is a good read, the magic paragraph for me appears towards the end:

If IaaS clouds are the new server vendors, then the OS meets the server when the user runs an app in the cloud. That radically changes the business model for the OS vendor. But is the OS then simply a runtime for an application? The OS vendors would rightly quibble with that. The OS is today the locus of innovation in applications, and its rich primitives for the development and support of multi-tiered apps that span multiple servers on virtualized infrastructure is an indication of the future of the OS itself: Just as the abstraction of hardware has extended over multiple servers, so will the abstraction of the application support and runtime layers. Unlike my friends at VMware who view virtualization as the “New OS” I view the New OS as the trend toward an app isolation abstraction that is independent of hardware: the emergence of Platform as a Service.

Yes! Finally someone understands!

This is, IMO, exactly right, and it is the motivation behind the Paremus Service Fabric, a path we started down in 2004!

OK, so we were a bit ahead of the industry innovation curve.

Anyway, related commentary on the internet suggests that Simon’s article validates VMware’s acquisition of SpringSource. Well, I’d actually argue quite the opposite. Normal operating systems have been designed to run upon a fixed, unchanging resource landscape; in contrast, a ‘Cloud’ operating system must be able to adapt, and must allow hosted applications to adjust, to a continuously churning set of IaaS resources. Quite simply, SpringSource do not have these capabilities in any shape or form.

However, I would disagree with the last point in Simon’s article. Having reviewed Microsoft’s Azure architecture, it seems to me no different from the plethora of Cloud / distributed ISV solutions. Microsoft’s Azure platform has a management / provisioning framework that fundamentally appears to be based on a Paxos-like consensus algorithm; this is no different from the variety of ISVs using Apache ZooKeeper as a registry / repository: all connection-oriented architectures, all suffering from the same old problems!

Whilst such solutions are robust in a static environment, they fail to account for the realities of complex system failures. Specifically, rather than being isolated, uncorrelated events, failures in complex systems tend to be correlated and to cascade! Cloud operating systems must address this fundamental reality, and Microsoft are no further ahead than VMware or Google; indeed, the race hasn’t even started yet! And the best defence against cascading failure in complex systems? That would be dynamic re-assembly, driven by ‘eventual’ structural and data consistency.
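To make that last point concrete, here is a minimal sketch of the idea in Java. It is purely illustrative (not Paremus code, and all the names are mine): the runtime holds a desired target state, observes the actual state, and repeatedly converges the two, so recovery from a correlated failure is just another reconciliation pass.

    // Illustrative sketch only: a minimal "eventual consistency" reconciliation loop.
    import java.util.HashSet;
    import java.util.Set;

    public class ReconciliationLoop {

        private final Set<String> targetState = new HashSet<>(); // what should be running
        private final Set<String> actualState = new HashSet<>(); // what is observed running

        /** One pass: converge the actual state towards the target state. */
        public void reconcile() {
            for (String service : targetState) {
                if (!actualState.contains(service)) {
                    deploy(service);   // missing (e.g. lost to a failure): re-assemble it
                }
            }
            for (String service : new HashSet<>(actualState)) {
                if (!targetState.contains(service)) {
                    undeploy(service); // no longer wanted: remove it
                }
            }
        }

        private void deploy(String service)   { actualState.add(service); }
        private void undeploy(String service) { actualState.remove(service); }
    }

In a real runtime ‘deploy’ would of course involve resolution and provisioning across many machines; the shape of the loop – converge, rather than assume a static world – is the point.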

How Nimble is your OSGi runtime?

Hands up all of you managing OSGi dependencies via an editable list of bundles. Easy, isn’t it? It just works, right?!

Well actually, it ‘just works’ for a single application running in a small number of containers. From an enterprise perspective, you are unintentionally contributing to an impending complexity meltdown: an explosion of dependency and configuration management issues. And if you are unfortunate enough to end up supporting your own composite creations, you may well end up envying the fate of Prometheus and rueing the day you learnt to code.

Possibly harsh? But I’m not alone in voicing this concern!

In his recent article Reuse: Is the Dream Dead?, Kirk Knoernschild continues his efforts to educate the industry on the tension between code ‘re-use’ and ‘simplicity of use’. Kirk argues that as you increase potential re-use via lightweight, fine-grained components, the complexity of dependencies and necessary environmental configurations correspondingly increases, making these same components harder to use.

A simple concept, yet, if left unaddressed, an issue that will make your life as an enterprise developer increasingly uncomfortable and help edge OSGi closer to that seemingly inevitable ‘trough of disillusionment’.

Yet, from a development perspective the issue of dependency management is well understood.

Whilst the available tooling was initially found wanting, a number of projects now exist to address this, including the SIGIL Eclipse plug-in which Paremus recently contributed to the Apache Felix project (SIGIL leverages Peter Kriens’ BND tool).

In contrast, the issue of dependency management in production is less immediately obvious; its impact is more profound, yet it is generally ignored:

  • Will aspects of the runtime environment affect the runtime dependencies within the application?
  • Will applications be isolated from each other, or might they run within the same JVM?
  • How are the released artifacts subsequently managed in the production environment with respect to ongoing bundle dependency and version management?

Echoing Kirk’s concerns, Robert Dunne started his presentation at OSGi DevCon Europe with the observation that ‘whilst modularity was good, its benefits are often undermined by dependency and configuration complexity’. The subject of Robert’s presentation? The Paremus Nimble Resolver, which is our response to the concerns posed by Kirk.

Nimble is a high-performance runtime dependency resolver. To deploy a composite application to a Nimble-enabled runtime (i.e. the Paremus Service Fabric), one specifies:

  • The root component of the artifact.
  • And a set of associated policies and constraints.

Nimble then does the rest.

Presented with the ‘root’, Nimble dynamically constructs the target composite, ensuring that the structural dependencies are resolved in a manner consistent with both organizational policies and the runtime environment within which it finds itself.
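To give a feel for what this means, here is a deliberately simplified sketch in Java – emphatically not the real Nimble API, just my illustration of the principle: given a root and a policy, compute the transitive closure of requirements against a repository of capability providers.

    // Illustrative sketch only: recursive, policy-constrained dependency resolution.
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.function.Predicate;

    public class SimpleResolver {

        /** A component and the capabilities it requires. */
        record Component(String name, List<String> requirements) {}

        private final Map<String, Component> repository; // capability name -> provider
        private final Predicate<Component> policy;       // e.g. organizational constraints

        SimpleResolver(Map<String, Component> repository, Predicate<Component> policy) {
            this.repository = repository;
            this.policy = policy;
        }

        /** Resolve the transitive dependency closure, starting from the root. */
        Set<Component> resolve(Component root) {
            Set<Component> closure = new LinkedHashSet<>();
            walk(root, closure);
            return closure;
        }

        private void walk(Component component, Set<Component> closure) {
            if (!closure.add(component)) return; // already resolved
            for (String requirement : component.requirements()) {
                Component provider = repository.get(requirement);
                if (provider == null || !policy.test(provider)) {
                    throw new IllegalStateException("Unresolvable: " + requirement);
                }
                walk(provider, closure);
            }
        }
    }

Real resolution must additionally handle version ranges, optional dependencies and multiple candidate providers (hence the policies and constraints above), but the requirement-to-capability walk is the essence.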

Nimble’s OSGi capabilities include:

  • Fragment attachment policies.
  • Optional import policies.
  • Import version range narrowing.
  • The ability to resolve dependencies on extender bundles (DS, ‘classic’ Spring, Spring DM, iPOJO).

With Nimble policies allowing:

  • The configuration of selected extensions.
  • Flexible requirement -> capability constraint matching.
  • The ability to configure optional dependency resolution behaviors.

Not just for OSGi, Nimble is a generic artifact resolver with a pluggable architecture (one possible shape is sketched after the list below). Any artifact type may be supported, with support currently available for:

  • OSGi Bundles
  • POJOs, ‘classic’ Spring & Spring DM
  • WAR
  • Configurations.
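The plug-in SPI itself isn’t public, but one plausible shape for such an architecture – purely my illustration, with hypothetical names – is a per-artifact-type handler behind a common interface, leaving the resolver core artifact-agnostic:

    // Illustrative sketch only: a hypothetical per-artifact-type plug-in interface.
    import java.net.URI;
    import java.util.List;

    interface ArtifactHandler {
        boolean canHandle(URI artifact);        // e.g. recognise by extension or metadata
        List<URI> dependenciesOf(URI artifact); // what else must be installed first
        void install(URI artifact);             // type-specific installation step
    }

With this shape, supporting a new artifact type is simply a matter of registering another handler; the core never needs to know what a WAR or a ‘Configuration’ actually is.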

A Nimble-enabled runtime quite literally assembles all required runtime application and infrastructure service dependencies dynamically around the deployed business components! Specify a WAR artifact, and Nimble will instantiate the appropriate Servlet engine as dictated by the runtime policy attached to the WAR; i.e. Tomcat or Jetty, Sir? Specify a ‘Configuration’, and Nimble responds by installing the target of the configuration and, of course, its dependencies.

Nimble not only directly addresses Kirk’s concerns, but goes on to radically transform our understanding of the responsibilities and capabilities of next-generation composite-aware Service Platforms. Most importantly, Nimble was created to enable effective re-use whilst making life simpler for you and the organizations you work for.

A Little Early

Whilst recently writing up a white paper, I idly spent some time looking through my usual archive – the Internet (anything to avoid writing). :-/

When did we (Paremus) first announce distributed OSGi again? Answer: not 2009, as one might believe if you listened to all the IT vendor noise about RFC 119, but on December 16th, 2005.

OK – we were a little early 🙂

This press release even had a quote from Jon Bostrom. Jon, six years earlier in 1998, actually visited Salomon Brothers UK to provide a Jini training course to what turned out to be a proto-Paremus team.

This morning I was alerted to a blog concerning Jini and OSGi, which I duly half-read, then responded to. Then I realized that the blogger had actually referenced a short 5-minute talk I gave at the Brussels JCM 10 Jini event in September 2006. As the message from this presentation had been ignored by the community since that point, I was somewhat surprised / pleased to see it referenced.

My message at the time was simple and quite unpopular…

To survive and flourish Jini must embrace OSGi

The other thing that sprang to mind was Jim Waldo’s presentation at the same conference. Unlike mine, this was widely reported with great enthusiasm; I really don’t mind, Jim 🙂

The interesting thing – at least to my mind – was that one of Jim’s most profound comments seemed to be missed by most.

Program vs. Deploy – we’ll put the management in later

This struck a particular chord with the Paremus engineering team, as our dynamic target-state provisioning sub-system for Infiniflow had been released earlier that very year, leveraging those very ideas!

It’s now 2009, and the industry has defined the relevant standards for distributed OSGi-based frameworks. Now the industry is wondering how to develop, deploy and manage runtimes that consist of thousands of dynamically deployed bundles running on a cloud of compute resources.

No problem! Paremus have been doing that for half a decade 😉

Conclusions? Nothing profound. Perhaps the slow pace of the IT industry? But isn’t the Internet a great communal memory!

Impaled on the Horns of an OPEX Dilemma


The finance industry are clearly having a tough time at present. As losses mount, CEOs and CIOs are increasingly scrutinizing the costs of doing business. One interesting metric: the cost of running a single production application – $1,000,000 per annum! Take the many thousands of applications typically used in a large finance house, and operational costs rapidly exceed the billion-dollar-per-annum mark.
Why is this?

Surely, over the last few years the Finance industry has increasingly driven down the price of enterprise software, to the point that an application server may now be charged at a few hundred dollars per node. Likewise, basic networking, server and storage are cheaper than at any time in the past.

The problem isn’t the cost of the raw materials, rather the fact that these organizations have built increasingly complex environments which must be maintained by an army of IT staff.

I’m probably not far off the mark in suggesting that 80% of the annual cost of each application relates to the support and development staff required to maintain it and keep it running.

And the choices available to the CxO?
  • Use Cheaper Resource: Ship operations out to China, India or Mexico! While attractive on paper as a quick fix, there is a catch. Wages tend to normalize as time progresses, with the cost of an initially cost-effective workforce rising to the point that the market will bear. Indeed, it has a name: ‘free market dynamics’. Hence, within a reasonable timeframe (~5 yrs) the cost advantage will have evaporated, and meanwhile the company is still left with a complex, manually intensive operational environment. Traditional third-party outsourcing, of which several failed examples exist from the late 1999 / early 2000 period, falls into this category. This approach does nothing to address the root cause of the spiraling operational costs – complexity! In short, a strategy guaranteed to fail in the medium / long term.
  • Reduce the Number of Applications: If cost relates to the number of applications, simply forcing down the number of applications in use will initially reduce OPEX. Whilst a reasonable strategy for some, the financial services industry is highly adaptive and constantly needs to evolve its applications and services. Hence, a ‘no new applications’ policy merely results in bolt-ons of additional functionality to existing systems, increasing the complexity and associated costs of the remaining applications.
  • Use Technology to Reduce Headcount: The IT industry have collectively failed to provide real solutions to this! Despite a flood of Automated Run-Book, Monitoring, Configuration Management, Package / OS Deployment and Virtualization Management products, humans are still very much ‘in the loop’, directly concerned with all aspects of every software service in the runtime environment. Environments are more complex than ever!

So what is stopping the IT industry developing the right products? Simply, the industry continues to fail to realize that automation of the existing is not sufficient. A fundamental, radical change in perspective with respect to how distributed systems are built and maintained is needed to address the Complexity Crisis organizations now face. Funnily enough, this is what Infiniflow has been developed to address.

And the users of the technology?

  • The fear of change!
  • The linear relationship between status and managed headcount.
  • And most importantly, a severe shortage of sufficiently talented engineers and architects who have the vision and determination to drive such changes through their organizations (Paremus refers to these rather special individuals as Samurai).

So if you are a frustrated Samurai, contact us at Paremus; we can introduce you to many like-minded individuals 🙂

Meanwhile, if you are a CEO / CIO with the desire to tackle the root causes of your organization’s IT complexity, why not drop me an e-mail and we’ll explain how we might be able to help; specifically, you may find the dramatic impact that Infiniflow has on operational cost of great interest.

Adapt and Evolve 2007-07-06 09:02:00

We live in exciting times!

Java EE 6 is announced. The Interface21 folks think it’s finally “right”, and the daggers are drawn as the old JBoss boys feel the need to defend their position as the popular open source Java EE appserver vendor (see TheServerSide).

Extensibility and Profiling are a couple of key features in Java EE 6.

Mmmm. So I can take my very bloated Java EE infrastructure and reduce it to merely bloated.

I’m almost sold on the idea 😉

But hang on – what about OSGi and SCA? Can I not already dynamically build very sophisticated distributed composite applications that adapt and evolve with their resource landscapes? Such distributed application services load and run only what is required at each specific point in time, and are self-managing, self-configuring and self-healing.

Well actually, yes I can – and Java EE – in any form – doesn’t figure!

On a finishing note – a nice article (concerning Web Services) whose underlying message is, I’d suggest, equally applicable to the monolithic Java EE vs. composite OSGi / SCA debate.

Adapt and Evolve 2007-04-11 09:47:00

A new white paper concerning the synergies between OSGi, SCA and Spring can be found on the OSOA site; well worth reading for those who want an introduction to this field of activity.

The white paper concludes that “SCA, OSGi and Spring together provide powerful capabilities for building service implementations from simple sets of simple Java Beans using a few simple API’s”.

The one interesting omission is any real discussion of the challenges of building distributed systems with OSGi, SCA and Spring. Whilst the white paper explains the virtues of dynamic dependency resolution, it does so only within the context of a static resource landscape, and so fails to acknowledge the additional challenges presented by Peter Deutsch’s 8 Fallacies of Distributed Computing.

For those interested in dynamic distributed systems based on OSGi and SCA that support Spring, check out the Newton project. Newton is built from the ground up to be a robust distributed runtime environment, using OSGi, SCA and Jini as foundation technologies and providing a ‘robust’ – in the true, non-marketing sense of the word – enterprise runtime platform for Java POJO-based applications, including Spring.

A final thought – there still seems to be a real lack of understanding within the industry w.r.t. the fundamental relationship between agility, distributed systems, complexity and OPEX (operational expenditure). A subject I suspect I’ll post more on, once I’ve caught up with my day job.