OSGi? With Nimble? Yes Please!

You’ll need more than logic to persuade people of your case

For believers in rationality, the modern world is often a frustrating and bewildering place.

New Scientist – 10 November 2010

Nevertheless, we keep trying.

Me.

Yesterday Paremus and MakeWave jointly announced the ‘Nimble Distribution’. I’d like to take this opportunity to thank both the Paremus and MakeWave teams for all the effort and late nights that have gone into making this happen.

Paremus will continue to actively develop Nimble capabilities throughout 2011, with remote services being one of the areas that will receive ongoing attention. To track these developments, consider signing up to the Nimble Forum, which will be on-air over the Christmas holidays. All feedback and suggestions are most welcome, so don’t be shy.

However, perhaps of greater importance is the commercial aspect of the Nimble announcement.

For those organisations that understand:

  • The medium term efficiencies and transformative business value gained from modularisation and dynamic system assembly.
  • That their business requirements cannot be met by a WAR file deployed to Tomcat (sigh).
  • The necessity of strong industry-based standards (OSGi) shepherded by a strong, democratic standards body (the OSGi Alliance).
  • The value of high quality, product-ready implementations and high quality support.

We hope that the Paremus / MakeWave announcement provides a compelling proposition: a high quality, elegant, agile and operationally simple OSGi runtime, bundled with high quality commercial support, tailored for the most demanding of business requirements and environments.

Finally, complementing our ongoing Nimble and Service Fabric activities, Paremus will be working closely with Neil Bartlett and other BNDtools contributors through 2011 to ensure that OSGi has the highest quality tooling support possible.

In the meantime…

Seasonal Greetings & a Nimble 2011 to you all 😉

Richard

All in all it’s been a busy summer

Despite the challenges posed by the global economy, 2010 is proving to be an interesting year, with a number of progressive organisations preparing for some form of OSGi migration. What is common across these organisations? From Paremus’ experience, it’s not the markets they operate in, but rather the mindsets of the in-house developers and systems architects.

This bodes well for OSGi adoption and the talented engineers that are driving it. Want to be worth your weight in platinum? Then understand the potential cost savings on offer to your organisation via a convergent private Cloud, OSGi and NoSQL strategy; effectively communicate this to your business; then deliver it!

Cloud is so much more than deploying virtual machine images.

Given these commercial indicators, it’s perhaps no surprise that ‘cloud’ rhetoric is also evolving: away from the mindless deployment of static virtual machine images, towards the dynamic assembly of distributed services. Indeed, the message above, one which Paremus have been attempting to convey since our earliest Service Fabric endeavours, received recent backing from none other than VMware’s CEO Paul Maritz and its CTO and Senior Vice President of Research and Development, Steve Herrod. In a similar vein, Canonical’s recently announced Ensemble project forgoes virtual image distribution, instead managing dependencies, deployment, and provisioning of applications on Ubuntu clouds.

Hence, in hindsight, the OSGi Cloud workshop hosted at EclipseCon earlier this year, and the follow-on Cloud RFP-133 activities, seem well timed. Whilst this is still a ‘work in progress’, the end result will hopefully be of great relevance to the many organisations planning their first generation of private Cloud.

Back home, it’s been a very busy summer for Paremus!

We recently released version 1.6 of the Service Fabric; this is an important internal milestone, as several Nimble capabilities are now fully utilised. One new feature, dynamic update and roll-back of running Systems, is nicely demonstrated in the latest fractal example.

Demonstration of dynamic System update and roll-back capabilities introduced in the Service Fabric 1.6 release


Meanwhile, if the Remote Service Administration (RSA) chapter of the OSGi Alliance Enterprise 4.2 specification caught your attention/imagination, you may be interested in the new Paremus RSA implementation, which is nearing completion. Our RSA stack will support fully pluggable discovery, topology manager and data provider components, and Paremus will be supporting some interesting options in each of these areas. Alongside this, the team have also been hard at work on release 1.0 of the SIGIL Eclipse IDE plug-in and on a suite of enhanced Nimble capabilities.
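For readers unfamiliar with RSA: remoting intent is expressed through standard service properties rather than code changes. As a purely illustrative sketch (the component and class names here are hypothetical, not from the Paremus stack), a Declarative Services component could opt a service into remote export like this:

```
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
               name="com.example.greeter">
  <implementation class="com.example.GreeterImpl"/>
  <service>
    <provide interface="com.example.api.Greeter"/>
  </service>
  <!-- Standard OSGi Remote Services property: advertise every interface
       this service provides as a candidate for remote export. -->
  <property name="service.exported.interfaces" value="*"/>
</scr:component>
```

A topology manager watches for services carrying this property and asks a distribution provider to create endpoints; which discovery, topology and data providers do that work is exactly the pluggability point described above.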

Further details on each of these will be posted over the next few weeks, so stay tuned.

Finally, those of you who are able to attend the OSGi Community Event in London next week may be interested in attending David Savage’s ‘OSGi & Private Clouds’ presentation. A number of the Paremus team will be at the presentation, so feel free to stop by and say hello!

Happy New Year

2009 was an interesting year for Paremus. Despite the bitter economic climate, OSGi finally captured the imagination of the software industry: with this, in conjunction with the intensifying ‘Cloud Computing’ drum-beat, came an increased appreciation of the capabilities and benefits brought by the Paremus Service Fabric.

In the closing months of 2009, Paremus released Nimble as a stand-alone product. Nimble, a state-of-the-art dependency resolution engine with the mantra ‘Making Modularity Manageable’, struck a chord with many of you; I’d like to thank you all for the extremely positive reception it received.

For those interested in private, public and hybrid ‘Clouds’, the year also closed with an interesting series of Christmas posts (or lectures 🙂 ) by Chris Swan, charting his successful attempt to create a hybrid cloud based ‘platform as a service’ by combining the Paremus Service Fabric with Amazon EC2.

So what can we expect in 2010? From a software industry perspective, be prepared for some fundamental shifts as the industry really starts to grapple with modularisation, cloud computing and, when used in combination, what they really mean! As usual, Kirk Knoernschild captures the moment in his latest post, ‘A New Year’s Declaration’.

Whilst not wanting to give the game away, I can say that Paremus will be making a number of interesting announcements over the coming months. Those of you who are interested in OSGi should keep a close eye on Nimble via the Nimble blog. Meanwhile, those interested in low latency computing, private Cloud Computing and innovative implementations of OSGi EEG standards work should keep a watching brief on the Paremus Service Fabric.

Enough said.

Announced in 2005, the Newton project was, by several years, the industry’s first distributed SCA / OSGi runtime platform! Since then, Newton has indirectly influenced industry standards and many of our competitors’ roadmaps. However, since early 2008 our commercial Service Fabric has rapidly evolved beyond the initial concepts encapsulated within Newton. Given this, we start 2010 by announcing that Newton will be archived and the CodeCauldron open source community closed.

In due course, a number of new OSS projects will be announced on the new Paremus community site. The CodeCauldron OSS experience and these future initiatives will be the subject of subsequent posts.

In the meantime, I wish Paremus customers, partners and friends all the very best for 2010.

Richard

OSGi: The Value Proposition?

In a recent blog, Hal Hildebrand argues OSGi’s value proposition in terms of its ability to reduce long term ‘complexity’. Hal argues that whilst it may be harder to start with OSGi, as it initially appears more complex, for large applications and large teams it is ultimately simpler because the architecture is ‘modular’. A diagram along the lines of the following is used to emphasise the point.

Complexity Over Time?


As an ex-physicist, I’m naturally interested in concepts such as ‘Complexity’, ‘Information’ and ‘Entropy’; and while I agree with Hal’s sentiments, I feel uneasy when the ‘complexity’ word is used within such broad-brush arguments. Indeed, I find myself asking: in what way is a modular system ‘simpler’? Surely a modular system exposes previously hidden internal structure, and while this is ‘necessary complexity’ (i.e. information describing the dependencies in the composite system), the system is nevertheless visibly more complex!

For those interested, the following discussion between physicists at a Perimeter Institute seminar concerning ‘information’ is amusing, illuminating and demonstrates just how difficult such concepts can be.

Before attempting to phrase my response, I visited the blog of Kirk Knoernschild (IMO one of the industry’s leading experts on modularisation) to see what he had to say on the subject.

Sure enough Kirk states the following:

As we refactor a coarse-grained and heavyweight module to something finer-grained and lighter weight, we’re faced with a set of tradeoffs. In addition to increased reusability, our understanding of the system architecture increases! We have the ability to visualize subsystems and identify the impact of change at a higher level of abstraction beyond just classes. In the example, grouping all classes into a single module may isolate change to only a single module, but understanding the impact of change is more difficult. With modules, we not only can assess the impact of change among classes, but modules, as well.

Hence, Kirk would seem to agree: as one modularises an application, complexity increases in the form of exposed structural dependencies. Note that one must be careful not to confuse this necessary complexity with accidental complexity, a subject of previous blog entries of mine (see Complexity Part I & Part II).
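To make ‘exposed structural dependencies’ concrete: in OSGi, this necessary complexity is written down in each bundle’s manifest. A hypothetical sketch (the bundle and package names are purely illustrative):

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.orders
Bundle-Version: 1.0.0
Export-Package: com.example.orders.api;version="1.0.0"
Import-Package: com.example.billing.api;version="[1.0,2.0)",
 org.osgi.framework;version="[1.5,2.0)"
```

None of this information is new; it was always implicitly present in the monolithic application. Modularisation simply makes it visible, and hence usable, by tools, runtimes and humans alike.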

OSGi – Preventing ‘System Rot’?

Those who have worked in a large enterprise environment will know that systems tend to ‘rot’ over time. Contributing factors are many and varied but usually include:

  • Structural knowledge is lost as key developers and architects leave the organisation.
  • Documentation is missing and/or inadequate.
  • The inability to effectively re-factor the system in response to changing business requirements.

The third issue is really a ‘derivative’ of the others: as application structure is poorly understood, accidental complexity is introduced over time as non-optimal changes are made.

Hence, rather than trying to frame OSGi’s value proposition in terms of ‘complexity’, OSGi’s value is perhaps more apparent when framed in terms of the ‘necessary information’ required to manage and change systems over time.

Structural information loss over time for modular and non-modular System


Unlike a traditional system, the structure of a modular System is always defined: the structural information exposed by a correctly modularised system is precisely the necessary information (necessary complexity) required for the long term maintenance of that System.

In principle, at each point in time:

  • The components used within the System are known
  • The dependencies between these components are known
  • The impact of changing a component is understood

However, the value of this additional information is a function of the tooling available to the developer and the sophistication of the target runtime environment.

The Challenge: Simplifying while preserving Flexibility

Effectively dealing with both module and context dependencies is key to realizing OSGi’s true value in the enterprise.

To quote Kirk yet again:

Unfortunately, if modules become too lightweight and fine-grained we’re faced with the dilemma of an explosion in module and context dependencies. Modules depend on other modules and require extensive configuration to deal with context dependencies! Overall, as the number of dependencies increase, modules become more complex and difficult to use, leading us to the corollary we presented in Reuse: Is the Dream Dead:

The issue of module dependency management is well understood, and development tooling initiatives are underway to ease it during the development process; an example being the SIGIL project recently donated by Paremus to Apache Felix.

However, Kirk’s comment with respect to ‘context dependencies’ remains mostly unheard. From a runtime perspective, vendors and early adopters currently adopt one of the following two strategies:

  • Explicit management of all components: Dependency resolution is ‘frozen in’ at development time. All required bundles, or a list of required bundles, are deployed to each node in the target runtime environment; i.e. operations are fully exposed to the structural dependencies / complexities of the application.
  • Use of an opaque deployment artifact: Dependency resolution is again ‘frozen in’ at development time. Here the application is ‘assembled’ at development time and released as a static, opaque blob into the production environment. Operations interact with this release artifact much like today’s legacy applications. While the dependencies are masked, the unit of deployment is the whole application; this decreases flexibility and, if one considers the ‘Reuse / Release Equivalence Principle’, partly negates OSGi’s value proposition with respect to code re-use.

Both of these approaches fail with respect to Kirk’s ‘context dependencies’. As dependencies are ‘frozen in’ at development time, there is no ability to manage context dependencies at runtime. Should conditions in the runtime environment require a structural change for whatever reason, a complete manual re-release process must be triggered. With these approaches, day to day operational management will at best remain painful.

In contrast, leveraging our Nimble resolver technology, Paremus pursue a different approach:

  • The runtime environment (a ‘Service Fabric’) is model driven. Operations release and interact with a running Service via its model representation, an SCA description of the System. Amongst other advantages, this shields operations staff from unnecessary structural information.
  • The Service Fabric dynamically assembles each System resolving all modules AND context dependencies.
  • Resolution policies may be used to control various aspects of the dynamic resolution process for each System; this providing a higher level policy based hook into runtime dependency management.
  • The release artifacts are OSGi bundles and SCA System descriptions, conforming with the ‘Reuse / Release Equivalence Principle’.
  • The inter-relationships between all OSGi bundles and all Systems within the Service Fabric may be easily deduced.

The result is a runtime which is extremely flexible and promotes code re-use, whilst being significantly easier to manage than traditional environments. OSGi is an important element, but the high level structural description, used in conjunction with the model driven runtime, is also an essential part of this story.
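By way of illustration only (the element names below follow the draft SCA OSGi implementation type, and the actual Service Fabric descriptor format may well differ), a model of a simple two-component System might look something like:

```
<composite xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
           name="OrderSystem">
  <!-- Each component maps to an OSGi bundle; the fabric resolves the
       bundle's module and context dependencies when the System runs. -->
  <component name="Orders">
    <implementation.osgi bundleSymbolicName="com.example.orders"
                         bundleVersion="1.0.0"/>
    <reference name="billing" target="Billing"/>
  </component>
  <component name="Billing">
    <implementation.osgi bundleSymbolicName="com.example.billing"
                         bundleVersion="1.0.0"/>
  </component>
</composite>
```

Operations release and interact with this description rather than with individual bundles; it is this model that the fabric keeps resolved against current runtime conditions.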

OSGi: The Value Proposition?

The short answer really is – “it depends on how you use it”!

Without a doubt, many will naively engage with OSGi and will unwittingly increase operational management complexity beyond any benefits achieved by application modularisation; see ‘OSGi here, there and everywhere’. However, for those that implement solutions that maximise flexibility and code re-use while minimising management overhead, OSGi’s value proposition is substantial; and the runtime used is a critical factor in realising these benefits.

How Substantial?

To date, my only benchmark is provided by an informal analysis made by a group of architects at a tier 1 investment bank in 2008. They estimated the potential OPEX cost saving per production application, assuming it were replaced with a Service Fabric equivalent; for the purpose of this blog, one may equate the Service Fabric to an adaptive, distributed OSGi runtime.

Cost savings were estimated in terms of:

  • application release efficiency,
  • ongoing change management,
  • fault diagnostics and resolution,
  • efficiency savings through code re-use.

The final figure suggested a year on year OPEX saving of 60% per application. Somewhat surprised at the size of the estimate, I’ve challenged the group on several occasions; each time the response was that the estimates were conservative.

To turn this into some real numbers, consider the following. A tier 1 investment bank may have as many as ~1000 applications, each typically costing $1m per annum. Let’s assume that only 30% of the applications are suitable for migrating to the new world: 300 applications × $1m × 60% still gives a year on year saving of roughly $180m. Migration costs are not included in this, but these are short term expenses. Likewise, neither are the cost savings realised by replacing legacy JEE Application Servers and middleware with the Service Fabric solution.

As always, ‘mileage may vary’; but nevertheless, quite a value proposition for OSGi!