A Little Early

Whilst recently writing up a white paper, I idly spent some time looking through my usual archive – the Internet (anything to avoid writing). :-/

When did we (Paremus) first announce distributed OSGi again? The answer: not in 2009, as one might believe from all the IT vendor noise about RFC 119, but on December 16th, 2005.

OK – we were a little early 🙂

This press release even had a quote from Jon Bostrom. Back in 1998, Jon had actually visited Salomon Brothers UK to provide a Jini training course to what turned out to be a proto-Paremus team.

This morning I was alerted to a blog concerning Jini and OSGi, which I duly half-read, then responded to. I then realized that the blogger had actually referenced a short five-minute talk I gave at the Brussels JCM 10 Jini event in September 2006. As the message from that presentation had been ignored by the community ever since, I was somewhat surprised, and pleased, to see it referenced.

My message at the time was simple and quite unpopular…

To survive and flourish Jini must embrace OSGi

The other thing that sprang to mind was Jim Waldo’s presentation at the same conference. Unlike mine, his was widely reported with great enthusiasm; I really don’t mind, Jim 🙂

The interesting thing – at least to my mind – was that one of Jim’s most profound comments seemed to be missed by most.

Program v.s. Deploy – we’ll put the management in later

This struck a particular chord with the Paremus engineering team, as our dynamic target state provisioning sub-system for Infiniflow – which leverages those very ideas – had been released earlier that very year.

It’s now 2009 – the industry has defined the relevant standards for distributed OSGi-based frameworks, and is now wondering how to develop, deploy and manage runtimes consisting of thousands of dynamically deployed bundles running on a Cloud of compute resources.

No problem! Paremus have been doing that for half a decade 😉

Conclusions? Nothing profound. Perhaps the slow pace of the IT industry? But isn’t the Internet a great communal memory!

Forget Cloud – OSGi is the new Cool Thing!

Or so an Industry Analyst recently informed me.

Yet the flurry of Twittering & Blogging concerning the distributed OSGi section of the new <a href="http://www.osgi.org/download/osgi-4.2-early-draft.pdf">OSGi 4.2</a> specification is certainly interesting. Is OSGi approaching some sort of enterprise adoption tipping point? This, along with other commercial indications, implies it is likely.

This is good news. OSGi deserves to be wildly successful: it is one of the key enablers for the next generation of enterprise systems.

Yet a danger lurks in the shadows.

The use of OSGi does not in itself guarantee any sort of coherent architecture, nor is it capable of addressing the current complexity crisis within the enterprise. OSGi is simply a tool – and in the wrong hands, OSGi runtime systems will seem orders of magnitude more complex than the systems they replaced. Meanwhile, the distributed OSGi section of the 4.2 specification is simply an acknowledgment that “things” exist outside the local JVM – no more, no less.

Distributed OSGi has little to say about how to address <a href="http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing">Deutsch’s 8 Fallacies</a> (actually, if you follow the link you’ll notice that Wikipedia now lists a 9th 🙂). How these distributed entities discover each other, how they interact with each other, and which protocols are used is left as an exercise for the software vendor. This is not a criticism of the standard – this is a good thing. OSGi doesn’t constrain distributed architectures.

Yet this allows business as usual for the Software Vendors. And so we see the same old tired SOA rhetoric.

“ESB’s & WS-*, would you like OSGi with that sir?”

But joking aside – the real danger is that OSGi’s fate may become hopelessly entangled with the current disillusionment surrounding the web of vendor <a href="http://apsblog.burtongroup.com/2009/01/soa-is-dead-long-live-services.html">SOA Market-ectures</a>.

Paremus have always argued that OSGi deserves to be complemented by a network SOA framework that is as adaptable and dynamic as OSGi is locally within the JVM. A Self-Similar Architecture!

It was for this reason that Paremus fused OSGi (the new Cool technology) with Jini (was Jini ever Cool?) within the <a href="http://newton.codecauldron.org/site/index.html">Newton project</a> in 2006. That solution, in its commercial <a href="http://www.paremus.com/products/products.html">Infiniflow</a> guise, has been in customer production for over two years.

As for Cloud Computing – that story has only just started 😉

Henry Ford and Software Assembly

Having used the Henry Ford analogy on numerous occasions, it was interesting to read a recent JDJ article by Eric Newcomer. The Henry Ford analogy to software goes something like this (quoting Eric):

“The application of the Ford analogy to software is that if you can standardize application programming APIs and communications protocols, you can meet requirements for application portability and interoperability. If all applications could work on any operating system, and easily share data with all other applications, IT costs (which are still mainly labor) would be significantly reduced and mass production achievable.”

Eric suggests that despite the software industry having attempted the pursuit of software re-usability, these activities have failed. Whilst the Web Services initiative has, to some degree, increased interoperability, it has failed to deliver code re-use. Eric concludes that the whole analogy is wrong, and that rather than trying to achieve code re-use, the industry needs to focus on sophisticated tools to import code, check it for conformance and ready it for deployment within the context of a particular production environment.

This article triggered a number of thoughts:

  • Did the industry seriously expect WS-* to usher in a new era of code re-use? Surely Web Services are a way to achieve loose coupling between existing – and so, by definition, stove-piped – monolithic applications? I guess the answer here partly depends on the granularity of re-use intended?
  • Perhaps JEE should have fared better? Generic, re-usable business logic that could be deployed to a general-purpose application server seems like just the thing! However, expensive, bloated JEE runtimes, and the associated complexity and restrictions, prompted the developer migration to Spring.

Do these experiences really point to a fundamental issue with the idea of code re-use, or are they an indication that the standards developed by the IT industry were simply not up to the job?

If the latter, then what is actually needed? Clearly:

  • It must be significantly simpler for developers to re-use existing code than to cut new code for the task in hand – thus implying:
  1. The ability to rapidly search for existing components with the desired characteristics.
  2. The ability to rapidly access and include the desired components into new composite applications.
  3. Component dependency management must be robust and intuitive both during the development cycle and during the life-time of the application in production.
  • The runtime environment must be sufficiently flexible and simple that it offers little or no resistance to developers and their use of composite applications.
  • In addition to the runtime environment insulating applications from resource failure, and providing horizontal scale, the runtime must also track all components that are in use, and the context (the composite system) in which they are used.
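
The dependency-management requirement above is precisely the problem OSGi’s bundle metadata addresses: each bundle declares versioned package imports and exports in its manifest, and the framework resolves them at install time rather than at some unpredictable point in production. A minimal sketch – the bundle and package names here are purely illustrative:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.pricing
Bundle-Version: 1.0.0
Bundle-Name: Pricing Service
Export-Package: com.example.pricing.api;version="1.0.0"
Import-Package: org.osgi.framework;version="[1.3,2.0)",
 com.example.common.util;version="[1.0,2.0)"
```

The explicit version ranges make a component’s dependencies both searchable and mechanically checkable, during development and across the lifetime of the deployed application.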

I’d argue that, unlike previous IT attempts, current industry initiatives are clearly moving in the right direction:

  • The OSGi service platform gives us a vendor-neutral industry standard for fine-grained component deployment and life-cycle management. Several excellent open source OSGi implementations are available, namely Knopflerfish, Apache Felix and Eclipse Equinox.
  • Service Component Architecture (SCA) provides a vendor neutral industry standard for service composition.
  • Next generation runtime environments like Infiniflow (itself built from the ground up using OSGi and SCA) replace static stove-piped Grids, Application Servers and ESBs with cohesive, distributed, adaptive & dynamic runtime environments.
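
To make the SCA point concrete: a composite file declaratively wires components and the references between them, rather than burying that wiring in code. The sketch below uses hypothetical component and class names, and the OSOA 1.0 namespace used by early SCA implementations:

```xml
<!-- OrderComposite wires an order component to the pricing component it depends on -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="OrderComposite">
  <component name="OrderService">
    <implementation.java class="com.example.order.OrderServiceImpl"/>
    <!-- the 'pricing' reference is satisfied by the PricingService component below -->
    <reference name="pricing" target="PricingService"/>
  </component>
  <component name="PricingService">
    <implementation.java class="com.example.pricing.PricingServiceImpl"/>
  </component>
</composite>
```

Because the composition lives in metadata, the same components can be re-wired into different composites without touching their implementations.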

But are these trends sufficient to usher in the new era of code re-use?

Possibly – possibly not.

Rather than viewing code re-use simply in terms of “find – compose – deploy” activities, we perhaps need one more trigger: the development framework itself should implicitly support the concept of code re-use! This message was convincingly delivered by Rickard Oberg in his presentation concerning the qi4j project at this year’s JFokus conference.

But what would be the impact if these trends succeed? Will the majority of organizations build their applications from a common set of tried-and-tested, shrink-wrapped components? To what extent will third-party components be common across organizations, or in-house developed components be common across systems within organizations?

The result will almost certainly be adaptive radiation; an explosion in re-usable software components from external software companies and internal development groups. As with any such population, a power-law can be expected in terms of use, and so re-use; a few components being used by the vast majority of systems, whilst many components occupy unique niches, perhaps adapted or built to address the specific needs of a single specialist application in a single organization.

Going back to the Henry Ford analogy: whilst the standardization of car components enabled the move to mass production, this was not, at least ultimately, at the expense of diversity. Indeed, the process of component standardization, whilst initially concerned with the production of Ford Model Ts (black only), resulted in cars available for every lifestyle, for every budget and in any colour!

Adapt and Evolve 2007-07-06 09:02:00

We live in exciting times!

Java EE 6 is announced. The Interface21 folks think it’s finally “right”, and the daggers are drawn as the old JBoss boys feel the need to defend their position as the popular open source JEE appserver vendor (see theserverside).

Extensibility and Profiling are a couple of key features in Java EE 6.

Mmmm. So I can take my very bloated Java EE infrastructure and reduce it to merely bloated.

I’m almost sold on the idea 😉

But hang on – what about OSGi and SCA? Can I not already dynamically build very sophisticated distributed composite applications that adapt and evolve to their resource landscapes? Distributed application services that load and run only what is required at each specific point in time; that are self-managing, self-configuring and self-healing?

Well actually, yes I can – and Java EE – in any form – doesn’t figure!

On a finishing note – a nice article (concerning Web Services) whose underlying message is, I’d suggest, equally applicable to the monolithic Java EE vs. composite OSGi / SCA debate.