A Little Early

Whilst recently writing up a white paper, I idly spent some time looking through my usual archive – the Internet (anything to avoid writing). :-/

When did we (Paremus) first announce distributed OSGi again? Answer: not 2009, as one might believe if you listened to all the IT vendor noise about RFC 119, but on December 16th, 2005.

OK – we were a little early 🙂

This press release even had a quote from Jon Bostrom. Jon had, seven years earlier in 1998, actually visited Salomon Brothers UK to deliver a Jini training course to what turned out to be a proto-Paremus team.

This morning I was alerted to a blog concerning Jini and OSGi, which I duly half-read, then responded to. I then realized that the blogger had actually referenced a short 5-minute talk I gave at the Brussels JCM 10 Jini event in September 2006. As the message from that presentation had been ignored by the community since then, I was somewhat surprised / pleased to see it referenced.

My message at the time was simple and quite unpopular…

To survive and flourish, Jini must embrace OSGi

The other thing that sprang to mind was Jim Waldo’s presentation at the same conference. Unlike mine, this was widely reported with great enthusiasm; I really don’t mind, Jim :)

The interesting thing was – at least to my mind – that one of Jim’s most profound comments seemed to be missed by most.

Program vs. Deploy – we’ll put the management in later

This resonated particularly with the Paremus engineering team, as our dynamic target state provisioning sub-system for Infiniflow had been released earlier that very year – leveraging those very ideas!

It’s now 2009 – the industry has defined the relevant standards for distributed OSGi-based frameworks. Now the industry is wondering how to develop, deploy and manage runtimes that consist of thousands of dynamically deployed bundles running on a Cloud of compute resources.

No problem! Paremus have been doing that for half a decade 😉

Conclusions? Nothing profound. Perhaps the slow pace of the IT industry? But isn’t the Internet a great communal memory!

Teleport or Telegraph?

If this blog entry were chiseled in stone, no currently existing technology would be capable of near instantaneous transportation of that stone. Perhaps quantum entanglement might one day provide the basis for Teleportation – yet much serious physics and engineering would be required to make this more than Science Fiction.

Yet the same information – in a binary format (Morse) – could have been transmitted across a continent at near the speed of light over a hundred years ago.

Both approaches achieve the same result – transmission of information.

Sometimes identifying the correct approach, the correct perspective, is far more important than the amount of engineering effort you throw at a problem.

Which brings me to the following article.

So VMware need 2,000 people to build a resource orchestration layer? Certainly, trying to manage a resource landscape so that it appears unchanging to a population of legacy applications is extremely difficult!

The alternative?

Take a different perspective.

Build dynamic / agile applications that adapt to the changing characteristics of their operational environments.
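
To make that concrete, here is a minimal OSGi-flavoured sketch of the idea – an application that binds and unbinds a dependency as it comes and goes, rather than assuming the environment is static. The PricingService interface is purely hypothetical and this is illustrative only; it is not code from any Paremus product.

```java
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;
import org.osgi.util.tracker.ServiceTrackerCustomizer;

public class AdaptiveClient implements ServiceTrackerCustomizer {

    // Hypothetical dependency, purely for illustration
    public interface PricingService {
        double price(String symbol);
    }

    private final BundleContext context;
    private ServiceTracker tracker;
    private volatile PricingService pricing;   // may legitimately be absent at any moment

    public AdaptiveClient(BundleContext context) {
        this.context = context;
    }

    /** Start reacting to the dependency appearing and disappearing, rather than assuming it is static. */
    public void open() {
        tracker = new ServiceTracker(context, PricingService.class.getName(), this);
        tracker.open();
    }

    public Object addingService(ServiceReference ref) {
        pricing = (PricingService) context.getService(ref);   // bind to the newly arrived service
        return pricing;
    }

    public void modifiedService(ServiceReference ref, Object service) {
        // service properties changed; nothing to do in this simple sketch
    }

    public void removedService(ServiceReference ref, Object service) {
        pricing = null;                 // degrade gracefully rather than fall over
        context.ungetService(ref);
    }

    /** Callers always get an answer, even while the dependency is absent. */
    public double priceOrDefault(String symbol, double fallback) {
        PricingService p = pricing;
        return p != null ? p.price(symbol) : fallback;
    }
}
```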

Global Financial Meltdown and Google Mail Service Outage

Whilst the current global economic meltdown and the recent <a href="http://news.bbc.co.uk/1/hi/technology/7907583.stm">Google e-mail</a> service outage may seem to be entirely different types of event, there is some degree of commonality. Both represent catastrophic cascading failure within large complex distributed systems.

The analogy unfortunately finishes there.

Google were up and running again in a couple of hours, whilst the world’s economies may take a decade to recover. However, the central theme – how to avoid systemic catastrophic failure within complex systems – remains of deep concern to system architects and economists alike.

Where does that leave “Cloud Computing”? Quite simply, don’t believe the hype. Public Cloud infrastructures will continue to fail – hopefully infrequently, but almost certainly in a spectacular manner. The next generation of Public Cloud will need to be built upon a more modular resource landscape (swarms of geographically dispersed, meshed data centre nodes) – with a suitably advanced distributed and partitionable Cloud Operating System.

Unfortunately the same is true of the current generation of Grid Provisioning and Virtualization Management software solutions increasingly used by large corporations. Use of this technology will end in tears for a number of large IT departments. Too much visible complexity, too little automation. Like the economic meltdown, these solutions fail to account for the outlier risks which cause systemic failure within complex systems.

The answer? Well, it’s not a programming language (sorry, Erlang!), nor a specific piece of middleware, nor a specific replication technology, nor classic clustering.


To start the journey, one must first realize that…


Agility and Robustness are simply two faces of the same coin.

Forget Cloud – OSGi is the new Cool Thing!

Or so an Industry Analyst recently informed me.

Yet the flurry of Twittering & blogging concerning the distributed OSGi section of the new <a href="http://www.osgi.org/download/osgi-4.2-early-draft.pdf">OSGi 4.2</a> specification is certainly interesting. Is OSGi approaching some sort of enterprise adoption tipping point? This, along with other commercial indications, implies it is likely.

This is good news. OSGi deserves to be wildly successful; it is one of the key enablers for the next generation of enterprise systems.

Yet a danger lurks in the shadows.

The use of OSGi does not in itself guarantee any sort of coherent architecture, nor is it capable of addressing the current complexity crisis within the enterprise. OSGi is simply a tool – and in the wrong hands OSGi runtime systems will seem orders of magnitude more complex than the systems they replaced. Meanwhile, the distributed OSGi section of the 4.2 specification is simply an acknowledgment that “things” exist outside the local JVM – no more, no less.

Distributed OSGi has little to say about how to address <a href="http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing">Deutsch’s 8 Fallacies</a> (actually, if you follow the link you’ll notice that Wikipedia now has a 9th 🙂). How these distributed entities discover each other, how they interact with each other, and which protocols are used is left as an exercise for the software vendor. This is not a criticism of the standard – this is a good thing. OSGi doesn’t constrain distributed architectures.
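
To illustrate just how small that acknowledgment is, here is a hedged sketch of what exporting a service for remote consumption looks like – an ordinary service registration decorated with an export property, with discovery and wire protocol left entirely to whichever distribution provider is installed. QuoteService and its implementation are hypothetical, and the property name shown is the one used in the Remote Services chapter of the 4.2 specification (the early draft used slightly different names).

```java
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class RemoteQuoteActivator implements BundleActivator {

    // Hypothetical service contract and implementation, purely for illustration
    public interface QuoteService {
        double quote(String symbol);
    }

    static class QuoteServiceImpl implements QuoteService {
        public double quote(String symbol) {
            return 42.0;   // placeholder value
        }
    }

    public void start(BundleContext context) {
        Hashtable props = new Hashtable();
        // Hint to the (vendor-supplied) distribution provider that this service should be
        // exposed remotely; discovery mechanism and wire protocol are left to that provider.
        props.put("service.exported.interfaces", "*");
        context.registerService(QuoteService.class.getName(), new QuoteServiceImpl(), props);
    }

    public void stop(BundleContext context) {
        // services registered by this bundle are unregistered automatically when it stops
    }
}
```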

Yet this allows business as usual for the Software Vendors. And so we see the same old tired SOA rhetoric.

“ESBs & WS-*, would you like OSGi with that, sir?”

But joking aside – the real danger is that OSGi’s fate may become hopelessly entangled with the current disillusionment surrounding the web of vendor <a href="http://apsblog.burtongroup.com/2009/01/soa-is-dead-long-live-services.html">SOA Market-ectures</a>.

Paremus have always argued that OSGi deserves to be complemented by a network SOA framework that is as adaptable and dynamic as OSGi is locally within the JVM. A Self-Similar Architecture!

It was for this reason that Paremus fused OSGi (the new Cool technology) with Jini (was Jini ever Cool?) within the <a href="http://newton.codecauldron.org/site/index.html">Newton project</a> in 2006 – a solution which, in its commercial <a href="http://www.paremus.com/products/products.html">Infiniflow</a> guise, has been in customer production for over two years.

As for Cloud Computing – that story has only just started 😉

If only Newton were Apache….

Whilst I’m on a roll…

I continue to get asked by all sorts of parties, “Why is Newton using a GPL (actually AGPL) open source license? Why isn’t it Apache?” Less frequently, the same question comes in another guise – “Why did Paremus set up codeCauldron rather than join Apache or Eclipse?”

This question usually emanates from one of three sources:

  • Individual Developer: Usually very excited about Newton and its capabilities; typically a Samurai (Paremus like Samurai!). The conversation goes – “If only Newton were Apache, I could deploy it in production without sign-off”. The Samurai sees the value in the product, sees the vision – and believes (but cannot ensure) that the organization will do the right thing; the right thing being to pay for support and consultancy services from the company that developed the product. Unfortunately, I’ve seen exactly the opposite behavior from a number of organizations! Many VCs believe, without question, in the diffusion model; i.e. “give it away and the revenue will roll in” – yes, those very words have been used. My response? I continue to watch SpringSource and MuleSource with much interest! But I predict that the VCs in question are in for a shock!
  • The Small SI: Usually wants Newton capabilities upon which to build business-specific services – but does not want to pay for the privilege. Newton is unique in its capabilities at present – so either the SI must make their own derivative code GPL, or develop the equivalent of Newton’s capabilities themselves, or enter a commercial relationship with Paremus. If only Newton were Apache!
  • Tier 1 Technology Vendors: Complain – “We (who shall remain nameless) cannot officially look at Newton code because it’s GPL.” The implication being: we are not interested in a commercial relationship with you; rather, we want to see what you Paremus folks have, try to guess where you are going, and then do it ourselves. If only Newton were Apache!

So whilst capable of generating a large footprint, the Apache license model is, I believe, a significant barrier for small innovative companies wanting to build a financially successful business, as:

  • It’s easy to give something away. Trying to charge for usage a priori – much more difficult! Again, I continue to watch SpringSource and MuleSource with much interest!
  • The giants of the software industry, after the Linux/JBoss experience, have become quite effective at controlling open source communities and neutralizing potential threats to the status quo; just my paranoid observations.

Perhaps Microsoft was correct all along?

That said, companies with closed source / proprietary software products seem to make the same mistake. The market is tough, developers opt for “free open source” solutions, our ROI isn’t obvious? So give away the product based on some criteria – to customers with revenue below a certain level, or with limited functionality / scale in the free product. Later – when the customer exceeds this boundary – we have them by the balls! (cue evil laugh). A viable long-term business strategy?

Again, I have my doubts.

Impaled on the Horns of an OPEX Dilemma

The finance industry are clearly having a tough time at present. As losses mount, CEOs & CIOs are increasingly scrutinizing the costs of doing business. One interesting metric: the cost of running a single production application – $1,000,000 per annum! Take the many thousands of applications typically used in a large finance house, and operational costs rapidly exceed the billion-dollar-per-annum mark.
Why is this?

Surely, over the last few years the Finance industry has increasingly driven down the price of enterprise software, to the point that an application server may now be charged at a few hundred dollars per node. Likewise, basic networking, servers and storage are cheaper than at any time in the past.

The problem isn’t the cost of the raw materials, rather the fact that these organizations have built increasingly complex environments which must be maintained by an army of IT staff.

I’m probably not far off the mark in suggesting that 80% of the annual cost of each application relates to the support and development staff required to maintain it and keep it running.

And the choices available to the CxO?
  • Use Cheaper Resource: Ship operations out to China, India or Mexico! While on paper attractive as a quick fix, there is a catch. Wages tend to normalize as time progresses, with the cost of an initially cost-effective workforce rising to the point that the market will bear. Indeed, it has a name: “Free Market Dynamics”. Hence, within a reasonable timeframe (~5 yrs) the cost advantage will have evaporated; meanwhile the company is still left with a complex, manually intensive operational environment. Traditional third-party outsourcing – of which several failed examples exist from the late 1999 / early 2000 period – falls into this category. This approach does nothing to address the root cause of the spiraling operational costs – complexity! In short, a strategy guaranteed to fail in the medium / long term.
  • Reduce the Number of Applications: If the cost relates to the number of applications, simply forcing down the number of applications in use will initially reduce OPEX costs. Whilst a reasonable strategy for some, the Financial Services industry is highly adaptive and constantly needs to evolve its applications and services. Hence, a “no new applications” policy merely results in bolt-ons of additional functionality to existing systems – increasing the complexity and associated costs of the remaining applications.
  • Use Technology to Reduce Headcount: The IT industry have collectively failed to provide real solutions to this! Despite a flood of Automated Run-Book, Monitoring, Configuration Management, Package / OS Deployment and Virtualization Management products, humans are still very much “in the loop”; directly concerned with all aspects of every software service in the runtime environment. Environments are more complex than ever!

So what is stopping the IT industry from developing the right products? Simply, the industry continues to fail to realize that automation of the existing is not sufficient. A fundamental / radical change in perspective with respect to how distributed systems are built and maintained is needed to address the Complexity Crisis organizations now face. Funnily enough, this is what Infiniflow has been developed to address.

And the users of the technology?

  • The fear of change!
  • The linear relationship between status and managed headcount.
  • And most importantly, a severe shortage of sufficiently talented engineers and architects who have the vision and determination to drive such changes through their organizations (Paremus refers to these rather special individuals as Samurai).

So if you are a frustrated Samurai, contact us at Paremus – we can introduce you to many like-minded individuals 🙂

Meanwhile, if you are a CEO / CIO with the desire to tackle the root causes of your organization’s IT complexity – why not drop me an e-mail, and we’ll explain how we might be able to help; specifically, you may find the dramatic impact that Infiniflow has on operational cost of great interest.

Henry Ford and Software Assembly

Having used the Henry Ford analogy on numerous occasions, it was interesting to read a recent JDJ article by Eric Newcomer. The Henry Ford analogy to software goes something like this (quoting Eric)…

“The application of the Ford analogy to software is that if you can standardize application programming APIs and communications protocols, you can meet requirements for application portability and interoperability. If all applications could work on any operating system, and easily share data with all other applications, IT costs (which are still mainly labor) would be significantly reduced and mass production achievable.”

Eric suggests that despite the software industry having attempted the pursuit of software re-usability, these activities have failed. Whilst the Web Services initiative has, to some degree, increased interoperability, it has failed to deliver code re-use. Eric concludes that the whole analogy is wrong, and that rather than trying to achieve code re-use, the industry needs to focus on sophisticated tools to import code, check it for conformance and ready it for deployment within the context of a particular production environment.

This article triggered a number of thoughts:

  • Did the industry seriously expect WS-* to usher in a new era of code re-use? Surely Web Services are a way to achieve loose coupling between existing – and so, by definition, stove-piped – monolithic applications? I guess the answer here partly depends on the granularity of re-use intended.
  • Perhaps JEE should have fared better? Generic or re-usable business logic that could be deployed to a general-purpose application server seems like just the thing! However, expensive bloated JEE runtimes, and the associated complexity and restrictions, prompted the developer migration to Spring.

Do these experiences really point to a fundamental issue with the idea of code re-use, or are they an indication that the standards developed by the IT industry were simply not up to the job?

If the latter, then what is actually needed? Clearly:

  • It must be significantly simpler for developers to re-use existing code relative to the effort required to cut new code for the task in hand – thus implying:
  1. The ability to rapidly search for existing components with the desired characteristics (a small sketch of this idea follows this list).
  2. The ability to rapidly access and include the desired components into new composite applications.
  3. Component dependency management must be robust and intuitive both during the development cycle and during the life-time of the application in production.
  • The runtime environment must be sufficiently flexible and simple that it offers little or no resistance to developers and their use of composite applications.
  • In addition to the runtime environment insulating applications from resource failure, and providing horizontal scale, the runtime must also track all components that are in use, and the context (the composite system) in which they are used.
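
As promised above, a small sketch of the first point. Within a single OSGi framework, the closest runtime analogue of “searching for existing components with the desired characteristics” is an LDAP-style filter over the service registry; the MarketDataFeed contract and the property names used here are entirely hypothetical.

```java
import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;

public class ComponentFinder {

    // Hypothetical component contract, purely for illustration
    public interface MarketDataFeed {
        double lastPrice(String symbol);
    }

    /** Find a feed advertising the characteristics we need, if one is currently available. */
    public static MarketDataFeed findFeed(BundleContext context) throws InvalidSyntaxException {
        // LDAP-style filter over the properties advertised by each service
        String filter = "(&(region=EMEA)(quality=production))";
        ServiceReference[] refs =
                context.getServiceReferences(MarketDataFeed.class.getName(), filter);
        if (refs == null || refs.length == 0) {
            return null;   // nothing suitable is available right now
        }
        return (MarketDataFeed) context.getService(refs[0]);
    }
}
```
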
I’d argue that, unlike previous IT attempts, current industry initiatives are clearly moving in the right direction:

  • The OSGi service platform gives us a vendor-neutral industry standard for fine-grained component deployment and life-cycle management (see the sketch after this list). Several excellent OSGi open source projects are available, namely Knopflerfish, Apache Felix and Eclipse Equinox.
  • Service Component Architecture (SCA) provides a vendor-neutral industry standard for service composition.
  • Next generation runtime environments like Infiniflow (itself built from the ground up using OSGi and SCA) replace static stove-piped Grids, Application Servers and ESBs with cohesive, distributed, adaptive & dynamic runtime environments.
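
And the sketch promised in the first bullet: at the API level, “fine-grained component deployment and life-cycle management” boils down to primitives like those below – install, start, stop, uninstall – upon which runtimes such as those listed above layer much richer provisioning. The bundle location is, of course, a placeholder.

```java
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

public class Deployer {

    /** Install and start a component; the returned handle drives all later life-cycle operations. */
    public static Bundle deploy(BundleContext context, String location) throws BundleException {
        Bundle bundle = context.installBundle(location);   // e.g. "file:/bundles/pricing-1.0.0.jar"
        bundle.start();                                     // INSTALLED/RESOLVED -> ACTIVE
        return bundle;
    }

    /** Cleanly retire the component; any services it registered are withdrawn automatically. */
    public static void undeploy(Bundle bundle) throws BundleException {
        bundle.stop();
        bundle.uninstall();
    }
}
```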

But are these trends sufficient to usher in the new era of code re-use?

Possibly – possibly not.
Rather than viewing code re-use simply in terms of “find – compose – deploy” activities, we perhaps need one more trigger: the development framework itself should implicitly support the concept of code re-use! This message was convincingly delivered by Rickard Oberg in his presentation concerning the qi4j project at this year’s JFokus conference.

But what would be the impact if these trends succeed? Will the majority of organizations build their applications from a common set of tried and tested shrink-wrapped components? To what extent will third-party components be common across organizations, or in-house developed components be common across systems within organizations?

The result will almost certainly be adaptive radiation: an explosion in re-usable software components from external software companies and internal development groups. As with any such population, a power-law can be expected in terms of use, and so re-use: a few components being used by the vast majority of systems, whilst many components occupy unique niches, perhaps adapted or built to address the specific needs of a single specialist application in a single organization.

Going back to the Henry Ford analogy: whilst standardization of car components enabled the move to mass production, this was not, at least ultimately, at the expense of diversity. Indeed, the process of component standardization, whilst initially concerned with the production of Ford Model Ts (black only), resulted in cars available for every lifestyle, for every budget and in any colour!

Adapt and Evolve

LiquidFusion – Any Takers?

 
Just after I found out about Sun’s purchase of MySQL, the news about Oracle’s acquisition of BEA filtered through.
Can this be anything other than consolidation within an aging market sector? An indication that the “one size fits all” monolithic messaging middleware / application server era is in its twilight years?
Perhaps OSGi and SCA will, in due course, be seen as key technology enablers allowing the shift away from costly monolithic middleware?