EclipseCon 2011

The recent EclipseCon 2011 was the 4th consecutive EclipseCon conference in Santa Clara that we at Paremus have attended, and from my perspective it was the most exciting yet. One need look no further than Peter Kriens’ Introduction to OSGi session to realise that the Eclipse community’s interest in OSGi continues its rapid growth.

Despite the thundering music (Thus Spake Zarathustra) from the CDO folks next door, which some assumed must be associated with my presentation, my talk on Cloud & OSGi was well attended and seemed to be well received, at least by the individuals who subsequently approached me to discuss this area in some depth.

The OSGi Alliance BoF was well attended with some interesting updates on the 4.3 release and ongoing work in the EEG group; the session concluded on an amusing note with some evil OSGi puzzles concocted by Peter Kriens and BJ Hargrave.

Whilst it was a shame that Neil Bartlett wasn’t able to attend this year, it was great to see Peter Kriens and David Savage rolling up their sleeves (metaphorically speaking) and walking interested parties through BNDtools and SIGIL tooling capabilities, explaining how these features will be combined in the very near future to create a powerful OSGi tooling solution.

For those of you that are interested, copies of the Paremus presentations (including the screen casts) will be posted here in the near future.


OSGi? With Nimble? Yes Please!

You’ll need more than logic to persuade people of your case

For believers in rationality, the modern world is often a frustrating and bewildering place.

New Scientist – 10 November 2010

Nevertheless, we keep trying.

Me.

Yesterday Paremus jointly announced with MakeWave the ‘Nimble Distribution‘. I’d like to take this opportunity to thank both the Paremus and MakeWave teams for all the effort and late nights that have gone into making this happen.

Paremus will continue to actively develop Nimble capabilities throughout 2011, with remote services being one of the areas that will receive ongoing attention. To track these developments, consider signing up to the Nimble Forum, which will be on-air over the Christmas holidays. All feedback and suggestions are most welcome, so don’t be shy.

However, perhaps of greater importance is the commercial aspect of the Nimble announcement.

For those organisations that understand:

  • The medium term efficiencies and transformative business value gained from modularisation and dynamic system assembly.
  • Their business requirements cannot be met by a WAR file deployed to Tomcat (Sigh).
  • The necessity of strong industry based standards (OSGi) shepherded by a strong democratic standards body (OSGi Alliance).
  • The value of high quality product ready implementations and high quality support.

We hope that the Paremus / MakeWave announcement provides a compelling proposition: a high quality, elegant, agile and operationally simple OSGi runtime, bundled with high quality commercial support, and tailored for the most demanding of business requirements and environments.

Finally, complementing our ongoing Nimble & Service Fabric activities; Paremus will be working closely with Neil Bartlett and other BNDtools contributors through 2011 to ensure that OSGi has the highest quality tooling support possible.

In the meantime…

Seasonal Greetings & a Nimble 2011 to you all 😉

Richard

OSGi – The Business Drivers

Software modularization should not be considered in isolation, but rather seen within the context of several inter-related trends that have occurred over the last decade. Indeed, software modularity is merely the latest visible facet of a much larger and more fundamental technology shift, with an impact at least equivalent to the move from mainframes to client-server computing in the 1980s.

These related trends include:

  • Service Oriented Architecture (SOA) – Enabling previously rigidly coupled business systems with proprietary protocols to be expressed as “services” which can be accessed via common protocols. However, the business systems themselves remained opaque and monolithic.
  • Cloud Computing – Decoupling applications from the underlying compute resources upon which they run, allowing more efficient resource utilization and scaling.
  • Software Modularization – Most recently, replacing the opaque monolithic applications and stove-piped middleware with dynamically assembled alternatives composed from re-usable software components.

As commented on in “The Rise of the Stackless Stack” (see http://www.redmonk.com/jgovernor/2008/02/05/osgi-and-the-rise-of-the-stackless-stack-just-in-time/), these trends collectively shift the industry away from rigidly coupled, static, opaque environments, towards adaptive, loosely coupled systems which are dynamically assembled from well-defined software components that run across a fluid set of compute resources.


Modularity and Assembly – It’s not a new idea!

The concept of assembling a product from a set of well-defined re-usable components is not new. Indeed, its roots can be traced back to at least 250BC with Emperor Qin Shi Huang and his commissioning of the Terracotta Army (see http://en.wikipedia.org/wiki/Assembly_line). Whatever the product, the driver for modularity and subsequent assembly is to increase and maintain quality, reduce cost and increase output. The modern archetype for modularity and assembly is the automotive industry, where extensive use of standardization, modularity and assembly results in affordable automobiles.

Likewise, the computer industry already extensively uses hardware modularization in the form of standardised CPUs, memory and disk subsystems; leading to affordable computer hardware. These concepts are also well understood by software engineers. Extracts from http://en.wikipedia.org/wiki/Programming_in_the_large_and_programming_in_the_small include:

“Small programs are typified by being physically small in terms of their source code size, are easy to specify, quick to code and typically perform one task or a few very closely related tasks very well”,

“… programming in the large, coding managers place emphasis on partitioning work into modules with precisely-specified interactions…”


and

“… one goal of programming in the large involves setting up modules that will not need altering in the event of probable changes. This is achieved by designing modules so they have high cohesion and loose coupling.”

Coarse-grained modularization of business processes (via SOA) and the subsequent re-assembly (via BPEL) has been underway for some time; however, the individual applications have remained monolithic and opaque. Further progress was not possible until a strong, widely endorsed industry standard for enterprise software modularity was available. OSGi™ provides this modularization standard.

Since 1999 the OSGi Alliance (see http://www.osgi.org) has provided standards, reference implementations and guidance for modularization and assembly best practices. Recently, the OSGi Alliance has succeeded in:

  • Recruiting the majority of enterprise software vendors.
  • Encouraging those vendors to migrate their product portfolios to OSGi.
  • Producing the OSGi Enterprise standard which provides mechanisms to integrate OSGi based applications with legacy JEE & Spring.

Today OSGi is rapidly becoming a cornerstone for any successful business system re‑engineering effort.


The Dilemma

While software modularity and dynamic application assembly are inevitable industry trends, OSGi faces the usual adoption challenges: see The Innovator’s Dilemma for an exploration of this theme – http://www.businessweek.com/chapter.christensen.htm.

‘The Business’ will never ask for OSGi based systems; it’s an implementation detail. Yet in the fullness of time ‘The Business’ will complain when the internal IT organization is seen as too inefficient, too expensive and/or no longer agile enough to meet tactical, let alone strategic, business objectives.

Yet, from an IT management perspective, revisiting a complex but working enterprise software stack is an immediate cost and a quantifiable risk.[1] Something to be avoided. The alternative, a slow death, seems almost preferable; or at least somebody else’s problem. As the environment degrades, make-shift tactical solutions are implemented to ease immediate pain points caused by strategic issues. This ongoing tactical, ‘reactive’ behaviour results in an explosion in operational complexity and associated OPEX. The organisation eventually becomes paralysed by its own management’s inability to ‘grasp the nettle’ and identify and address the fundamental issues.

It is suggested that this organizational behavior parallels the well known ‘Tragedy of the Commons’ (see http://en.wikipedia.org/wiki/Tragedy_of_the_commons).

Developer attitudes and/or their working environment may also act as an impediment:

  • Just Get It Done: At each point in time, the developer is pursuing, or is forced to pursue, the shortest / lowest-effort route to the immediate deliverable, at the expense of longer term maintainability.
  • The Artisan: The software developer “knows better” and pursues his own approach in defiance of industry standards and accepted best practices.
  • The Luddite: The current opaque, undocumented spaghetti of code within the organisation ensures employment. Increased efficiency, self-describing dependencies and code re-use sound like a recipe for smaller more flexible development teams.

Each can be a powerful brake on change.

Yet history tells us that those who effectively embrace and leverage change succeed. OSGi migration is neither free, nor in itself a quick ‘silver bullet’. However, OSGi is of fundamental strategic importance to any organisation whose core business involves the processing of information.


The Benefits

Quite simply, modularity localizes the impact of change, which directly leads to increased maintainability.

  • As long as the module boundaries don’t change, one can change the functionality of the module freely, without concern for breaking the wider system; i.e. the impact of any local change is prevented from leaking into the wider system (see the sketch following this list).
  • Modules that perform a few, or a single function, are much easier to test exhaustively than an entire monolithic system.
  • Smaller pieces of the system can be independently versioned.
  • Specific knowledge is only required for the particular module being worked upon, along with its relationship to other modules in the system; other modules can be treated as “black boxes” that perform specific functions, without worrying about how they perform them.
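
To make the first point concrete, here is a minimal Java sketch (the package and class names are invented for illustration): the QuoteService interface lives in an exported API package, while its implementation lives in a private package that can be rewritten at will.

    // File: com/example/quote/api/QuoteService.java
    // Exported package – this is the module boundary that consumers see.
    package com.example.quote.api;

    public interface QuoteService {
        double quote(String symbol);
    }

    // File: com/example/quote/internal/SimpleQuoteService.java
    // Private package – free to change without breaking the wider system.
    package com.example.quote.internal;

    import com.example.quote.api.QuoteService;

    public class SimpleQuoteService implements QuoteService {
        public double quote(String symbol) {
            // Implementation detail: the pricing logic, caching strategy etc.
            // may be swapped freely; consumers compile only against the api package.
            return 42.0; // placeholder value for the sketch
        }
    }

Provided the api package remains stable, the internal package can be refactored and re-versioned independently.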

The large scale structure of the migrated application, which in all likelihood was previously unknown, is now completely defined by the dependencies between the set of versioned modules. Hence:

  • It is understood which modules are and are not required!
  • By replacing or re-wiring modules, the behavior of a composite system may be rapidly changed.

From an ongoing maintenance perspective it is now possible to re-factor individual modules, or even the overall system, to systematically drive out accidental complexity (see http://theit.org/publishing/books/prof-app/19261.cfm) and so contain, or even reverse, design rot.

While these arguments are understood, many organizations require demonstrable real world examples. The challenge is that first-mover organisations have little incentive to act as references, since doing so would surrender hard-won competitive advantage.

For several years, Paremus have been working with a number of organizations who are at various stages of OSGi migration. While Paremus cannot discuss specific details, or disclose the organizations by name, we are able to reference some common themes that demonstrate real world benefits for OSGi in general, and the Paremus Service Fabric in particular.


Resource Utilisation

When working with traditional applications, a lack of information concerning required libraries results in developers having to load every possible library into their IDE. From experience, Paremus have seen this drive memory requirements as high as 40GB per developer desktop. However, once dependencies are understood, and mechanisms are in place so that only the required components are loaded, Paremus have also seen the number of artefacts reduced by an order of magnitude with corresponding machine memory savings.

For an organization with several hundred developers the saving is considerable. The saving in reduced memory footprint in Production is of course correspondingly larger.


Developer Efficiency

Developing against a monolithic application requires the developer to work and test against the complete code-base. Yet for large applications it may not be possible to test changes in the local IDE, as compile times may be hours. As a result, developers are forced to rely upon unit and integration tests that run during the nightly build cycle. The result is that the whole bug detection and rectification cycle can take days, with an increased likelihood that some issues are not found and leak into production.

In contrast, rapid testing of OSGi modules is easily achieved in the developer IDE, and Paremus have seen this test / fix cycle reduced from days to minutes.

A modular system also lends itself well to many hands being involved in its development and maintenance. It’s not necessary to understand the whole system inside-out; each individual can independently work on small, well-defined and decoupled modules. This directly translates to increased project delivery success rates, as smaller, well-contained projects have a higher success rate than larger and more poorly constrained projects.


Definitive Dependencies

Most modern build systems allow build-time dependencies to be specified and obtained from various repositories. However, different types of build system repository specify dependencies in different formats; e.g. pom.xml, ivy.xml.

OSGi provides runtime dependency metadata in its Require-Bundle and Import-Package bundle manifest headers. This provides definitive, industry standard dependency information that can be consumed regardless of the build system you use. Hence OSGi decouples your runtime from the source build systems, thereby avoiding metadata lock-in and allowing different types of build repository to co-exist.
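
By way of illustration, a bundle’s MANIFEST.MF might carry entries such as the following (the bundle and package names are invented; the headers themselves are standard OSGi):

    Bundle-SymbolicName: com.example.pricing
    Bundle-Version: 1.2.0
    Export-Package: com.example.pricing.api;version="1.2.0"
    Import-Package: org.osgi.framework;version="[1.5,2)",
     com.example.marketdata;version="[1.0,2)"

Any OSGi-aware tool or repository can read these headers directly from the artefact itself, with no reference to the pom.xml or ivy.xml that produced it.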

In addition, OSGi supports package-level dependencies. These are finer-grained than the artefact-level dependencies supported by most build systems, and can be determined programmatically, directly from source code, using tools such as Apache Felix Sigil. This avoids the errors that occur when trying to maintain dependency data manually.

Finally, OSGi supports its own repository API: OBR. Currently a priority for the OSGi Alliance EEG working group, OBR allows searching for package-level dependencies as well as the artefact-level dependencies used by other repositories.


Production Stability, Availability & Agility

Unless you happen to own applications that never change, stability, availability and agility are closely related concerns. The more agile the business service, the easier it is to change from one well-defined state to the next. This may be to introduce new business functionality, apply a patch, or roll back to a previously good version.

The internal structure of monolithic opaque applications is poorly understood; hence upgrades are complex and high risk. This, coupled with the long developer diagnostic / fix cycle, can result in repeated production outages and instability spanning several working days. Operations typically respond by maintaining a number of isolated horizontal silos, attempting to ensure service availability by releasing new software one silo at a time. This issue isn’t just common, it’s almost universal.

In contrast, with an OSGi-based runtime like the Paremus Service Fabric, each application is self‑describing, meaning that the application structure is fully described in terms of dependencies between versioned modules. A system may be deployed in seconds. A running system may be upgraded on-the-fly (i.e. within seconds), and may be returned to a previous well-known state just as rapidly.


Operational Risk & Governance

The implication for operational risk should be immediately apparent:

  • From regulatory and business continuity perspectives, the structure of an application is precisely known at each point in time, allowing all versions of the application to be rapidly re-constituted.
  • Key structural information is no longer locked within key members of staff. In principle the organisation can employ any OSGi-literate developer / systems architect, who can rapidly navigate the structure of the organisation’s software systems.

From a governance perspective, it is now a simple task to rapidly answer the following types of question:

  • Which software license types are used within which production applications?
  • Which production applications use third party modules with an identified security vulnerability?

This information is readily available courtesy of the metadata embedded in each OSGi module and the dynamic deployment & assembly mechanisms provided by the OSGi runtime environment.
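
As a minimal sketch of the idea, the standard Bundle-License manifest header can be read from every installed bundle at runtime. (The LicenseReport class and its output format are invented for illustration; the Bundle API calls are standard OSGi.)

    import org.osgi.framework.Bundle;
    import org.osgi.framework.BundleContext;

    public class LicenseReport {
        // Print the declared licence of every bundle installed in the framework.
        public static void print(BundleContext ctx) {
            for (Bundle b : ctx.getBundles()) {
                Object licence = b.getHeaders().get("Bundle-License");
                System.out.println(b.getSymbolicName() + " " + b.getVersion()
                        + " -> " + (licence == null ? "<none declared>" : licence));
            }
        }
    }

A similar walk over each bundle’s Import-Package wiring would answer the security vulnerability question above.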


An OSGi ROI?

Anne Thomas Manes (Gartner) estimates that ongoing maintenance accounts for 92% of the total lifetime cost (TCO) of each application. Whilst hardware accounts for <10% of TCO, software maintenance accounts for ~70% of TCO, the remainder being the initial cost of developing the application (see slides 9 & 10, SOA Symposium: Berlin, October 2010).

Given this, the current fashion for virtual machine based Cloud computing seems somewhat perplexing. The deployment of traditional opaque software stacks as virtual machine images does NOTHING to address the issue of application maintainability.

Closer to home, a group of Financial Services engineers recently attempted to quantify the potential return realised by migrating their production environment to the Paremus Service Fabric. They concluded that OPEX savings of 60% were possible. While Paremus do not know the details of this analysis, it is likely that the following three contributing factors were considered.

  • The Paremus Service Fabric is a cost effective replacement for Application Servers, Compute Grids, CMDBs and provisioning tooling, and requires fewer operational resources to manage.
  • The Paremus Service Fabric, being a Private Cloud, achieves the resource utilisation and efficiency savings alluded to by many virtualisation vendors, but without the management overhead and operational risk associated with ‘virtual machine sprawl’.
  • However, given the Gartner TCO analysis, the bulk of the identified OPEX saving most likely results from the ongoing maintainability of OSGi based business applications running upon the Service Fabric runtime.

A large Financial Services organisation may have 1000+ applications, each with an average annual running cost of ~$1,000,000. At a 60% saving, this equates to a potential annual saving of $600,000,000! A significant medium term bottom line saving that surely warrants investing in a multi-year OSGi based application transformation program?


Migration Strategy

Hopefully a compelling set of arguments for adopting OSGi has been presented; but how do you actually migrate an organisation with a decade of legacy applications to a new OSGi based world?

Organisations considering this invariably start with the following questions:

  • How do we determine and then untangle dependencies in our current environment?
  • How do we move to an OSGi-centric build/test/release cycle with minimum impact on the majority of developers?
  • What level of modularity / granularity should we pursue?

In response, Paremus advise an iterative approach, the precise details depending upon each organisation’s starting point and business objectives.

Stage I


1. Assemble a small high-caliber team of engineers with appropriate OSGi / modularity skills. Ensure that this team have senior management backing and representation.

2. Set up tools to determine and monitor dependencies within the organisation’s existing code base. Provide automated reporting; remove superfluous dependencies and fix unsatisfied ones.

At this early stage, the organization may have already achieved as much as an order of magnitude simplification in the code base dependencies. This in itself will improve developer productivity by reducing compile times and help increase the success of production releases.

Stage II – Tooling & Metadata


3. Review the organisation’s standards for IDE tooling. Does the current tooling support OSGi metadata and enable simple management of exposed dependencies? Review the organisation’s current repository standards: will these support the OSGi Alliance OBR standard? If required, select new tooling.

4. Set up OSGi metadata for all of the organisation’s projects; this is to be maintained by the project developers. The metadata need not yet be used at runtime, but it can be used during the build process to monitor the progress of the modularisation effort.
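
As an illustrative sketch, using bnd (the tool underlying BNDtools), the per-project metadata can be as small as the following bnd file; the project and package names are invented:

    Bundle-SymbolicName: com.example.risk.engine
    Bundle-Version: 1.0.0
    Export-Package: com.example.risk.engine.api;version=1.0.0
    Private-Package: com.example.risk.engine.internal
    Import-Package: *

bnd generates the corresponding OSGi manifest headers at build time, so developers maintain a few lines of intent rather than a hand-written manifest.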


It should now be possible to run OSGi and standard Java variants of an application side-by-side. This allows migration to progress without creating large parallel branches in source control, which are difficult to subsequently re-merge. CAPEX savings may be realised at this point, as only the required artifacts will be loaded into the developer’s IDE, decreasing the amount of resources required and increasing testability further.

Stage III – Runtime

5. Select candidate applications for migration based upon agility and re-usability considerations:

  • A set of applications that share a high degree of functionality / code and require frequent functional updates are excellent candidates for early migration.
  • A standalone application, which seldom, if ever changes, and shares little or no functionality with other applications, is a very low priority – and may never be migrated.

6. Create working runtime bundles using existing libraries as the level of granularity.
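
At this level of granularity, an existing third-party jar can often simply be wrapped. For example, bnd provides a wrap command that generates a bundle manifest for a plain jar (the jar name here is invented, and the exact invocation varies between bnd versions):

    java -jar bnd.jar wrap legacy-pricing.jar

The generated manifest can then be reviewed, and the imports/exports tightened by hand, before the wrapped bundle is promoted to a repository.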

7. Create integration tests for use during and after migration to assert that parallel development streams do not break any modularisation efforts as the code is migrated.

8. Test OSGi version of candidate application in an OSGi runtime environment; e.g. Paremus Service Fabric.

9. Create integration tests to ensure that hidden gotchas are caught during development rather than in production; i.e. in Development and UAT fabrics.

The organization should now be in a good position to deploy OSGi based applications to production and accelerate OSGi migration. The new development, build, test, release lifecycle will enable considerable improvements in developer productivity via further reductions in the time taken to run unit and integration tests.

Stage IV – Iterate and Reward

10. For each application take an iterative approach to modularisation: break down the deliverable into smaller modules (improving modular boundaries), test, deploy, release to production. Then re-visit modularity, break down further if appropriate, test, release.

11. For each migrated application, using the dependency information now available, publish ‘composition reports’: listing the degree of re-use of in-house modules and certified third party modules, and alerting on uses of non-certified modules.

12. The modularisation process will now be running across many candidate applications in parallel, during which the core OSGi team will continue to advise each participating application group on appropriate levels of modularity and opportunities for re-use.

13. Reward application teams; not just for meeting initial delivery objectives, but also for:

  • Using agreed third party OSGi libraries, where appropriate.
  • Achieving a high degree of re-use in re-factored applications.
  • Delivering and maintaining in-house modules which are re-used in many in-house applications.

The ‘composition reports’ and associated incentives provide the development teams with powerful feedback mechanisms.

In this manner, the organisation typically starts with an isolated application, building the required skills & processes. Driven by initial successes and cost savings, adoption naturally flows from this initial seed point through the organisation.


To Conclude

Whether OSGi is destined to be the next IT ‘fashion bubble’ (once the current hysteria over ‘Cloud’ has waned), or will grow organically via initial adoption by the most sophisticated IT organizations, is unclear.

However, the software industry is a manufacturing industry; no more and no less than manufacturers of disk drives, steel or automobiles. While the raw materials are low cost, the ongoing effort required to craft and maintain flexible, high quality software is not.

Necessity will drive the software industry, and those organisations with large in-house development teams, towards modularization, dynamic application assembly and so OSGi.


[1] The author has had direct experience of this behaviour in an enterprise environment.

A backup solution that was initially designed for a handful of servers was nearing collapse. Each time a new backup server was added to address over-runs, the reduction in backup time was less than the 1/n expected, as loads could not be exactly balanced across the backup estate. The situation degraded to the point where the backups extended to a full 24 hours. The correct solution was to refresh the enterprise network and consolidate backup servers into centralized high speed silos. Yet, despite the operational risk, some management refused to sign off on what was perceived to be a high risk project. Only when no ‘last’ silver bullet could be found was sign-off achieved to progress the correct strategic solution. The situation had by then become so serious that this solution had to be implemented as rapidly as possible.

Dependencies – from Kernel to Cloud

The recent OSGi community event in London proved interesting from a couple of perspectives.


Paremus engineers are no strangers to the idea of OSGi Cloud convergence. Having realised the potential of dynamic software assembly and resource abstraction in 2004, Paremus have had the opportunity to engineer a robust & architecturally coherent solution that delivers this vision.

Demo

Arthur C. Clarke’s third law: Any sufficiently advanced technology is indistinguishable from magic.

These concepts were presented by David Savage in his OSGi community talk ‘OSGi & Private Clouds’, which included a live demonstration of the Service Fabric assembling a distributed trading system across a group of dynamically discovered compute resources.

Behind the scenes, the demonstration involved the use of the Paremus OBR resolver (Nimble) and the new Paremus RSA stack (configured to use SLP discovery and an asynchronous RMI data provider). Perhaps not magic, but very cool nonetheless 🙂
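
For readers unfamiliar with the mechanics, under the OSGi Remote Services model a service is exported simply by registering it with standard properties; the RSA stack then takes care of discovery and transport. A minimal sketch (TradeService and TradeServiceImpl are invented names):

    import java.util.Hashtable;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class Activator implements BundleActivator {
        public void start(BundleContext ctx) {
            Hashtable<String, Object> props = new Hashtable<String, Object>();
            // Standard OSGi Remote Services property: request that the RSA
            // implementation export this service over whichever provider is
            // configured (in the demo's case, asynchronous RMI with SLP discovery).
            props.put("service.exported.interfaces", "*");
            ctx.registerService(TradeService.class.getName(),
                    new TradeServiceImpl(), props);
        }

        public void stop(BundleContext ctx) {
        }
    }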

So I’d argue that Paremus have the OSGi Cloud piece of the puzzle; but what about “the Kernel”?

Well, one need only watch Jim Colson’s keynote, and the heavy referencing of the Apache Harmony JVM project (see Harmony), to see the potential for OSGi in modularising the JVM.

Whilst the demonstration, consisting of the manual customisation of a JVM, was interesting, to my mind the full potential is only realised when such customisation is dynamic and in response to business requirements. To achieve this, one must manage not only OSGi transitive dependencies, but also service dependencies and, most importantly, environmental runtime dependencies. Luckily, Paremus have these capabilities with Nimble; so perhaps in due course we’ll take a closer look at Harmony.

But then, why stop at Java?

Nimble is not just an OSGi resolver but a general dependency resolver, with extensible dependency types and deployable types. Debian is a well defined packaging standard which defines dependencies, and installation from a Debian package repository is already standard practice. The Canonical ‘Ensemble‘ project is attempting exactly this: dynamic download of interdependent Debian artefacts and subsequent service provisioning.

So, are we witnessing “The Rise of the Stackless Stack” and the end of the industry’s infatuation with hordes of virtual machine images with static pre-baked software and configurations? I think so.

That said, contrast if you will this “Stackless Stack” vision with Oracle’s big news:

“Oracle Exalogic Elastic Cloud is the world’s first and only integrated middleware machine—a combined hardware and software offering designed to revolutionize datacenter consolidation.”

Expensive big iron, crammed to the brim with unwanted middleware. The contrast really couldn’t be greater!

Boiling the Frog

One frequently hears complaints about OSGi:

  • It’s too difficult!
  • What is the immediate business benefit?

The first is simple to address. No excuses – use Nimble.

The second complaint has, in my opinion, more to do with human nature than technology. All too frequently, instant gratification is selected over long term health. In a similar way, for many organisations, at each point in time the operational pain isn’t sufficient to warrant a fundamental change in behaviour: it just slowly keeps getting worse.

Hence the “Boiling the Frog” analogy.

The venture capital community understand this only too well.

Sell them the Aspirin!

And at all costs ignore the existing Gordian knot inadvertently created from 20 years of IT quick fixes.

To construct Pieranski’s knot, you fold a circular loop of rope and tie two multiple overhand knots in it. You then pass the end loops over the entangled domains. Then you shrink the rope until it is tight. With this structure, there is not enough rope to allow the manipulations necessary to unravel it.

Hence it was a sheer delight to listen to Eric Newcomer’s (chief architect at Credit Suisse) presentation on his bank’s considered approach to SOA and software modularisation. To summarise: Architecture is important, OSGi is important! No quick fixes, but an ongoing coordinated strategy that is intended to serve the business for the next decade.

I’m sure Credit Suisse will succeed, and I know Credit Suisse are not the only organisation starting down this path.

OSGi is a fantastic enabler, but it’s not magic. It can be a bitter medicine; but with a coherent OSGi / Cloud based strategy and disciplined implementation, these technologies will transform your operational cost base and your business.

Zen and the Art of Cloud Computing

Change is inevitable. Change is constant.

Benjamin Disraeli

I used the enclosed “Cloud Computing” slide set to summarize the Paremus position with respect to Cloud at the OSGi Cloud workshop (EclipseCon 2010 – organised by Peter Kriens).

The slides attempted to communicate the following fundamentals:

  • The Inevitability of Change
  • The strong fundamental relationship between Agility and Robustness
  • The need to Simplify through Abstraction

The implications being that:

  • Clouds need to be reactive runtimes; able to dynamically assemble and maintain themselves, as well as the composite business services which run upon them.
  • Modularity through OSGi is the key enabler.

To explore these concepts further, I will be publishing a series of short blog articles using the ‘Zen and the Art of Cloud Computing’ theme; each article will be concerned with a specific idea, and how it is realized within the context of the Paremus Service Fabric.

Stay tuned….

OSGi: The Value Proposition: Part II

Spiralling OPEX – caused by epidemic levels of complexity and rigid / frail business systems – will ensure the success of enterprise OSGi!

That’s all it takes, really. Pressure and time

The Shawshank Redemption

In OSGi: The Value Proposition, I argued that modularisation is a fundamental requirement if one is to avoid long term system rot, and thereby avoid excessive on-going maintenance costs.

Kirk Knoernschild’s keynote presentation at the London JAX conference this week promises a fascinating exploration of this and related themes.

Quoting from Kirk’s latest post concerning his JAX keynote:

As a system evolves, its complexity increases unless work is done to maintain or reduce it.

and concludes…

For a long time, a central ingredient has been missing. But not for much longer, because the enterprise will get its OSGi!

Well worth attending if you are able!

Happy New Year

2009 was an interesting year for Paremus. Despite the bitter economic climate, OSGi finally captured the imagination of the software industry; with this, in conjunction with the intensifying ‘Cloud Computing‘ drum-beat, came an increased appreciation of the capabilities and benefits brought by the Paremus Service Fabric.

In the closing months of 2009, Paremus released Nimble as a stand-alone product. A state-of-the-art dependency resolution engine, Nimble and its mantra ‘Making Modularity Manageable’ struck a chord with many of you, and I’d like to thank you all for the extremely positive reception Nimble received.

For those interested in private, public and hybrid ‘Clouds’, the year also closed with an interesting series of Christmas posts (or lectures 🙂 ) by Chris Swan charting his successful attempt to create a hybrid cloud based ‘platform as a service’ by combining the Paremus Service Fabric with Amazon EC2.

So what can we expect in 2010? From a software industry perspective, be prepared for some fundamental shifts as the industry really starts to grapple with modularisation, cloud computing and, when they are used in combination, what they really mean! As usual, Kirk Knoernschild captures the moment in his latest post ‘A New Year’s Declaration’.

Whilst not wanting to give the game away, I can say that Paremus will be making a number of interesting announcements over the coming months. Those of you who are interested in OSGi should keep a close eye on Nimble via the Nimble blog. Meanwhile, those interested in low latency computing, private Cloud Computing and innovative implementations of OSGi EEG standards work should keep a watching brief on the Paremus Service Fabric.

Enough said.

Announced in 2005, the Newton project was, by several years, the industry’s first distributed SCA / OSGi runtime platform! Since then, Newton has indirectly influenced industry standards and many of our competitors’ roadmaps. However, since early 2008, our commercial Service Fabric has rapidly evolved beyond the initial concepts encapsulated within Newton. Given this, we start 2010 by announcing that Newton will be archived and the CodeCauldron open source community closed.

In due course, a number of new OSS projects will be announced on the new Paremus community site. The CodeCauldron OSS experience and these future initiatives will be the subject of subsequent posts.

In the meantime, I wish Paremus customers, partners and friends all the very best for 2010.

Richard

OSGi: The Value Proposition?

In a recent blog, Hal Hildebrand argues OSGi’s value proposition in terms of its ability to reduce long term ‘complexity‘. Hal argues that whilst it may be harder to start with OSGi, as it initially appears more complex, for large applications and large teams it is ultimately simpler because the architecture is ‘modular’. A diagram along the lines of the following is used to emphasise the point.

Complexity Over Time?

As an ex-physicist, I’m naturally interested in concepts such as ‘Complexity’, ‘Information’ and ‘Entropy’; and while I agree with Hal’s sentiments, I feel uneasy when the ‘complexity’ word is used within such broad brush general arguments. Indeed, I find myself asking: in what way is a modular system ‘simpler’? Surely a modular system exposes previously hidden internal structure, and while this is ‘necessary complexity’ (i.e. information describing the dependencies in the composite system), the system is nevertheless visibly more complex!

For those interested, the following discussion between physicists at a Perimeter Institute seminar concerning ‘information’ is amusing, illuminating and demonstrates just how difficult such concepts can be.

Before attempting to phrase my response, I visited Kirk Knoernschild’s blog – IMO one of the industry’s leading experts in modularisation – to see what he had to say on the subject.

Sure enough Kirk states the following:

As we refactor a coarse-grained and heavyweight module to something finer-grained and lighter weight, we’re faced with a set of tradeoffs. In addition to increased reusability, our understanding of the system architecture increases! We have the ability to visualize subsystems and identify the impact of change at a higher level of abstraction beyond just classes. In the example, grouping all classes into a single module may isolate change to only a single module, but understanding the impact of change is more difficult. With modules, we not only can assess the impact of change among classes, but modules, as well.

Hence, Kirk would seem to agree. As one modularises an application, complexity increases in the form of exposed structural dependencies. Note that one must be careful not to confuse this necessary complexity with accidental complexity; a subject of previous blog entries of mine – see Complexity Part I & Part II

OSGi – Preventing ‘System Rot’?

Those who have worked in a large enterprise environment will know that systems tend to ‘rot’ over time. Contributing factors are many and varied but usually include:

  • Structural knowledge is lost as key developers and architects leave the organisation.
  • Documentation is missing and/or inadequate.
  • The inability to effectively re-factor the system in response to changing business requirements.

The third issue is really a ‘derivative’ of the others: as application structure is poorly understood, accidental complexity is introduced over time as non-optimal changes are made.

Hence, rather than framing OSGi’s value proposition in terms of ‘complexity’, OSGi’s value is perhaps more apparent when framed in terms of the ‘necessary information’ required to manage and change systems over time.

Structural information loss over time for modular and non-modular Systems

Unlike a traditional system, the structure of a modular System is always defined: the structural information exposed by a correctly modularised system is precisely the necessary information (necessary complexity) required for the long term maintenance of that System.

In principle, at each point in time:

  • The components used within the System are known
  • The dependencies between these components are known
  • The impact of changing a component is understood

However, the value of this additional information is a function of the tooling available to the developer and the sophistication of the target runtime environment.

The Challenge: Simplifying while preserving Flexibility

Effectively dealing with both module and context dependencies is key to realizing OSGi’s true value in the enterprise.

To quote Kirk yet again:

Unfortunately, if modules become too lightweight and fine-grained we’re faced with the dilemma of an explosion in module and context dependencies. Modules depend on other modules and require extensive configuration to deal with context dependencies! Overall, as the number of dependencies increase, modules become more complex and difficult to use, leading us to the corollary we presented in Reuse: Is the Dream Dead:

The issue of module dependency management is well understood. Development tooling initiatives are underway to ease module dependency management during the development process; an example being the SIGIL project recently donated by Paremus to Apache Felix.

However, Kirk’s comment with respect to ‘context dependencies‘ remains mostly unheard. From a runtime perspective, vendors and early adopters currently adopt one of the following two strategies:

  • Explicit management of all components: Dependency resolution is ‘frozen in’ at development time. All required bundles, or a list of required bundles, are deployed to each runtime node in the target environment; i.e. operations are fully exposed to the structural dependencies / complexities of the application.
  • Use of an opaque deployment artifact: Dependency resolution is again ‘frozen in’ at development time. Here the application is ‘assembled’ at development time and released as a static opaque blob into the production environment. Operations interact with this release artifact, much like today’s legacy applications. While the dependencies are masked, the unit of deployment is the whole application; this decreases flexibility and, if one considers the ‘Re-use Release Equivalence Principle’, partly negates OSGi’s value proposition with respect to code re-use.

Both of these approaches fail with respect to Kirk’s ‘context dependencies’. As dependencies are ‘frozen in’ at development time, there is no ability to manage ‘context’ dependencies at runtime. Should conditions in the runtime environment, for whatever reason, require a structural change, a complete manual re-release process must be triggered. With these approaches, day to day operational management will at best remain painful.

In contrast, leveraging our Nimble resolver technology, Paremus pursue a different approach:

  • The runtime environment – a ‘Service Fabric’ – is model driven. Operations release and interact with a running Service via its model representation; this is an SCA description of the System (see the sketch following this list). Amongst other advantages, this shields operations staff from unnecessary structural information.
  • The Service Fabric dynamically assembles each System resolving all modules AND context dependencies.
  • Resolution policies may be used to control various aspects of the dynamic resolution process for each System; this providing a higher level policy based hook into runtime dependency management.
  • The release artifacts are OSGi bundles and SCA System descriptions – conforming with the ‘re-use / release equivalence principle’.
  • The inter-relationship between all OSGi bundles and all Systems within the Service Fabric may be easily deduced.
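
To give a flavour of what such a model representation looks like, here is a generic SCA composite sketch (the component and class names are invented, and this shows the generic OASIS SCA form rather than the Service Fabric’s exact model format):

    <composite xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
               name="TradingSystem">
      <component name="PricingEngine">
        <implementation.java class="com.example.pricing.PricingEngineImpl"/>
        <reference name="marketData" target="MarketDataFeed"/>
      </component>
      <component name="MarketDataFeed">
        <implementation.java class="com.example.marketdata.FeedImpl"/>
      </component>
    </composite>

Operations interact with the System at this level; the resolver works out which bundles each component needs and where they should run.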

The result is a runtime which is extremely flexible and promotes code re-use, whilst being significantly easier to manage than traditional environments. OSGi is an important element; but the use of a high level structural description, in conjunction with the model driven runtime, is also an essential part of this story.

OSGi: The Value Proposition?

The short answer really is – “it depends on how you use it”!

Without a doubt, many will naively engage with OSGi, and will unwittingly increase operational management complexity beyond any benefits achieved by application modularization; see ‘OSGi here, there and everywhere’. However, for those that implement solutions that maximize flexibility and code re-use while minimizing management overhead, OSGi’s value proposition is substantial; and the runtime used is a critical factor in realising these benefits.

How Substantial?

To date, my only benchmark is provided by an informal analysis made by a group of architects at a tier 1 Investment Bank in 2008. They estimated the potential OPEX cost saving per production application, assuming it were replaced with a Service Fabric equivalent; for the purpose of this blog one may equate Service Fabric to an adaptive, distributed OSGi runtime.

Cost savings in terms of

  • application release efficiency,
  • ongoing change management,
  • fault diagnostics and resolution,
  • efficiency savings through code re-use

were estimated. The final figure suggested a year-on-year OPEX saving of 60% per application. Somewhat surprised at the size of the estimate, I’ve challenged the group on several occasions; each time the response was that the estimates were conservative.

To turn this into some real numbers, consider the following. A tier 1 investment bank may have as many as ~1000 applications, each typically costing $1m per annum. Let’s assume that only 30% of the applications are suitable for migrating to the new world: at a 60% saving on those 300 applications, we’re still looking at a year-on-year saving of ~$180m. Migration costs are not included in this, but these are short term expenses. Likewise, neither are the cost savings realized by replacing legacy JEE Application Servers and middleware with the Service Fabric solution.

As always, ‘mileage may vary’; but nevertheless, quite a value proposition for OSGi!

How to scale without lock-in

How do you scale your Spring DM or POJO applications without development framework lock-in?

Cloud centric composite applications promise to be more disruptive, and more rewarding, than either the move to client-server architectures in the early 1990s, or web-services in the late 1990s. A successful Private Cloud / Platform as a Service (PaaS) solution will provide robust and agile foundations for an organization’s next generation of IT services.

These Cloud / PaaS runtimes will be in use for many years to come. During their lifetime they must therefore be able to host a changing ecosystem of software services, frameworks and languages.

Hence they must:

  • be able to seamlessly, and incrementally, evolve in response to changing business demands
  • at all cost, avoid locking an organization into any one specific development framework, programming language or middleware messaging product.

Want to know more? Read the new Paremus Service Fabric architecture paper which may be found here.

Cloud Computing – finally, FINALLY, someone gets it!

I’ve been really busy these last few months, so have not had the time or inclination to post. Yet after reading Simon Crosby’s recent article Whither the Venerable OS?, I felt compelled to put pen to paper – or rather, should that be fingers to keyboard.

Whilst the whole piece is a good read, the magic paragraph for me appears towards the end of Crosby’s article.

If IaaS clouds are the new server vendors, then the OS meets the server when the user runs an app in the cloud. That radically changes the business model for the OS vendor. But is the OS then simply a runtime for an application? The OS vendors would rightly quibble with that. The OS is today the locus of innovation in applications, and its rich primitives for the development and support of multi-tiered apps that span multiple servers on virtualized infrastructure is an indication of the future of the OS itself: Just as the abstraction of hardware has extended over multiple servers, so will the abstraction of the application support and runtime layers. Unlike my friends at VMware who view virtualization as the “New OS” I view the New OS as the trend toward an app isolation abstraction that is independent of hardware: the emergence of Platform as a Service.

Yes! Finally someone understands!

This is IMO exactly right, and the motivation behind the Paremus Service Fabric – a path we started down in 2004!

OK, so we were a bit ahead of the industry innovation curve.

Anyway, related commentary on the internet suggests that Simon’s article validates VMware’s acquisition of SpringSource. Well, I’d actually argue quite the opposite. Normal operating systems have been designed to run upon a fixed, unchanging resource landscape; in contrast, a “Cloud” operating system must be able to adapt, and must allow hosted applications to adjust, to a continuously churning set of IaaS resources. Quite simply, SpringSource do not have these capabilities in any shape or form.

However, I would disagree with the last point in Simon’s article. Having reviewed Microsoft’s Azure architecture, it seems to me no different from the plethora of Cloud / distributed ISV solutions. Microsoft’s Azure platform has a management / provisioning framework that fundamentally appears to be based on a Paxos-like consensus algorithm; this is no different from the variety of ISVs that are using Apache Zookeeper as a registry / repository: all connection oriented architectures, all suffering from the same old problems!

Whilst such solutions are robust in a static environment, they fail to account for the realities of complex system failures. Specifically, rather than being isolated, un-correlated events, failures in complex systems tend to be correlated and to cascade! Cloud operating systems must address this fundamental reality, and Microsoft are no further ahead than VMware or Google; indeed the race hasn’t even started yet! And the best defence against cascading failure in complex systems? Well, that would be dynamic re-assembly driven by ‘eventual’ structural and data consistency.