Introducing the Paremus Service Fabric

This post is the first of a series of videos that will introduce and explore the capabilities of the Paremus Service Fabric: a distributed, highly modular, self-managing OSGi based platform.

This first episode demonstrates the creation of a Service Fabric in the Digital Ocean public Cloud environment. Having created a Service Fabric, we then review the Fabric’s management capabilities and deploy a simple example of a modular, microservice based application.

This series will go on to explore various unique aspects of the Paremus Service Fabric, including the Fabric’s orchestration, management, adaptation and recovery behaviours. The Fabric’s support for highly modular OSGi based applications, as well as for more traditional software artefacts via image deployment to Containers, will also be explored.

This and subsequent episodes are also available on the Paremus website.



The IT ‘fashion’ industry continues to drive us towards highly fragile environments. A fundamental change is needed and is long overdue.

The theme I continue to labour is that as IT becomes increasingly distributed and interconnected, we simply cannot continue down the same old software engineering path: see my recent posts, including ‘Reality is Wondrously Complex’, ‘Cloud the Wasted Decade’, ‘Complex Systems and Failure’ and ‘Why Modularity Matters more than Virtualisation’.

So I guess I am more predisposed than most to agree with AntiFragile‘s central message. Systems are either:

  • Fragile – their capabilities degrade with random change / environmental volatility.
  • Robust – their capabilities are immune to random change / environmental volatility.
  • AntiFragile – their capabilities improve courtesy of random change / environmental volatility.

Nassim Taleb (the author) correctly argues (IMO) that resource optimisation and prescriptive processes are hallmarks of fragile systems, fragile environments and fragile organisations. Unfortunately, most organisations’ current ‘private Cloud’ and/or ‘Virtualisation’ strategies are justified by – yes, you guessed it – resource optimisation and prescriptive, simplistic automation. There is every indication that the industry’s next ‘big’ fashion – the ‘Software Defined Data Centre’ – will compound these errors.

To deal with the unforeseen / unforeseeable, a system needs spare resource; and as the unforeseen event is – well, unforeseen – there will be no pre-determined prescriptive response.

In stark contrast AntiFragile systems incorporate a large degree of ‘optionality’; meaning the ability to exercise and re-exercise the most attractive option at each point in time.

By trying to understand why biological systems are adaptive and robust whereas software systems were / are not, Paremus started mapping such concepts to software systems in 2003. As hoped, this resulted in a set of basic principles which have continued to guide our efforts:

  • Robust software platforms must be based upon dynamic resource and service discovery, and subsequent rediscovery. Why? Things Change.
  • The platform should continually assess the runtime structures against the desired goal. How close is the current structure to this goal? How can the difference be further reduced / removed? Why? Things Change.
  • All runtime units must be loosely coupled. Why? Things change!
  • Finally, and most importantly: a platform should strive to be AntiFragile; to be able to harness new runtime ‘Optionality’. To achieve this the platform MUST be modular – it MUST be dynamically assembled from self-describing units – it MUST leverage emergence and environmental feedback.
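The second principle above – continually assessing the runtime structure against a desired goal – can be sketched as a simple reconciliation step. This is a toy illustration only; the names and data structures are hypothetical and are not Service Fabric APIs:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of a desired-state reconciliation step: the platform
// repeatedly compares the observed runtime structure against the goal and
// computes the actions that would close the gap. Things change, so the
// comparison is re-run continually rather than executed once.
public class Reconciler {

    // Components present in the goal but absent at runtime: start these.
    static Set<String> toStart(Set<String> desired, Set<String> observed) {
        Set<String> missing = new HashSet<>(desired);
        missing.removeAll(observed);
        return missing;
    }

    // Components running that are no longer part of the goal: stop these.
    static Set<String> toStop(Set<String> desired, Set<String> observed) {
        Set<String> surplus = new HashSet<>(observed);
        surplus.removeAll(desired);
        return surplus;
    }

    public static void main(String[] args) {
        Set<String> desired = Set.of("gateway", "pricing", "audit");
        Set<String> observed = Set.of("gateway", "legacy-report");

        System.out.println("start: " + Reconciler.toStart(desired, observed));
        System.out.println("stop:  " + Reconciler.toStop(desired, observed));
    }
}
```

In a real platform the loop would also re-resolve dependencies and react to environmental feedback; the point here is only that the unit of work is a difference between observed and desired state, not a prescriptive script.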

But I digress; back to the book.

Taleb rightly calls out the naivety of assuming that simple abstract mathematical models, i.e. naive reductionism, can be used to predict Complex System behaviour: this illusion helping prime both the Long-Term Capital Management meltdown in 1998 and the current – even more spectacular – banking crisis (see Fault Lines: How Hidden Fractures Still Threaten the World Economy).

Taleb also makes important observations concerning apparent causation. It is very difficult to predict which of the many adjacent possibilities may be chosen by a Complex System. An adjacent state will be ‘more fit’; but there is no guarantee that it is the ‘most fit’. However, looking backwards, a path of causality seems to exist and appears ‘obvious‘: i.e. ‘history‘ provides us with a simple narrative. Taleb also explains how history tends to be distorted (teleological interpretations) with the following example: derivatives were traded (based on intuition / ‘gut’) long before the rocket scientists got involved, yet documented history implies the inverse.

Taleb has clearly reached these insights through his own explorations, shaped by his experiences in the Financial Services industry. Yet while Taleb’s presentation is unique, the underlying themes have been previously explored in a number of different domains. In a recent post, ‘Complex Systems and Failure’, I briefly reviewed the book ‘Adapt: Why Success Always Starts with Failure’, in which the author (Tim Harford) argues for diversity and the need for speculative experimentation and failure. Meanwhile, in ‘The Future of Money’ (2001 no less!), Bernard Lietaer argues, counter to single currency doctrines, that currency diversity is essential for achieving both stability and localised wealth creation. Stuart Kauffman in ‘Investigations’ argues that Complex Systems evolve into adjacent possibilities; Kauffman suggests this behaviour explains the explosion in diversity seen in the natural biosphere and, over the last 3,000 years, the human economy. Meanwhile ‘Diversity and Complexity’ provides a quantitative exploration of why diversity is a fundamental requirement of robust, evolvable Systems. Finally, for those with a heavy theoretical bias, watch out for ‘Dynamics of Complex Systems: from Glasses to Evolution‘, which is due to be published this year.

I’m still reading AntiFragile, so I’ll reserve final judgement. However, the concepts and arguments presented are important. There are some omissions. Taleb has not (so far) addressed the fundamental dilemma that my local AntiFragile behaviour today may result in our collective Fragile behaviour tomorrow; i.e. the tragedy of the commons. To use one of Taleb’s analogies: rather than solely being concerned about winning a street fight, let’s also work on the root issues that trigger such violence?!

Taleb does seem to like the street fighting analogy. Perhaps he’s a natural candidate for speaking at FITEclub in 2013 😉

UPDATE: Completed reading this evening. Contrary to the previous comment, in the closing sections Taleb does directly address my concern, via the ‘skin in the game‘ mechanism!

A comment w.r.t. scientific disciplines: unfortunately, Physics isn’t quite the shining beacon portrayed. Fashions and employability are concerns even for theoretical physicists. If you wanted a career, you were much more likely to succeed over the last decade if you worked in String Theory! See Big Bang – or, more seriously, The Trouble with Physics and/or Not Even Wrong. Luckily adherence to ‘String Theory’ (or ‘Loop Gravity’) is unlikely to cause the next economic collapse.

To conclude +1 for AntiFragile

Reality is Wondrously Complex

From the atoms in my coffee (Mmmm, Union Roast – actually now my Harveys Elizabethan Ale 😉 – typos courtesy of the latter) to the interactions of individuals that collectively define the earth’s ecosystem and, on a smaller scale, human societies; reality is wondrously complex. To cope with this we create abstractions. The abstractions we create, and an understanding of the dependencies between these abstractions, allow us – to some degree – to make sense of reality. In some quite old posts (2008 no less!) I investigated the relationship between abstraction and complexity; see Complexity part I & Complexity part II. Being a physicist by training I tend to regress back to this world view whenever I’m allowed ;-). However, the arguments are generic and relevant to software and IT operational ‘complexity’.

Abstraction is not the same as virtualisation. Abstraction allows us to describe the essential characteristics of entities that we care about (coffee – ‘smooth’, ale – ‘malty’); without being troubled by their internal structures. Abstractions ‘simplify’. Abstractions encapsulate complexity. Virtualisation on the other hand attempts to create an alternative reality; one that is ideally as close as possible to the physical reality which it replaces and which also underpins it. As the purpose is to replicate reality, virtualisation does not encapsulate complexity.
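The distinction can be made concrete in code: an abstraction exposes only the essential characteristics we care about, while the internal structure stays hidden behind it. A toy Java illustration (all names invented for this sketch):

```java
// Toy illustration of abstraction: callers see only the characteristic
// they care about ('tasting notes'), and the internal complexity of how
// each beverage is actually produced is fully encapsulated behind it.
interface Beverage {
    String tastingNotes();
}

public class Abstractions {
    // This code depends only on the abstraction, never on internals,
    // so any implementation change behind the interface cannot leak out.
    static String describe(Beverage b) {
        return "This one is " + b.tastingNotes();
    }

    public static void main(String[] args) {
        Beverage coffee = () -> "smooth"; // internal structure irrelevant
        Beverage ale = () -> "malty";
        System.out.println(describe(coffee));
        System.out.println(describe(ale));
    }
}
```

Virtualisation offers no such boundary: a virtual machine image reproduces the full internal structure of the system it replaces, which is why it simplifies nothing.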

As many now realise, having learnt the hard way, virtualisation does not simplify.

Yet while we interpret the natural world through a self-consistent hierarchy of highly modular structural abstractions, we have been slow to adopt the structural abstractions required to address environmental complexity and the associated challenge of maintaining these environments.

Necessity is the Mother of Invention

The economic realities we face in 2013 will, I suggest, drive some long overdue changes. Fixation with virtualisation, and/or the latest and greatest ‘Cloud’ stack, will wane, and organisations will regain some perspective. The ability to virtualise resource will remain a useful tool in the arsenal of enterprise technologies; but don’t expect too much of it. Virtualisation has failed to deliver the simplicity, operational savings and business agility promised by vendor marketing messages. In all honesty, virtualisation never could. The post-virtualised world is more complex and risk prone than its physical predecessor; this is the trade-off made for increasing resource utilisation while shying away from addressing the monolithic applications which are the root cause of most of our ills.

This is why OSGi is so important. An open industry specification, OSGi directly addresses the fundamental issue of structural modularity; encapsulating complexity and providing a powerful dependency management mechanism. Nor is this limited to Java: activities within the OSGi Alliance have started to address C/C++ and other languages.

The next generation of adaptive business platforms, whether classified as private or public cloud environments, will need to be modular, will need to understand modular applications, will need to manage all forms of runtime dependency – will need to be OSGi based.

Through our contributions to the OSGi Alliance, our sponsorship of the bndtools project and the OSGi Community and UK Forum, and above all the ongoing development of the Paremus Service Fabric – the industry’s first OSGi based Cloud runtime, from the developer IDE to the Cloud runtime – Paremus remains committed to delivering this vision. We hope that more of you join us in 2013.

In the meantime I’d like to wish you seasonal best wishes and peace, happiness and health for the coming year.

Richard & The Paremus Team.





Packager – because OSGi > Java

If you are following the progress of Java 8, then you will be aware that Jigsaw may – yet again – fail to make the release train. There again, it might: ‘they’ seem just a little uncertain.

In stark contrast, since the appearance of JSR277 in 2005 (yes, 2005!), the OSGi Alliance position has remained consistent and coherent. This position was restated yet again this week here. For those interested, several excellent supporting background articles may be found here, here and, with Neil Bartlett’s usual humorous perspective, here.

Paremus’ view is quite simple. For the record…

Java modularity should be based upon OSGi. Failure to do so will fracture the Java community and market, and so damage the future of the Java language.

However, this isn’t the main focus of this post: Just an interesting aside. 😉



And now for something completely different…

Paremus have been dynamically deploying OSGi based applications into ‘Cloud‘ environments for 7 years! Why use static virtual machine images, when your applications can be automatically assembled from re-usable components and automatically configured with respect to their runtime topology and the specifics of the runtime environment?

Surprisingly, this message has taken a while to seed; but seed it has: Paremus is nothing if not persistent. So 7 years on, and the Paremus Service Fabric represents the state-of-the-art ‘Cloud’ platform for those organisations that require something a little bit special: an adaptive, self-managing solution for their next generation of low maintenance modular OSGi based Java or Scala composite applications.


However, challenges remain:

  • Some popular Java software projects remain monolithic and are unlikely to embrace OSGi anytime soon: e.g. Hadoop, ZooKeeper etc.
  • Many Service Fabric customers have a large portfolio of monolithic applications.
  • Some applications are not even Java / JVM based.

So if you want Service Fabric capabilities, but face one or more of the above challenges, what are we to do?

OSGi > Java

OSGi provides a set of Modularity, Life-cycle and Service capabilities that transcend any particular language.

The journey starts with the realisation that OSGi specifications encapsulate a number of remarkably useful language neutral concepts:

  1. Metadata for describing software artefacts; including flexible declaration of the artefact’s Capabilities and Requirements [environmental resources, Services and module dependencies], and a powerful semantic versioning scheme for module dependencies.
  2. Leveraging this metadata, powerful and extremely flexible dynamic dependency resolution capabilities.
  3. A consistent approach to life-cycle: installing, starting, configuring, stopping and uninstalling software artefacts.
  4. A flexible configuration service for configuring installed artefacts.
  5. A powerful Service model, including pluggable RPC implementations for remote services that enable the decoupling of the service interactions from the underlying implementation language. For example, the Paremus Remote Service Administration implementation provides an Avro distribution provider.

While usually implemented in the Java language, these capabilities need not be limited to Java artefacts!
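As an illustration of the first concept, a bundle manifest declares the artefact’s identity, version, and its Capabilities and Requirements. All names below are invented for this sketch; the header and version-range syntax follows the OSGi core specification:

```
Bundle-SymbolicName: com.example.pricing
Bundle-Version: 1.2.0
Export-Package: com.example.pricing.api;version="1.2.0"
Import-Package: org.example.quotes;version="[1.1,2)"
Require-Capability: osgi.service;filter:="(objectClass=org.example.quotes.QuoteSource)";effective:=active
```

The semantic version range `[1.1,2)` says: any provider from 1.1 up to, but excluding, 2.0 is acceptable, since only a major version bump signals a breaking change. Nothing in this metadata is Java-specific, which is the observation Packager builds upon.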

Introducing Packager

So, pulling these concepts together, Paremus created Packager: Packager allows traditional software artefacts to be installed, deployed and dynamically configured just like OSGi bundles.

Neil Ellis, our lead developer on the Packager project, will post a series of articles over the next few weeks that will:

  • Introduce Packager concepts with a series of examples.
  • Investigate scenarios in which Packager may be of interest in non OSGi environments.
  • Contrast Packager against some of the other software deployment approaches that are available today.

If you are pursuing a modular cloud platform strategy and see the value in a consistent approach to runtime deployment and management – from the smallest of bundles to the largest of applications; if you understand the pitfalls and complexities caused by reliance on static, bloated virtual machine images; and if you value open industry standards – then we think you will be interested in Packager.

So Stay Tuned … 😉

Reflections on GigaOM ‘Structure’ Event…

I’d like to start by thanking the GigaOM team once again for inviting me to participate in the panel session: ‘The Gap Between Applications and Infrastructure’. It is the first time I’ve attended a GigaOM ‘Structure’ event, and it proved a worthwhile experience.

There were many topics, but the re-occurring theme focused upon the on-going shift towards ‘Cloud Computing’:

  • Cloud ‘Business Models’: How ‘Agile’ organisations should off-load non-critical IT.
  • SaaS Use Cases: Again selling the concept of ‘Agile’ SaaS services for all those ‘Agile’ organisations.
  • Cloud API’s – which is more open? Which is closest to Amazon? The next great lock-in?
  • The role of the current dominant Cloud and Virtual Machine vendors and the rise of ‘Software defined Data Centres’.

All interesting topics. Yet I left the event further convinced that the IT industry continues to be too ‘fashion driven’. Too much evangelism, too little substance, a near-criminal overuse of the word ‘Agile’. Beneath the sound-bites, an absence of vision from the dominant Cloud and Virtual Machine vendors.

However there were some important gems.

The analogy between data and mass was touched upon in one panel session: this is a fundamental, real-world constraint. Quite simply, the more data you amass, the more energy / cost is required to manage it and move it. Frequently information has a time-to-live and only has value within a specific context. Hence aggregating vast amounts of data to drive centralised decision-making processes is not necessarily the greatest of ideas: worth considering before embarking on your company’s next ‘must have’ BigData project. For a good popular read which touches upon this area – try ‘Adapt‘.

So what does this imply for ‘BigData’ / ‘Cloud’ convergence?

  • If information can be extracted from data in flight, then do so! When possible use CEP rather than Hadoop!
  • It will frequently make sense to move processing to the data: not data to the processing.
  • Sometimes there is no alternative. A centralised approach which aggregates data from remote sources may be the only approach.
  • Analysis must be on a system-by-system basis. Data may flow from the Core to the Edge; or the data flow / interaction may oscillate between core and edge.
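The first point – extracting information from data in flight – can be illustrated with a trivially small sketch. This is a toy running aggregate, not a real CEP engine; the class and event names are invented:

```java
import java.util.stream.DoubleStream;

// Toy illustration of 'information extracted from data in flight': a
// running aggregate is updated per event with O(1) state, so the raw
// events never need to be stored and shipped to a central batch cluster.
public class InFlightAverage {
    private long count = 0;
    private double sum = 0.0;

    // Consume one event as it streams past; the event is not retained.
    public void accept(double value) {
        count++;
        sum += value;
    }

    public double average() {
        return count == 0 ? 0.0 : sum / count;
    }

    public static void main(String[] args) {
        InFlightAverage latency = new InFlightAverage();
        DoubleStream.of(12.0, 8.0, 10.0).forEach(latency::accept);
        System.out.println("running mean: " + latency.average()); // 10.0
    }
}
```

Contrast this with the batch alternative: aggregate every event centrally, then compute the same number hours later at vastly greater storage and transfer cost.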

It seems inevitable that data locality will be a primary forcing factor driving the shape of next-generation ‘Data Cloud‘ solutions.

A second panel session, ever so briefly, touched on the importance of system modularity. In an animated, amusing and good-natured argument between three Cloud Infrastructure vendors, attempts were made to score points via a modularity argument. One of the solutions consisted of multiple independent sub-projects rather than a monolithic whole. Unfortunately that is as far as the argument went; the fundamental importance of modularity at all structural levels was not discussed. Meanwhile the Java vendors who were branded ‘monolithic’ didn’t realise that the most powerful modularisation framework in existence – the OSGi framework – could have provided them, had they been using it, with a devastating response to the criticism.

‘times they are a-Changin’

Back-stage conversations were more interesting. There was an increasing awareness of an imminent inflection point in the industry. Environmental complexity is rapidly increasing as business systems evolve towards an increasingly intertwined ecosystem of services, resources and highly modular / maintainable components. It was increasingly understood that in this new world order, runtime dependency management would be an essential enabler.

The idea that evolvable, adaptive Cloud platforms will be highly modular and must have powerful runtime dependency management – enabling:

  • Applications to be dynamically assembled as required from fine-grained components.
  • In a manner influenced by the characteristics of the local host environment.
  • With middleware services provisioned as required.

– is not quite as alien as when Paremus first started work upon this vision in 2005.

Service Fabric 1.8 release

As we’re about to release Service Fabric 1.8, the next few blogs will focus on some of the new capabilities introduced with 1.8, and we’ll also revisit some of the underlying ideas which have always underpinned the Service Fabric.

Breaking the Karmic Cycle

Paremus News

As you may have guessed from the syndication of Neil Bartlett’s blog, I’m very pleased to announce that Neil has joined Paremus and will be working on a number of interesting OSGi & Service Fabric based projects. Some of this work will appear in our imminent Service Fabric 1.8 release; but more on that in due course. In addition to Service Fabric and Nimble related work, Neil will be spending time advancing the open source BNDTools project, and some interesting new capabilities are already in the pipeline for the next release.

On a different note:  Whether you are a battle-hardened ‘OSGi master’, an enthusiastic ‘OSGi initiate’, or just curious; you’ll be interested in the OSGi Community Event in Darmstadt in September. Paremus will be attending this year: I’ll be presenting on Cloud and OSGi and explaining the importance of modularity and why the current generation of virtual machine based Cloud solutions have got it wrong! Should be fun and controversial – so come and heckle 😉


Breaking the Cycle

Whilst writing this post I came to the realisation that I was re-iterating a message I posted in 2008: see Impaled on the Horns of an OPEX Dilemma.

Perhaps, if repeated frequently enough, the message concerning the fundamental importance of environmental modularity will seed, grow and be heard over the general cacophony relating to the latest technology and software fads.

With that hope – once more…

As we head towards the latest global economic downturn, we see the all-too-predictable response from many organisations whose core business functions are heavily dependent upon technology.

The response goes something like this:

  • We have to deal with spiralling OPEX costs; the dominant component of which is ongoing application maintenance.
  • We have to do something now!
  • If we reduce the number of applications, clearly we reduce our maintenance burden!
  • We can also reduce headcount by paring down local teams and off-shoring support.

These slash-and-burn programmes are subsequently implemented with varying degrees of success; BUT IN ALL CASES the remaining application portfolio is just as difficult to manage, just as difficult to bug-fix and just as difficult to functionally enhance as ever. Perhaps more so, as during the cost-cutting exercise, with each round of redundancies, critical knowledge concerning the environment flows away from the organisation.

Business Agility and Service Availability Suffer

And then we have the economic upturn!

The business demands new functionality – NOW! The IT department cannot adapt or change the existing business systems: they are too monolithic, too high risk, and core expertise has been lost. And so a new generation of applications is created to meet the new business requirements. Developer and Operations headcounts swell. The business systems are ever more complex, less flexible and OPEX is even higher; yet the business doesn’t care: it is making profit!

That is until the next down-turn.

Repeat this short-term cyclic behaviour for ~15 to 20 years and you’ll end up with a fundamentally broken organisation.

Maintainable Systems are Modular Systems

There is a better way!

  • Acceptance: Realise that agile and cost-effective environments take time to create, and will require some fundamental changes.
  • Enlightenment: Ignore industry fads (cloud, virtualisation, programming language of the day); maintainability is only achieved through ensuring modularity and agility at each layer of your environment.
  • If your business systems are Java based – then you are in luck. OSGi technology provides – if not the elusive silver bullet – the next best thing: an industry-backed standards framework for modularity from which one can begin to realise these objectives.
  • In addition to these industry standards – you are going to need some help – so call us!
  • You need to start sometime; so may as well be today!

It may be unpalatable, but had organisations implemented a long-term modularisation strategy in 2008, those same organisations would be well placed today, realising substantive and sustainable OPEX savings.

Wonder if I’ll be repeating the same message in 2015?

EclipseCon 2011

The recent EclipseCon 2011 was the 4th consecutive EclipseCon conference in Santa Clara that we at Paremus have attended; and from my perspective it was the most exciting yet. One need look no further than Peter Kriens’ Introduction to OSGi session to realise that the Eclipse community’s interest in OSGi continues its rapid growth.

Despite the thundering music (Thus Spake Zarathustra) from the CDO folks next door, which some assumed must be associated with my presentation, my talk on Cloud & OSGi was well attended and seemed to be well received; at least by the individuals that subsequently approached me to discuss this area in some depth.

The OSGi Alliance BoF was well attended with some interesting updates on the 4.3 release and ongoing work in the EEG group; the session concluded on an amusing note with some evil OSGi puzzles concocted by Peter Kriens and BJ Hargrave.

Whilst it was a shame that Neil Bartlett wasn’t able to attend this year, it was great to see Peter Kriens and David Savage rolling up their sleeves (metaphorically speaking) and walking interested parties through BNDtools and SIGIL tooling capabilities; explaining how these features will be combined in the very near future to create a powerful OSGi tooling solution.

For those of you that are interested, copies of the Paremus presentations (including the screen casts), will be posted here in the near future.


OSGi – The Business Drivers

Software modularization should not be considered in isolation, but rather seen within the context of several inter-related trends that have occurred over the last decade. Indeed, software modularity is merely the latest visible facet of a much larger and more fundamental technology shift, with an impact at least equivalent to the move from mainframes to client-server computing in the 1980s.

These related trends include:

  • Service Oriented Architecture (SOA) – Enabling previously rigidly coupled business systems with proprietary protocols to be expressed as “services” which can be accessed via common protocols. However the business systems themselves remained opaque and monolithic.
  • Cloud Computing – Decoupling applications from the underlying compute resources upon which they run, allowing more efficient resource utilization and scaling.
  • Software Modularization – Most recently, replacing the opaque monolithic applications and stove-piped middleware with dynamically assembled alternatives composed from re-usable software components.

As commented on in “The Rise of the Stackless Stack”, these trends collectively shift the industry away from rigidly coupled, static, opaque environments, towards adaptive, loosely coupled systems which are dynamically assembled from well-defined software components that run across a fluid set of compute resources.



Modularity and Assembly – It’s not a new idea!

The concept of assembling a product from a set of well-defined re-usable components is not new. Indeed, its roots can be traced back to at least 250 BC, with Emperor Qin Shi Huang and his commissioning of the Terracotta Army. Whatever the product, the driver for modularity and subsequent assembly is to increase and maintain quality, reduce cost and increase output. The modern archetype for modularity and assembly is the automotive industry, where extensive use of standardization, modularity and assembly results in affordable automobiles.

Likewise, the computer industry already extensively uses hardware modularization in the form of standardised CPUs, memory and disk subsystems; leading to affordable computer hardware. These concepts are also well understood by software engineers. Extracts from the literature on ‘programming in the large’ include:

“Small programs are typified by being physically small in terms of their source code size, are easy to specify, quick to code and typically perform one task or a few very closely related tasks very well”,

… programming in the large, coding managers place emphasis on partitioning work into modules with precisely-specified interactions…”



“… one goal of programming in the large involves setting up modules that will not need altering in the event of probable changes. This is achieved by designing modules so they have high cohesion and loose coupling.”

Coarse-grained modularization of business processes (via SOA) and their subsequent re-assembly (via BPEL) has been underway for some time; however, the individual applications have remained monolithic and opaque. Further progress was not possible until a strong, widely endorsed industry standard for enterprise software modularity was available. OSGi™ provides this modularization standard.

Since 1999 the OSGi Alliance has provided standards, reference implementations and guidance for modularization and assembly best practices. Recently, the OSGi Alliance has succeeded in:

  • Recruiting the majority of enterprise software vendors.
  • Encouraging those vendors to migrate their product portfolios to OSGi.
  • Producing the OSGi Enterprise standard which provides mechanisms to integrate OSGi based applications with legacy JEE & Spring.

Today OSGi is rapidly becoming a cornerstone for any successful business system re‑engineering effort.



The Dilemma

While software modularity and dynamic application assembly are inevitable industry trends, OSGi faces the usual adoption challenges: see The Innovator’s Dilemma for an exploration of this theme.

‘The Business’ will never ask for OSGi based systems; it’s an implementation detail. Yet in the fullness of time ‘The Business’ will complain when the internal IT organization is seen as too inefficient, too expensive and/or no longer agile enough to meet tactical, let alone strategic, business objectives.

Yet, from an IT management perspective, revisiting a complex but working enterprise software stack is an immediate cost and a quantifiable risk. Something to be avoided. The alternative – a slow death – seems almost preferable, or at least somebody else’s problem. As the environment degrades, make-shift tactical solutions are implemented to ease immediate pain points caused by strategic issues. This ongoing tactical, ‘reactive’ behaviour results in an explosion in operational complexity and associated OPEX. The organisation eventually becomes paralysed by its own management’s inability to ‘grasp the nettle’ and identify and address the fundamental issues.

It is suggested that this organizational behavior parallels the well-known ‘Tragedy of the Commons’.

Developer attitudes and/or their working environment may also act as an impediment:

  • Just Get It Done: At each point in time, the developer is pursuing, or is forced to pursue, the shortest / lowest-effort route to the immediate deliverable at the expense of longer-term maintainability.
  • The Artisan: The software developer “knows better” and pursues his own approach in defiance of industry standards and accepted best practices.
  • The Luddite: The current opaque, undocumented spaghetti of code within the organisation ensures employment. Increased efficiency, self-describing dependencies and code re-use sound like a recipe for smaller more flexible development teams.

Each can be a powerful brake on change.

Yet history tells us that those who effectively embrace and leverage change succeed. OSGi migration is neither free nor, in itself, a quick ‘silver bullet’. However, OSGi is of fundamental strategic importance to any organisation whose core business involves the processing of information.



The Benefits

Quite simply, modularity localizes the impact of change, which directly leads to increased maintainability.

  • As long as the module boundaries don’t change, one can change the functionality of the module freely, without concern for breaking the wider system; i.e. the impact of any local change is prevented from leaking into the wider system.
  • Modules that perform a few, or a single function, are much easier to test exhaustively than an entire monolithic system.
  • Smaller pieces of the system can be independently versioned.
  • Specific knowledge is only required for the particular module being worked upon, along with its relationship to other modules in the system; other modules can be treated as “black boxes” that perform specific functions, without worrying about how they perform them.
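The first point is worth making concrete. A minimal, self-contained Java sketch (all names here are illustrative, not taken from any real system): consumers compile only against the exported interface, so the implementation behind it can change freely without breaking the wider system.

```java
// Hypothetical module boundary. In OSGi terms, the Greeter interface
// would live in an exported package, while FormalGreeter would stay
// in a private implementation package hidden from consumers.
public class ModuleBoundaryDemo {

    // The exported API: the only thing consumers may depend upon.
    public interface Greeter {
        String greet(String name);
    }

    // A private implementation; replaceable without breaking consumers.
    static class FormalGreeter implements Greeter {
        public String greet(String name) {
            return "Good day, " + name;
        }
    }

    // Stand-in for an OSGi service lookup.
    public static Greeter lookup() {
        return new FormalGreeter();
    }

    public static void main(String[] args) {
        System.out.println(lookup().greet("OSGi"));
    }
}
```

Swapping `FormalGreeter` for any other `Greeter` implementation leaves every caller untouched; that containment is the whole point of the module boundary.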

The large scale structure of the migrated application, which in all likelihood was previously unknown, is now completely defined by the dependencies between the set of versioned modules. Hence:

  • It is understood which modules are and are not required!
  • By replacing or re-wiring modules, the behavior of a composite system may be rapidly changed.
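The “which modules are required” question reduces to a transitive closure over the dependency graph. A sketch of that walk, over a hypothetical module-to-dependency map rather than real bundle metadata:

```java
import java.util.*;

public class TransitiveDeps {

    // Walk the dependency graph from a root module, collecting every
    // module that is (transitively) required.
    static Set<String> requiredBy(String root, Map<String, List<String>> deps) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> todo = new ArrayDeque<>();
        todo.push(root);
        while (!todo.isEmpty()) {
            String module = todo.pop();
            if (seen.add(module)) {
                todo.addAll(deps.getOrDefault(module, Collections.emptyList()));
            }
        }
        return seen;
    }

    // Illustrative data only; a real graph comes from bundle metadata.
    static Set<String> demo() {
        Map<String, List<String>> deps = new HashMap<>();
        deps.put("app", Arrays.asList("pricing", "reports"));
        deps.put("pricing", Arrays.asList("util"));
        deps.put("reports", Arrays.asList("util"));
        deps.put("legacy-feed", Arrays.asList("util")); // unreachable from "app"
        return requiredBy("app", deps);
    }

    public static void main(String[] args) {
        // "legacy-feed" never appears: it is not required by "app".
        System.out.println(demo());
    }
}
```

Modules outside the closure, like `legacy-feed` above, are exactly the “not required” artefacts that can be retired.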

From an ongoing maintenance perspective it is now possible to re-factor individual modules, or even the overall system, to systematically drive out accidental complexity and so contain, or even reverse, design rot.

While these arguments are understood, many organisations require demonstrable real-world examples. The challenge is that first-mover organisations have little incentive to act as references, as doing so would surrender competitive advantage.

For several years, Paremus have been working with a number of organizations who are at various stages of OSGi migration. While Paremus cannot discuss specific details, or disclose the organizations by name, we are able to reference some common themes that demonstrate real world benefits for OSGi in general, and the Paremus Service Fabric in particular.



Resource Utilisation

When working with traditional applications, a lack of information concerning required libraries results in developers having to load every possible library into their IDE. From experience, Paremus have seen this drive memory requirements as high as 40GB per developer desktop. However, once dependencies are understood, and mechanisms are in place so that only the required components are loaded, Paremus have also seen the number of artefacts reduced by an order of magnitude with corresponding machine memory savings.

For an organization with several hundred developers the saving is considerable. The saving in reduced memory footprint in Production is of course correspondingly larger.



Developer Efficiency

Developing against a monolithic application requires the developer to work and test against the complete code-base. Yet for large applications it may not be possible to test changes in the local IDE, as compile times may be hours. As a result, developers are forced to rely upon unit and integration tests that run during the nightly build cycle. The result is that the whole bug detection and rectification cycle can take days, with an increased likelihood that some issues are not found and leak into production.

In contrast, rapid testing of OSGi modules is easily achieved in the developer IDE, and Paremus have seen this test / fix cycle reduced from days to minutes.

A modular system also lends itself well to many hands being involved in its development and maintenance. It is not necessary to understand the whole system inside-out; each individual can independently work on small, well-defined and decoupled modules. This directly translates to increased project delivery success rates, as smaller, well-contained projects have a higher success rate than larger, more poorly constrained projects.



Definitive Dependencies

Most modern build systems allow build-time dependencies to be specified and obtained from various repositories. However, different types of build-system repository specify dependencies in different formats; e.g. pom.xml, ivy.xml.

OSGi provides runtime dependency metadata in its Require-Bundle and Import-Package bundle manifest headers. This provides definitive, industry-standard dependency information that can be consumed regardless of the build system you use. Hence OSGi decouples your runtime from the source build systems, thereby avoiding metadata lock-in and allowing different types of build repository to co-exist.
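Because these headers live in the standard JAR manifest, they can be read with nothing more than `java.util.jar.Manifest`. A sketch, using an invented bundle name and package versions (the header names are the standard OSGi ones):

```java
import java.io.ByteArrayInputStream;
import java.util.jar.Manifest;

public class ManifestHeaders {

    // Parse a hypothetical bundle manifest and pull out the OSGi
    // Import-Package header. The bundle and version values below
    // are illustrative only.
    public static String demoImports() {
        String mf = "Manifest-Version: 1.0\r\n"
                  + "Bundle-SymbolicName: com.example.pricing\r\n"
                  + "Bundle-Version: 1.2.0\r\n"
                  + "Import-Package: org.osgi.framework;version=\"[1.5,2)\"\r\n"
                  + "\r\n";
        try {
            Manifest m = new Manifest(new ByteArrayInputStream(mf.getBytes("UTF-8")));
            return m.getMainAttributes().getValue("Import-Package");
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("Imports: " + demoImports());
    }
}
```

Any tool, in any build ecosystem, can consume this metadata the same way; that is what makes the headers a lingua franca between build systems.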

In addition, OSGi supports package-level dependencies. These are finer-grained than the artefact-level dependencies supported by most build systems, and can be programmatically determined directly from source code using tools such as Apache Felix Sigil. This avoids the errors that occur when dependency data is maintained by hand.

Finally, OSGi supports its own repository API: OBR. A current priority for the OSGi Alliance EEG working group, OBR allows searching for package-level dependencies as well as the artefact-level dependencies used by other repositories.



Production Stability, Availability & Agility

Unless you happen to own applications that never change, stability, availability and agility are closely related concerns. The more agile the business service, the easier it is to change from one well-defined state to the next. This may be to introduce new business functionality, apply a patch, or roll back to a previously good version.

The internal structure of monolithic opaque applications is poorly understood; hence upgrades are complex and high-risk. This, coupled with the long developer diagnostic / fix cycle, can result in repeated production outages and instability that span several working days. Operations typically respond by maintaining a number of isolated horizontal silos, attempting to ensure service availability by releasing new software one silo at a time. This issue isn’t just common, it’s almost universal.

In contrast, with an OSGi-based runtime like the Paremus Service Fabric, each application is self-describing, meaning that the application structure is fully described in terms of dependencies between versioned modules. A system may be deployed in seconds. A running system may be upgraded on-the-fly (i.e. within seconds), and may be returned to a previous well-known state just as rapidly.



Operational Risk & Governance

The implication for operational risk should be immediately apparent:

  • From regulatory and business continuity perspectives, the structure of an application is precisely known at each point-in-time; allowing all versions of the application to be rapidly re-constituted.
  • Key structural information is no longer locked within key members of staff. In principle the organisation can employ any OSGi-literate developer / systems architect, who can rapidly navigate the structure of the organisation’s software systems.

From a governance perspective, it is now a simple task to rapidly answer the following types of question:

  • Which software license types are used within which production applications?
  • Which production applications use third party modules with an identified security vulnerability?

This information is readily available courtesy of the metadata embedded in each OSGi module and the dynamic deployment & assembly mechanisms provided by the OSGi runtime environment.
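The licence question, for instance, becomes a straightforward query over the per-bundle metadata. A sketch with invented bundle names and licences; in practice each value would come from a bundle’s `Bundle-License` manifest header:

```java
import java.util.*;

public class LicenseAudit {

    // Illustrative data only: in practice each value would be read
    // from the Bundle-License manifest header of a deployed bundle.
    static Map<String, String> sampleBundleLicenses() {
        Map<String, String> licenses = new LinkedHashMap<>();
        licenses.put("com.example.pricing", "Apache-2.0");
        licenses.put("com.example.reports", "GPL-2.0");
        licenses.put("com.example.gateway", "Apache-2.0");
        return licenses;
    }

    // Governance query: which deployed modules carry a GPL licence?
    public static List<String> gplModules() {
        List<String> flagged = new ArrayList<>();
        for (Map.Entry<String, String> e : sampleBundleLicenses().entrySet()) {
            if (e.getValue().startsWith("GPL")) {
                flagged.add(e.getKey());
            }
        }
        return flagged;
    }

    public static void main(String[] args) {
        System.out.println("GPL-licensed modules: " + gplModules());
    }
}
```

The same shape of query answers the security-vulnerability question: substitute a vulnerable module list for the licence predicate.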




Anne Thomas Manes (Gartner) estimates that ongoing maintenance accounts for 92% of the total lifetime cost (TCO) of each application. Whilst hardware accounts for <10% of TCO, software maintenance accounts for ~70%; the remainder is the initial cost of developing the application; see slides 9 & 10 – SOA Symposium: Berlin, October 2010.

Given this, the current fashion for virtual machine based Cloud computing seems somewhat perplexing. The deployment of traditional opaque software stacks as virtual machine images does NOTHING to address the issue of application maintainability.

Closer to home, a group of Financial Services engineers recently attempted to quantify the potential return realised by migrating their production environment to the Paremus Service Fabric. They concluded that OPEX savings of 60% were possible. While Paremus do not know the details of this analysis, it is likely that the following three contributing factors were considered.

  • The Paremus Service Fabric is a cost-effective replacement for Application Servers, Compute Grids, CMDBs and provisioning tooling, and requires fewer operational resources to manage.
  • The Paremus Service Fabric, being a Private Cloud, achieves the resource utilisation and efficiency savings alluded to by many virtualisation vendors, but without the management overhead and operational risk associated with ‘virtual machine sprawl’.
  • However, given the Gartner TCO analysis, the bulk of the identified OPEX saving most likely results from the ongoing maintainability of OSGi based business applications running upon the Service Fabric runtime.

A large Financial Services organisation may have 1000+ applications, each with an average annual running cost of ~$1,000,000. This equates to a potential annual saving of $600,000,000! A significant medium-term bottom-line saving that surely warrants investing in a multi-year OSGi-based application transformation programme?
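The headline figure is simple arithmetic over the numbers stated above:

```java
public class OpexSaving {

    // Back-of-envelope figures taken directly from the text above.
    public static long annualSaving() {
        long applications = 1000;            // 1000+ applications
        long annualCostPerApp = 1_000_000L;  // ~$1,000,000 each per year
        double savingRate = 0.60;            // 60% OPEX saving
        return (long) (applications * annualCostPerApp * savingRate);
    }

    public static void main(String[] args) {
        System.out.println("$" + annualSaving()); // $600000000
    }
}
```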



Migration Strategy

Hopefully a compelling set of arguments for adopting OSGi has been presented; but how do you actually migrate an organisation with a decade of legacy applications to a new OSGi-based world?

Organisations considering this invariably start with the following questions:

  • How do we determine and then untangle dependencies in our current environment?
  • How do we move to an OSGi-centric build/test/release cycle with minimum impact on the majority of developers?
  • What level of modularity / granularity should we pursue?

In response, Paremus advise an iterative approach, with the precise details dependent upon each organisation’s starting point and business objectives.

Stage I


1. Assemble a small high-caliber team of engineers with appropriate OSGi / modularity skills. Ensure that this team have senior management backing and representation.

2. Set up tools to determine and monitor dependencies within the organisation’s existing code base. Provide automated reporting; remove superfluous dependencies and fix unsatisfied ones.

At this early stage, the organization may have already achieved as much as an order of magnitude simplification in the code base dependencies. This in itself will improve developer productivity by reducing compile times and help increase the success of production releases.

Stage II – Tooling & Metadata


3. Review the organisation’s standards for IDE tooling. Does the current tooling support OSGi metadata and enable simple management of exposed dependencies? Review the organisation’s current repository standards; will these support the OSGi Alliance OBR standard? If required, select new tooling.

4. Set up OSGi metadata for all of the organisation’s projects; this is to be maintained by the project developers. The metadata need not yet be used at runtime, but it can be used during the build process to monitor the progress of the modularisation effort.


It should now be possible to run OSGi and standard Java variants of an application side-by-side. This allows migration to progress without creating large parallel branches in source control, which are difficult to subsequently re-merge. CAPEX savings may be realised at this point, as only the required artefacts will be loaded into the developer’s IDE, decreasing the resources required and further increasing testability.

Stage III – Runtime

5. Select candidate applications for migration based upon agility and re-usability considerations:

  • A set of applications that share a high degree of functionality / code and require frequent functional updates are excellent candidates for early migration.
  • A standalone application, which seldom, if ever changes, and shares little or no functionality with other applications, is a very low priority – and may never be migrated.

6. Create working runtime bundles using existing libraries as the level of granularity.

7. Create integration tests for use during and after migration to assert that parallel development streams do not break any modularisation efforts as the code is migrated.

8. Test OSGi version of candidate application in an OSGi runtime environment; e.g. Paremus Service Fabric.

9. Create integration tests to ensure fixes for hidden gotchas are caught during development rather than in production; i.e. Development and UAT fabrics.

The organization should now be in a good position to deploy OSGi-based applications to production and accelerate OSGi migration. The new development, build, test, release lifecycle will enable considerable improvements in developer productivity via further reductions in the time taken to run unit and integration tests.

Stage IV – Iterate and Reward

10. For each application take an iterative approach to modularisation; break down the deliverable into smaller modules (improving modular boundaries), test, deploy, release to production. Then re-visit modularity, break down further if appropriate, test, release.

11. For each migrated application, using the dependency information now available, publish ‘composition reports’: listing the degree of re-use of in-house modules and certified third-party modules, and alerting on uses of non-certified modules.

12. The modularisation process will now be running across many candidate applications in parallel; during which the core OSGi team will continue to advise each participating application group on appropriate levels of modularity and opportunities for re-use.

13. Reward application teams; not just for meeting initial delivery objectives, but also for:

  • Using agreed third-party OSGi libraries, where appropriate.
  • Achieving a high degree of re-use in re-factored applications.
  • Delivering and maintaining in-house modules which are re-used in many in-house applications.

The ‘composition reports’ and associated incentives provide the development teams with powerful feedback mechanisms.

In this manner, the organisation typically starts with an isolated application, building the required skills & processes. Driven by initial successes and cost savings, adoption naturally flows from this initial seed point through the organisation.



To Conclude

Whether OSGi is destined to be the next IT ‘fashion bubble’ (once the current hysteria over ‘Cloud’ has waned), or will grow organically via initial adoption by the most sophisticated IT organisations, is unclear.

However, the software industry is a manufacturing industry; no more and no less than manufacturers of disk drives, steel or automobiles. While the raw materials are low cost, the ongoing effort required to craft and maintain flexible, high-quality software is not.

Necessity will drive the software industry, and those organisations with large in-house development teams, towards modularization, dynamic application assembly and so OSGi.

[1] The author has had direct experience of this behaviour in an enterprise environment.

A backup solution that was initially designed for a handful of servers was nearing collapse. Each time a new backup server was added to address over-runs, the reduction in backup time was less than the 1/n expected, as loads could not be exactly balanced across the backup estate. The situation degraded to the point where the backups extended to a full 24 hours. The correct solution was to refresh the enterprise network and consolidate backup servers into centralised high-speed silos. Yet, despite the operational risk, some management refused to sign off on what was perceived to be a high-risk project. Only when no ‘last’ silver bullet could be found was sign-off achieved to progress the correct strategic solution. The situation had by then become so serious that this solution had to be implemented as rapidly as possible.

Dependencies – from Kernel to Cloud

The recent OSGi community event in London proved interesting from a couple of perspectives.


Paremus engineers are no strangers to the idea of OSGi Cloud convergence. Having realised the potential of dynamic software assembly and resource abstraction in 2004, Paremus have had the opportunity to engineer a robust & architecturally coherent solution that delivers this vision.


Arthur C. Clarke’s third law: “Any sufficiently advanced technology is indistinguishable from magic.”

These concepts were presented by David Savage in his OSGi community talk ‘OSGi & Private Clouds’, which included a live demonstration of the Service Fabric assembling a distributed trading system across a group of dynamically discovered compute resources.

Behind the scenes, the demonstration used the Paremus OBR resolver (Nimble) and the new Paremus RSA stack (configured to use SLP discovery and an asynchronous RMI data provider). Perhaps not magic, but very cool nonetheless 🙂

So I’d argue that Paremus have the OSGi Cloud piece of the puzzle; but what about “the Kernel”?

Well, one need only watch Jim Colson’s keynote, with its heavy referencing of the Apache Harmony JVM project (see Harmony), to see the potential for OSGi in modularising the JVM.

Whilst the demonstration, consisting of the manual customisation of a JVM, was interesting, to my mind the full potential is only realised when such customisation is dynamic and in response to business requirements. To achieve this one must manage not only OSGi transitive dependencies, but also service dependencies and, most importantly, environmental runtime dependencies. Luckily, Paremus have these capabilities with Nimble; so perhaps in due course we’ll take a closer look at Harmony.

But then, why stop at Java?

Nimble is not just an OSGi resolver but a general dependency resolver, with extensible dependency types and deployable types. Debian is a well-defined packaging standard which defines dependencies, and installation from a Debian package repository is already standard practice. The Canonical ‘Ensemble’ project is attempting exactly this: dynamic download of interdependent Debian artefacts and subsequent service provisioning.

So, are we witnessing “The Rise of the Stackless Stack” and the end of the industry’s infatuation with hordes of virtual machine images with static, pre-baked software and configurations? I think so.

That said, contrast if you will this “Stackless Stack” vision with Oracle’s big news:

Oracle Exalogic Elastic Cloud is the world’s first and only integrated middleware machine—a combined hardware and software offering designed to revolutionize datacenter consolidation.”

Expensive big iron, crammed to the brim with unwanted middleware. The contrast really couldn’t be greater!

Boiling the Frog

One frequently hears complaints about OSGi:

  • It’s too difficult!
  • What is the immediate business benefit?

The first is simple to address. No excuses – use Nimble.

The second complaint has, in my opinion, more to do with human nature than technology. All too frequently instant gratification is selected over long-term health. In a similar way, for many organisations, at each point in time the operational pain isn’t sufficient to warrant a fundamental change in behaviour: it just slowly keeps getting worse.

Hence the “Boiling the Frog” analogy.

The venture capital community understand this only too well.

Sell them the Aspirin!

And at all costs ignore the existing Gordian knot inadvertently created from 20 years of IT quick fixes.

To construct Pieranski’s knot, you fold a circular loop of rope and tie two multiple overhand knots in it. You then pass the end loops over the entangled domains. Then you shrink the rope until it is tight. With this structure, there is not enough rope to allow the manipulations necessary to unravel it.

Hence it was a sheer delight to listen to Eric Newcomer’s (chief architect at Credit Suisse) presentation on his bank’s considered approach to SOA and software modularisation. To summarise: Architecture is important; OSGi is important! No quick fixes, but an ongoing, coordinated strategy that is intended to serve the business for the next decade.

I’m sure Credit Suisse will succeed, and I know Credit Suisse are not the only organisation starting down this path.

OSGi is a fantastic enabler, but it’s not magic. It can be a bitter medicine; but with a coherent OSGi / Cloud-based strategy and disciplined implementation, these technologies will transform your operational cost base and your business.

All in all it’s been a busy summer

Despite the challenges posed by the global economy, 2010 is proving to be an interesting year, with a number of progressive organisations preparing for some form of OSGi migration. What is common across these organisations? From Paremus’ experience, it’s not the markets they operate in, but rather the mind-sets of their in-house developers and systems architects.

This bodes well for OSGi adoption and the talented engineers that are driving this. Want to be worth your weight in platinum? Then understand the potential cost savings on offer to your organisation via a convergent private Cloud, OSGi and NoSQL strategy; effectively communicate this to your business; then deliver it!

Cloud is so much more than deploying virtual machine images.

Given these commercial indicators, it’s perhaps no surprise that ‘cloud’ rhetoric is also evolving: away from the mindless deployment of static virtual machine images, towards the dynamic assembly of distributed services. Indeed, this message, one which Paremus have been attempting to convey since our earliest Service Fabric endeavours, received recent backing from none other than VMware’s CEO Paul Maritz and CTO and Senior Vice President of Research and Development Steve Herrod. In a similar vein, Canonical’s Ensemble project recently announced that it foregoes virtual image distribution; Ensemble instead manages the dependencies, deployment and provisioning of applications on Ubuntu clouds.

Hence, in hindsight, the OSGi Cloud workshop hosted at EclipseCon earlier this year, and the follow-on Cloud RFP-133 activities, seem well timed. Whilst this is still a ‘work in progress’, the end result will hopefully be of great relevance to the many organisations planning their first generation of private Cloud.

Back home, it’s been a very busy summer for Paremus!

We recently released version 1.6 of the Service Fabric; an important internal milestone, as several Nimble capabilities are now fully utilised. One new feature, dynamic update and roll-back of running Systems, is nicely demonstrated in the latest fractal example.

Demonstration of dynamic System update and roll-back capabilities introduced in the Service Fabric 1.6 release

Meanwhile, if the OSGi Alliance Enterprise 4.2 specification for Remote Service Administration (RSA) caught your attention/imagination, you may be interested in the new Paremus RSA implementation, which is nearing completion. Our RSA stack will support fully pluggable discovery, topology manager and data provider components, and Paremus will be supporting some interesting options in each of these areas. Alongside this, the team have also been hard at work on release 1.0 of the SIGIL Eclipse IDE plug-in and on a suite of enhanced Nimble capabilities.

Further details on each of these will be posted over the next few weeks, so stay tuned.

Finally, those of you who are able to attend the OSGi Community Event in London next week may be interested in David Savage’s ‘OSGi & Private Clouds’ presentation. A number of the Paremus team will be at the presentation, so feel free to stop by and say hello!