Introducing the Paremus Service Fabric

This post is the first in a series of videos that will introduce and explore the capabilities of the Paremus Service Fabric: a distributed, highly modular, self-managing, OSGi-based platform.

This first episode demonstrates the creation of a Service Fabric in the Digital Ocean public Cloud environment. Having created the Fabric, we then review its management capabilities and deploy a simple example of a modular, microservice-based application.

This series will go on to explore various unique aspects of the Paremus Service Fabric, including the Fabric’s orchestration, management, adaptation and recovery behaviours. The Fabric’s support for highly modular OSGi-based applications, as well as for more traditional software artefacts deployed as images to Containers, will also be explored.

This and subsequent episodes are also available on the Paremus website.

Reality is Wondrously Complex

From the atoms in my coffee (Mmmm Union Roast – actually now my Harveys Elizabethan Ale 😉 – typos courtesy of the latter) to the interactions of individuals that collectively define the earth’s eco-system and, on a smaller scale, human societies: reality is wondrously complex. To cope with this we create abstractions. The abstractions we create, and an understanding of the dependencies between them, allow us – to some degree – to make sense of reality. In some quite old posts (2008 no less!) I investigated the relationship between abstraction and complexity; see Complexity part I & Complexity part II. Being a physicist by training I tend to regress to this world view whenever I’m allowed ;-). However, the arguments are generic and relevant to software and IT operational ‘complexity’.

Abstraction is not the same as virtualisation. Abstraction allows us to describe the essential characteristics of entities that we care about (coffee – ‘smooth’, ale – ‘malty’); without being troubled by their internal structures. Abstractions ‘simplify’. Abstractions encapsulate complexity. Virtualisation on the other hand attempts to create an alternative reality; one that is ideally as close as possible to the physical reality which it replaces and which also underpins it. As the purpose is to replicate reality, virtualisation does not encapsulate complexity.

As many now realise, having learnt the hard way, virtualisation does not simplify.

Yet while we interpret the natural world through a self-consistent hierarchy of highly modular structural abstractions, we have been slow to adopt the structural abstractions required to address environmental complexity and the associated challenge of maintaining these environments.

Necessity is the Mother of Invention

The economic realities we face in 2013 will, I suggest, drive some long overdue changes. The fixation with virtualisation, and/or the latest and greatest ‘Cloud’ stack, will wane, and organisations will regain some perspective. The ability to virtualise resources will remain a useful tool in the arsenal of enterprise technologies; but don’t expect too much of it. Virtualisation has failed to deliver the simplicity, operational savings and business agility promised by vendor marketing messages. In all honesty, virtualisation never could. The post-virtualised world is more complex and risk prone than its physical predecessor; this is the trade-off made for increasing resource utilisation while shying away from addressing the monolithic applications which are the root cause of most of our ills.

This is why OSGi is so important. An open industry specification, OSGi directly addresses the fundamental issue of structural modularity; encapsulating complexity and providing a powerful dependency management mechanism. Nor is OSGi limited to Java: activities within the OSGi Alliance have started to address C/C++ and other languages.

The next generation of adaptive business platforms, whether classified as private or public cloud environments, will need to be modular, will need to understand modular applications, will need to manage all forms of runtime dependency, and so will need to be OSGi based.

Through our contributions to the OSGi Alliance, our sponsorship of the bndtools project and the OSGi Community and UK Forum, and above all the ongoing development of the Paremus Service Fabric – the industry’s first OSGi-based Cloud runtime – Paremus remains committed to delivering this vision, from the developer IDE to the Cloud runtime. We hope that more of you join us in 2013.

In the meantime I’d like to wish you seasonal best wishes and peace, happiness and health for the coming year.

Richard & The Paremus Team.





Cloud: The Wasted Decade?

‘Cloud Computing’ remains the darling of the technology industry. ‘Cloud’, we are told, will reduce operational cost while increasing business agility. ‘Cloud’, we are promised, will re-shape the way we think about IT services.


The problem is, I don’t understand how these transformational effects are achieved. Current 1st-generation ‘Compute Clouds’ result from the synthesis of two underlying technology enablers:
  1. The adoption of coarse grained Service Orientation (WS-* or REST) allowed the decoupling of interconnected coarse grained business services. This is good.
  2. The use of the virtual machine image as the deployment artifact allowed applications to be deployed ‘unchanged’ into alien runtime environments.
And therein lies the problem.


How can applications be more agile when their internal composition remains unchanged? When they remain monolithic entities with tightly coupled constituent components? Given this, how are spiralling application maintenance costs countered? Exactly how is environmental complexity addressed? Are not 1st-generation public Cloud Compute offerings simply approaches to resource outsourcing? From a resource perspective more flexible than previous approaches, but potentially much more restrictive and ultimately dangerous, as business logic is rigidly locked to sets of third-party Cloud middleware APIs? Meanwhile, are 1st-generation private ‘Compute Clouds’ not just the latest marketing tag-line for traditional enterprise software deployment products?

After a decade of ‘industry innovation’, I fear Cloud Computing and resource Virtualization have collectively made the world more complex, our business systems more brittle and ultimately more costly to support.

In 2004 Paremus started a project to address what we considered to be the fundamental problem: how to build adaptive and robust application and runtime environments. Despite the ongoing Cambrian explosion of programming languages, Paremus did not believe the answer lay with the adoption of a new or old programming language, e.g. Scala or Erlang! We also felt that the fashion to ‘Virtualize’ was probably – at best – a distraction; a position which places Paremus at odds with the rest of the industry.


Having a predominance of Physicists, Paremus turned to an area of scientific research known as ‘Complex Adaptive Systems’, hoping this would provide the architectural guidance we needed. It did, and we distilled the following principles:
  • Change: Whether you like it or not, it’s going to happen. Change must not be viewed as an inconvenient edge case or ‘Black Swan’ event.
  • Necessary Complexity: Systems that do interesting things are by their nature Complex.
  • Loose Coupling: While Complexity isn’t the issue, rigid structural coupling within Complex systems is fatal. Constituent parts must be loosely coupled.
  • Stigmergy: Don’t manage the details; rather, set the overall objective and enable the system’s constituent components to respond appropriately: i.e. think globally, act locally.
  • Structural Modularity: In complex systems structural modularity always exists in a natural hierarchy.
  • Self Describing: Modular systems comprised of self-describing units (each unit describes its requirements and capabilities) may be dynamically assembled.
  • Diversity: Structural modularity enables diversity, which enables agility and evolvability.
  • Accidental Complexity: Structural modularity enables accidental complexity to be reduced over time.
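As an illustrative sketch of the ‘Self Describing’ principle (plain Java with hypothetical names – not a real OSGi resolver), units declare requirements and capabilities, and a system can be assembled only when every requirement is matched by some capability:

```java
import java.util.*;

// Hypothetical sketch: each self-describing unit declares what it
// provides (capabilities) and what it needs (requirements).
record Module(String name, Set<String> capabilities, Set<String> requirements) {}

class Assembler {
    // A system can be assembled only if every requirement of every
    // selected module is satisfied by some module's capability.
    static boolean canAssemble(List<Module> modules) {
        Set<String> provided = new HashSet<>();
        for (Module m : modules) provided.addAll(m.capabilities());
        return modules.stream()
                .allMatch(m -> provided.containsAll(m.requirements()));
    }

    public static void main(String[] args) {
        Module api  = new Module("api",  Set.of("com.example.api"), Set.of());
        Module impl = new Module("impl", Set.of("com.example.impl"), Set.of("com.example.api"));
        System.out.println(canAssemble(List.of(api, impl))); // true: requirement satisfied
        System.out.println(canAssemble(List.of(impl)));      // false: missing com.example.api
    }
}
```

A real resolver also handles versions, transitive closure over a repository and service/resource dependencies; the point here is only that self-description makes assembly a matching problem.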

Creating the illusion of a static environment – via ‘virtualization’ – for rigid, inflexible monolithic applications is a mistake. An easily consumed pain killer that only masks the fundamental illness.

The body of research was both compelling and conclusive.

Modularity is the key-stone. Modularity enables systems to be dynamically assembled and, if required, re-assembled from self-describing components.

Of all the languages available, only one (Java) had a sufficiently sophisticated modularity framework (OSGi) to meet our requirements. A set of open industry specifications created by the OSGi Alliance, OSGi provided many of the characteristics we required:
  • Modules (known as OSGi Bundles) are self-describing entities whose capabilities and requirements are well defined.
  • A powerful resolver capability allows OSGi based applications to be dynamically assembled during development or at runtime. This not only includes bundle dependencies, but also life-cycle and, looking ahead, service and environmental resource dependencies.
  • OSGi provides a well defined and industry standard approach to life-cycle and dynamic configuration.
  • Finally, OSGi provides a powerful micro-Services layer, completing a natural structural hierarchy (Classes ➔ Packages ➔ Bundles ➔ microServices ➔ traditional SOA).
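By way of illustration, this self-describing metadata takes the form of OSGi manifest headers; the bundle and package names below are hypothetical:

```
Bundle-SymbolicName: com.example.pricing
Bundle-Version: 1.2.0
Export-Package: com.example.pricing.api;version="1.2.0"
Import-Package: com.example.quotes;version="[1.0,2.0)"
```

The Import-Package range `[1.0,2.0)` expresses semantic versioning: any compatible 1.x provider will satisfy the dependency, while a breaking 2.0 release will not.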
For these reasons Paremus adopted OSGi as a cornerstone of our product strategy in 2005. The result of these ongoing efforts is the Service Fabric: the industry’s first distributed OSGi runtime, which enables:
  • Business applications to be dynamically assembled from re-usable components.
  • The dynamic assembly and configuration of these applications with respect to the runtime environment within which they find themselves.
  • Middleware services to be dynamically installed as required. No Cloud / PaaS API lock-in.
  • Dynamic reconfiguration of all runtime elements (business logic, middleware and the platform itself) in response to unforeseen cascading resource failure.


Why now?
Whether a Financial Market crash or a paradigm shift in the IT industry, it is relatively easy to forecast that a change is inevitable, but notoriously difficult to predict precisely when it will occur. It is inevitable that next generation cloud environments – public or private – will support dynamically assembled, highly modular applications, constructed from re-usable industry standard components: meaning OSGi. It is also inevitable that these environments will themselves be OSGi from the ground up. OSGi remains the only open industry standard, and so will be highly influential for those organisations that want to leverage a decade of expertise and avoid proprietary vendor lock-in. The OSGi Alliance have recognised this trend and are extending specification activities to include Cloud Computing and generic Modularity and Life-Cycle management.
But is that time now?


For some organizations the ‘Boiling Frog’ analogy is, unfortunately, apt. Such organisations will continue to endure increasing OPEX costs, avoiding the upfront expense and perceived risk of re-orientating their IT strategy to focus on optimizing maintainability, agility and resilience. For such organisations, Cloud and PaaS strategies will consist of virtual machine management and the same old application software stack. Such organisations will see little real business benefit.


Increased interest in DevOps tooling (e.g. Puppet, Chef) demonstrates that other organisations have started to look beyond virtual machine centric strategies. While such DevOps tools start to address the issue of dynamic deployment and configuration, this is still with respect to monolithic applications, and so in essence no different from the IBM Tivoli and HP management tools of the 1990s.


Meanwhile, organisations that view technology as an enabler rather than a cost are starting to look seriously at development and runtime modularity, in part as a way to realise continuous release and agile development. For Java centric organisations this is driving the adoption of OSGi. One simple metric I can provide: in the last 12 months Paremus has provided OSGi training to Government, Financial Services and Web companies in the US, Europe and Asia. In most of these organizations the relationship between OSGi, Agile Development and Cloud was a major interest.


Despite all our economic woes, is the time now? Quite possibly!

Packager – because OSGi > Java

If you are following the progress of Java 8, then you will be aware that Jigsaw may – yet again – fail to make the release train. There again, it might: ‘they’ seem just a little uncertain.

In stark contrast, since the appearance of JSR277 in 2005 (yes, 2005!), the OSGi Alliance position has remained consistent and coherent. This position was restated yet again this week here. For those interested, several excellent supporting background articles may be found here, here and, with Neil Bartlett’s usual humorous perspective, here.

Paremus’ view is quite simple. For the record…

Java modularity should be based upon OSGi. Failure to do so will fracture the Java community and market, and so damage the future of the Java language.

However, this isn’t the main focus of this post: Just an interesting aside. 😉



And now for something completely different…

Paremus have been dynamically deploying OSGi based applications into ‘Cloud’ environments for 7 years! Why would one use static virtual machine images when your applications can be automatically assembled from re-usable components and automatically configured with respect to their runtime topology and the specifics of the runtime environment?

Surprisingly, this message has taken a while to seed; but seed it has: Paremus is nothing if not persistent. So, 7 years on, the Paremus Service Fabric represents the state-of-the-art ‘Cloud’ platform for those organisations that require something a little bit special: an adaptive, self-managing solution for their next generation of low maintenance, modular, OSGi based Java or Scala composite applications.


However:
  • Some popular Java software projects remain monolithic and are unlikely to embrace OSGi anytime soon: e.g. Hadoop, ZooKeeper etc.
  • Many Service Fabric customers have a large portfolio of monolithic applications.
  • Some applications are not even Java / JVM based.

So, if you want Service Fabric capabilities but face one or more of the above challenges, what are we to do?

OSGi > Java

OSGi provides a set of Modularity, Life-cycle and Service capabilities that transcend any particular language.

The journey starts with the realisation that OSGi specifications encapsulate a number of remarkably useful language neutral concepts:

  1. Metadata for describing software artefacts, including flexible declaration of an artefact’s Capabilities and Requirements [environmental resources, Services and module dependencies], and a powerful semantic versioning scheme for module dependencies.
  2. Leveraging this metadata, powerful and extremely flexible dynamic dependency resolution capabilities.
  3. A consistent approach to life-cycle: installing, starting, configuring, stopping and uninstalling software artefacts.
  4. A flexible configuration service for configuring installed artefacts.
  5. A powerful Service model, including pluggable RPC implementations for remote services that enable the decoupling of the service interactions from the underlying implementation language. For example, the Paremus Remote Service Administration implementation provides an Avro distribution provider.
While usually implemented in the Java language, these capabilities need not be limited to Java artefacts!
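As a minimal sketch of the semantic versioning concept behind point 1 (plain Java, reduced to a single number; real OSGi versions carry major.minor.micro.qualifier segments, and a full resolver does far more):

```java
// Simplified sketch of OSGi-style version range matching, e.g. "[1.0,2.0)":
// a consumer accepts any provider version >= 1.0 and < 2.0, i.e. any
// backwards-compatible release, but rejects the breaking 2.0 release.
class VersionRange {
    final double low, high;       // simplified: real versions are not doubles
    final boolean highInclusive;

    VersionRange(double low, double high, boolean highInclusive) {
        this.low = low; this.high = high; this.highInclusive = highInclusive;
    }

    boolean matches(double version) {
        return version >= low && (highInclusive ? version <= high : version < high);
    }

    public static void main(String[] args) {
        VersionRange range = new VersionRange(1.0, 2.0, false); // "[1.0,2.0)"
        System.out.println(range.matches(1.5)); // true: compatible minor release
        System.out.println(range.matches(2.0)); // false: breaking major release
    }
}
```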

Introducing Packager

So, pulling these concepts together, Paremus created Packager. Packager allows traditional software artefacts to be installed, deployed and dynamically configured just like OSGi bundles.

Neil Ellis, our lead developer on the Packager project, will post a series of articles over the next few weeks that will:

  • Introduce Packager concepts with a series of examples.
  • Investigate scenarios in which Packager may be of interest in non OSGi environments.
  • Contrast Packager against some of the other software deployment approaches that are available today.

If you are pursuing a modular cloud platform strategy and see the value in a consistent approach to runtime deployment and management – from the smallest of bundles to the LARGEST OF APPLICATIONS – if you understand the pitfalls and complexities caused by reliance on static, bloated virtual machine images, and if you value open industry standards, then we think you will be interested in Packager.

So Stay Tuned … 😉

Reflections on GigaOM ‘Structure’ Event…

I’d like to start by thanking the GigaOM team once again for inviting me to participate in the panel session ‘The Gap Between Applications and Infrastructure’. It is the first time I’ve attended a GigaOM ‘Structure’ event, and it proved a worthwhile experience.

There were many topics, but the recurring theme focused upon the ongoing shift towards ‘Cloud Computing’:

  • Cloud ‘Business Models’: How ‘Agile’ organisations should off-load non-critical IT.
  • SaaS Use Cases: Again selling the concept of  ‘Agile’ SaaS services for all those ‘Agile’ organisations.
  • Cloud APIs: Which is most open? Which is closest to Amazon? The next great lock-in?
  • The role of the current dominant Cloud and Virtual Machine vendors and the rise of ‘Software defined Data Centres’.

All interesting topics. Yet I left the event further convinced that the IT industry continues to be too ‘fashion driven’. Too much evangelism, too little substance, a near criminal overuse of the word ‘Agile’. Beneath the sound-bites, an absence of vision from the dominant Cloud and Virtual Machine vendors.

However there were some important gems.

The analogy between data and mass was touched upon in one panel session: this is a fundamental, real-world constraint. Quite simply, the more data you amass, the more energy / cost is required to manage it and move it. Frequently information has a time-to-live and only has value within a specific context. Hence aggregating vast amounts of data to drive centralised decision making processes is not necessarily the greatest of ideas: worth considering before embarking on your company’s next ‘must have’ BigData project. For a good popular read which touches upon this area – try ‘Adapt‘.

So what does this imply for ‘BigData’ / ‘Cloud’ convergence?

  • If information can be extracted from data in flight, then do so! When possible use CEP rather than Hadoop!
  • It will frequently make sense to move processing to the data: not data to the processing.
  • Sometimes there is no alternative. A centralised approach which aggregates data from remote sources may be the only approach.
  • Analysis must be on a system-by-system basis. Data may flow from the Core to the Edge; or the data flow / interaction may oscillate between core and edge.

It seems inevitable that data locality will be a primary forcing factor driving the shape of next generation ‘Data Cloud‘ solutions.

A second panel session, ever so briefly, touched on the importance of system modularity. In an animated, amusing and good natured argument between three Cloud Infrastructure vendors, attempts were made to score points via a modularity argument: one of the solutions consisted of multiple independent sub-projects rather than a monolithic whole. Unfortunately that is as far as the argument went; the fundamental importance of modularity, at all structural levels, was not discussed. Meanwhile, the Java vendors who were branded ‘monolithic’ didn’t realise that the most powerful modularisation framework in existence – the OSGi framework – could have provided them, had they been using it, with a devastating response to the criticism.

‘times they are a-Changin’

Back-stage conversations were more interesting. There was an increasing awareness of an imminent inflection point in the industry. Environmental complexity is rapidly increasing as business systems evolve towards an increasingly intertwined ecosystem of services, resources and highly modular / maintainable components. It was widely understood that in this new world order, runtime dependency management would be an essential enabler.

The idea that evolvable, adaptive Cloud platforms will be highly modular and must have powerful runtime dependency management, so enabling:

  • Applications to be dynamically assembled as required from fine grained components.
  • In a manner influenced by the characteristics of the local host environment
  • With middleware services provisioned as required.
is not quite as alien as it was when Paremus first started work on this vision in 2005.

Service Fabric 1.8 release

As we’re about to release Service Fabric 1.8, the next few blogs will focus on some of the new capabilities introduced with 1.8, and we’ll also re-visit some of the underlying ideas which have always underpinned the Service Fabric.

Why modularity matters more than virtualization.

Ten years ago it all seemed so simple! Increase the utilization of existing compute resources by hosting multiple virtual machines per physical platform, so consolidating applications onto fewer physical machines. As the virtual machine ‘shields’ its hosted application from the underlying physical environment, this is achieved without changes to the application. As applications may now move runtime location without re-configuration, the idea of virtual machine based ‘Cloud Computing’ was inevitable.

However, there are downsides.

Virtual machine image sprawl is now a well-known phrase. If the virtual machine image is the unit of deployment, any software upgrade or configuration change, no matter how small, generates a new image. With a typical size of ~1 Gbyte (see table 2), this soon adds up! Large virtual environments rapidly consume expensive on-line and off-line data storage resources. This in turn has driven the use of de-duplication technologies, so increasing storage cost and / or operational complexity.

Once constructed, virtual machine images must be propagated, perhaps many times, across the network to the physical hosts. A small configuration change which results in a new virtual machine image, deployed to many nodes, can generate hundreds of Gbytes of network traffic.

When used as the unit of application deployment, virtualization increases operational complexity and increases the consumption of expensive physical network and storage resources: both of which are, ironically, probably more expensive than the compute resource whose utilization virtualization is attempting to optimize.

We’re not finished!

  • Some categories of application simply cannot be de-coupled from the physical environment. Network latency is NOT zero, network bandwidth is NOT infinite and the locality of data DOES matter.
  • Virtualization complicates and obscures runtime dependencies. If a physical node fails, which virtual machines were lost? More importantly, which services were lost, which business applications were operationally impacted? Companies are now building monitoring systems that attempt to unravel these questions: further operational band-aids!
  • Centralized VM management solutions introduce new and operationally significant points of failure.
  • As the operational complexity of virtual environments is higher than that of their physical predecessors, there is an increased likelihood of catastrophic cascading failure caused by simple human error.

Feeling comfortable with your virtualization strategy?



For all these reasons, the idea of re-balancing critical production loads by dynamically migrating virtual machine images is, I suggest, a popular marketing myth. Many analysts, software vendors, investors and end users continue to see virtualization as the ultimate silver bullet. They are, I believe, deluded.

The move to the ‘virtual enterprise’ has not been without significant cost. The move to the ‘virtual enterprise’ has not addressed fundamental IT issues. Nor will moving to public or private Cloud solutions based on virtualization.



And so the Story Evolves

Acknowledging these issues, a discernible trend has started in the Cloud Computing community. Increasingly the virtual machine image is rejected as the deployment artifact. Rather:

  • Virtual machines are used to partition physical resource.
  • Software is dynamically installed and configured.
  • In more sophisticated solutions, each resource target has a local agent which can act upon an installation command. This agent is able to:
    • Resolve runtime installation dependencies implied by the install command.
    • Download only the required software artifacts.
    • Install, configure and start required ‘services’.
  • Should subsequent re-configure or update commands be received, the agent will download only the changed software components and / or re-configure artifacts that are already cached locally.
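The agent behaviour described above can be sketched as a simple diff between the commanded artifact set and the local cache (hypothetical names, plain Java; a real agent would also resolve dependencies, configure and start services):

```java
import java.util.*;

// Hypothetical sketch of an install agent: given a commanded set of
// artifacts, download only what is not already cached locally.
class InstallAgent {
    private final Set<String> cache = new HashSet<>();

    // Returns the artifacts that actually had to be fetched.
    Set<String> install(Set<String> commanded) {
        Set<String> toFetch = new HashSet<>(commanded);
        toFetch.removeAll(cache);   // already-cached artifacts are skipped
        cache.addAll(toFetch);      // "download" and cache the remainder
        return toFetch;
    }

    public static void main(String[] args) {
        InstallAgent agent = new InstallAgent();
        agent.install(Set.of("svc-1.0", "lib-2.1"));  // first install: both fetched
        // A small update command: only the changed artifact crosses the network.
        System.out.println(agent.install(Set.of("svc-1.1", "lib-2.1"))); // [svc-1.1]
    }
}
```

Contrast this with the virtual machine image approach, where the same small update would re-ship the entire image.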

Sort of makes sense, doesn’t it!?

The Elephant in the Room

Dynamic deployment and configuration of software artifacts certainly makes more sense than pushing around virtual machine images. But have we actually addressed the fundamental issues that organisations face?

Not really.

As I’ve referenced on many occasions, Gartner research indicates that software maintenance dominates IT OPEX. In comparison, hardware costs are only ~10% of this OPEX figure.



“Our virtual cloud strategy sounds awesome: but what are the business benefits again??”


To put this into perspective; a large organisation’s annual IT OPEX may be ~$2 billion. Gartner’s research implies that, of this, $1.6 billion will be concerned with the management and maintenance of legacy applications. Indeed, one organization recently explained that each line of code changed in an application generated a downstream cost of >$1 million!

The issue isn’t resolved by virtualisation, nor by Cloud. Indeed, software vendors, IT end users, IT investors and IT industry analysts have spent the last decade trying to optimize an increasingly insignificant part of the OPEX equation, while at the same time ignoring the elephant in the room.



Modular Systems are Maintainable Systems

If one is to address application maintainability, then modularity is THE fundamental requirement.

Luckily for organizations that are predominantly Java based, help is at hand in the form of OSGi. OSGi specifications, and the corresponding OSGi implementations, provide the industry standards upon which an organisation can begin to modularise its portfolio of in-house Java applications, thereby containing the ongoing cost of application maintenance.

But what are the essential characteristics of a ‘modular Cloud runtime’: characteristics that will ensure a successful OSGi strategy? These may be simply deduced from the following principles:

  • The unit of maintenance and the unit of re-use should be the same as the unit of deployment. Hence the unit of deployment should be the ‘bundle’.
  • Modularity reduces application maintenance for developers. However, this must not be at the expense of increasing runtime complexity for operations. The unit of operational management should be the ‘business system’.

Aren’t these requirements inconsistent? No, not if the ‘business system’ is dynamically assembled from the required ‘bundles’ at runtime. Operations deploy, re-configure, scale and update ‘business systems’; the runtime internally maps these activities to the deployment and re-configuration of the required OSGi bundles.
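As a toy illustration of this mapping (hypothetical names and a hard-coded catalogue; a real runtime resolves the bundle set dynamically):

```java
import java.util.*;

// Hypothetical sketch: operations work at the 'business system' level,
// while the runtime maps each system to its unit of deployment: bundles.
class BusinessSystemRuntime {
    private final Map<String, Set<String>> catalogue = Map.of(
        "pricing", Set.of("pricing-api-1.2", "pricing-impl-1.2", "quotes-1.0"));

    // Operations say "deploy pricing"; the runtime deploys the bundles.
    Set<String> deploy(String systemName) {
        return catalogue.getOrDefault(systemName, Set.of());
    }

    public static void main(String[] args) {
        BusinessSystemRuntime runtime = new BusinessSystemRuntime();
        System.out.println(runtime.deploy("pricing")); // the three bundles above
    }
}
```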


In addition to these essential characteristics:

  • We would still like to leverage the resource partitioning capabilities of virtual machines; but the virtual machine image is no longer the unit of application deployment. As the runtime dynamically manages the mapping of services to virtual and physical resources, operations need no longer be concerned with this level of detail. From an operational perspective, it is sufficient to know that the ‘business system’ is functional and meeting its SLA.
  • Finally, it takes time to achieve a modular enterprise. It would be great if the runtime supported traditional software artifacts, including WARs, simple POJO deployments and even non-Java artifacts!

Are there any runtime solutions that have such characteristics? Yes, one: the Paremus Service Fabric. A modular Cloud runtime designed from the ground up using OSGi, for OSGi based ‘business systems’. The Service Fabric’s unique adaptive, agile and self-assembling runtime behaviors minimize operational management whilst increasing service robustness. And to get you started, the Service Fabric also supports non OSGi artefacts.

A final note: even Paremus occasionally bends to IT fashion :-/ Our imminent Service Fabric 1.8 release will support the deployment of virtual machine images; though if you are reading this blog, hopefully you will not be too interested in using that capability!

Dependencies – from Kernel to Cloud

The recent OSGi community event in London proved interesting from a couple of perspectives.

Paremus engineers are no strangers to the idea of OSGi / Cloud convergence. Having realised the potential of dynamic software assembly and resource abstraction in 2004, Paremus have had the opportunity to engineer a robust and architecturally coherent solution that delivers this vision.


Arthur C. Clarke’s third law: Any sufficiently advanced technology is indistinguishable from magic.

These concepts were presented by David Savage in his OSGi community talk ‘OSGi & Private Clouds’, which included a live demonstration of the Service Fabric assembling a distributed trading system across a group of dynamically discovered compute resources.

Behind the scenes the demonstration involved the use of the Paremus OBR resolver (Nimble) and the new Paremus RSA stack (configured to use SLP discovery and an asynchronous RMI distribution provider). Perhaps not magic, but very cool nonetheless 🙂

So I’d argue that Paremus have the OSGi Cloud piece of the puzzle; but what about “the Kernel”?

Well, one need only watch Jim Colson’s keynote, with its heavy referencing of the Apache Harmony JVM project (see Harmony), to see the potential for OSGi in modularising the JVM.

Whilst the demonstration, consisting of the manual customisation of a JVM, was interesting, to my mind the full potential is only realised when such customisation is dynamic and in response to business requirements. To achieve this one must manage not only OSGi transitive dependencies, but also service dependencies and, most importantly, environmental runtime dependencies. Luckily, Paremus have these capabilities with Nimble; so perhaps in due course we’ll take a closer look at Harmony.

But then, why stop at Java?

Nimble is not just an OSGi resolver but a general dependency resolver, with extensible dependency types and deployable types. Debian is a well defined packaging standard which defines dependencies, and installation from a Debian package repository is already standard practice. The Canonical ‘Ensemble‘ project is attempting exactly this: dynamic download of interdependent Debian artefacts and subsequent service provisioning.

So, are we witnessing “The Rise of the Stackless Stack” and the end of the industry’s infatuation with hordes of virtual machine images with static, pre-baked software and configurations? I think so.

That said, contrast if you will this “Stackless Stack” vision with Oracle’s big news:

“Oracle Exalogic Elastic Cloud is the world’s first and only integrated middleware machine—a combined hardware and software offering designed to revolutionize datacenter consolidation.”

Expensive big iron, crammed to the brim with unwanted middleware. The contrast really couldn’t be greater!

Boiling the Frog

One frequently hears complaints about OSGi:

  • It's too difficult!
  • What is the immediate business benefit?

The first is simple to address. No excuses – use Nimble.

The second complaint has, in my opinion, more to do with human nature than technology. All too frequently instant gratification is selected over long-term health. In a similar way, for many organisations, at each point in time, the operational pain isn't sufficient to warrant fundamental change in behaviour: it just slowly keeps getting worse.

Hence the “Boiling the Frog” analogy.

The venture capital community understand this only too well.

Sell them the Aspirin!

And at all costs ignore the existing Gordian knot inadvertently created from 20 years of IT quick fixes.

To construct Pieranski's knot, you fold a circular loop of rope and tie two multiple overhand knots in it. You then pass the end loops over the entangled domains. Then you shrink the rope until it is tight. With this structure, there is not enough rope to allow the manipulations necessary to unravel it.

Hence it was a sheer delight to listen to Eric Newcomer's (chief architect at Credit Suisse) presentation on his bank's considered approach to SOA and software modularisation. To summarise: architecture is important, and OSGi is important! No quick fixes, but an ongoing coordinated strategy intended to serve the business for the next decade.

I’m sure Credit Suisse will succeed, and I know Credit Suisse are not the only organisation starting down this path.

OSGi is a fantastic enabler, but it's not magic. It can be a bitter medicine; but with a coherent OSGi / Cloud based strategy and disciplined implementation, these technologies will transform your operational cost base and your business.

All in all, it's been a busy summer

Despite the challenges posed by the global economy, 2010 is proving to be an interesting year, with a number of progressive organisations preparing for some form of OSGi migration. What is common across these organisations? In Paremus' experience, it's not the markets they operate in, but rather the mind-sets of their in-house developers and systems architects.

This bodes well for OSGi adoption and the talented engineers that are driving it. Want to be worth your weight in platinum? Then understand the potential cost savings on offer to your organisation via a convergent private Cloud, OSGi and NoSQL strategy; effectively communicate this to your business; then deliver it!

Cloud is so much more than deploying virtual machine images.

Given these commercial indicators, it's perhaps no surprise that 'cloud' rhetoric is also evolving: away from the mindless deployment of static virtual machine images, towards the dynamic assembly of distributed services. Indeed, this message, one which Paremus have been attempting to convey since our earliest Service Fabric endeavours, received recent backing from none other than VMware's CEO Paul Maritz and CTO and Senior Vice President of Research and Development Steve Herrod. In a similar vein, Canonical's Ensemble project recently announced that it foregoes virtual image distribution, instead managing the dependencies, deployment and provisioning of applications on Ubuntu clouds.

Hence, in hindsight, the OSGi Cloud workshop hosted at EclipseCon earlier this year, and the follow-on Cloud RFP-133 activities, seem well timed. Whilst this is still a 'work in progress', the end result will hopefully be of great relevance to the many organisations planning their first generation of private Cloud.

Back home, it's been a very busy summer for Paremus!

We recently released version 1.6 of the Service Fabric; an important internal milestone, as several Nimble capabilities are now fully utilised. One new feature, dynamic update and roll-back of running Systems, is nicely demonstrated in the latest fractal example.

Demonstration of dynamic System update and roll-back capabilities introduced in the Service Fabric 1.6 release

Meanwhile, if the OSGi Alliance Enterprise 4.2 Remote Service Admin (RSA) specification caught your attention or imagination, you may be interested in the new Paremus RSA implementation, which is nearing completion. Our RSA stack will support fully pluggable discovery, topology manager and data provider components, and Paremus will be supporting some interesting options in each of these areas. Alongside this, the team have also been hard at work on release 1.0 of the SIGIL Eclipse IDE plug-in and on a suite of enhanced Nimble capabilities.
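For readers new to the specification: under the Enterprise 4.2 Remote Services chapters, export is driven by standard service properties rather than by provider-specific APIs. A service registered with properties along these lines becomes a candidate for export by whichever distribution provider is present (the configuration type value below is a placeholder; each provider defines its own):

```
service.exported.interfaces = *
service.exported.configs    = <provider-specific configuration type>
```

The pluggable discovery and topology manager components then decide where, and how, such exported endpoints are advertised across the fabric.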

Further details on each of these will be posted over the next few weeks, so stay tuned.

Finally, those of you who are able to attend the OSGi Community Event in London next week may be interested in attending David Savage's 'OSGi & Private Clouds' presentation. A number of the Paremus team will be at the presentation, so feel free to stop by and say hello!

Zen and the Art of Cloud Computing

Change is inevitable. Change is constant.

Benjamin Disraeli

I used the enclosed “Cloud Computing” slide set to summarize the Paremus position with respect to Cloud at the OSGi Cloud workshop (EclipseCon 2010 – organised by Peter Kriens).

The slides attempted to communicate the following fundamentals:

  • The Inevitability of Change
  • The strong fundamental relationship between Agility and Robustness
  • The need to Simplify through Abstraction

The implications being that:

  • Clouds need to be reactive runtimes; able to dynamically assemble and maintain themselves, as well as the composite business services which run upon them.
  • Modularity through OSGi is the key enabler.

To explore these concepts further I will be publishing a series of short blog articles using the 'Zen and the Art of Cloud Computing' theme. Each article will be concerned with a specific idea, and how it is realised within the context of the Paremus Service Fabric.

Stay tuned….

Happy New Year

2009 was an interesting year for Paremus. Despite the bitter economic climate, OSGi finally captured the imagination of the software industry; with this, in conjunction with the intensifying 'Cloud Computing' drum-beat, came an increased appreciation of the capabilities and benefits brought by the Paremus Service Fabric.

In the closing months of 2009, Paremus released Nimble as a stand-alone product. A state-of-the-art dependency resolution engine, Nimble and its mantra, 'Making Modularity Manageable', struck a chord with many of you, and I'd like to thank you all for the extremely positive reception Nimble received.

For those interested in private, public and hybrid 'Clouds', the year also closed with an interesting series of Christmas posts (or lectures 🙂 ) by Chris Swan charting his successful attempt to create a hybrid cloud based 'platform as a service' by combining the Paremus Service Fabric with Amazon EC2.

So what can we expect in 2010? From a software industry perspective, be prepared for some fundamental shifts as the industry really starts to grapple with modularisation, cloud computing and, when the two are used in combination, what they really mean! As usual, Kirk Knoernschild captures the moment in his latest post 'A New Year's Declaration'.

Whilst not wanting to give the game away, I can say that Paremus will be making a number of interesting announcements over the coming months. Those of you who are interested in OSGi should keep a close eye on Nimble via the Nimble blog. Meanwhile, those interested in low latency computing, private Cloud Computing and innovative implementations of OSGi EEG standards work should keep a watching brief on the Paremus Service Fabric.

Enough said.

Announced in 2005, the Newton project was, by several years, the industry's first distributed SCA / OSGi runtime platform! Since then, Newton has indirectly influenced industry standards and many of our competitors' roadmaps. However, since early 2008, our commercial Service Fabric has rapidly evolved beyond the initial concepts encapsulated within Newton. Given this, we start 2010 by announcing that Newton will be archived and the CodeCauldron open source community closed.

In due course, a number of new OSS projects will be announced on the new Paremus community site. The CodeCauldron OSS experience and these future initiatives will be the subject of subsequent posts.

In the meantime, I wish Paremus customers, partners and friends all the very best for 2010.