“OSGi Enabled” – This is not a statement, this is an ongoing commitment to you…

A Product or Service bearing the “OSGi Enabled” ingredient mark signifies an ongoing commitment: a commitment to maintainability, flexibility, evolvability; a commitment to longevity.


The US research agency DARPA states that the longevity of modern software systems is orders of magnitude less than that of other human-built artefacts. In this, DARPA rightly recognises the need for software systems to be adaptive to unforeseen changes in their runtime environments, and via the BRASS initiative hopes to encourage the IT industry to think about this bigger picture.

If you accept DARPA’s concerns, then the following conclusion is logically inescapable.

Adaptive, Evolvable Systems need to be Modular Systems. By modular we mean internal structural modularity: an opaque software BLOB deployed as a static Virtual Machine or Container Image is not modular, and does not qualify. Also, to allow for interoperability and interchangeability, the approach to modularity must be based on open industry standards – standards that themselves have demonstrable longevity.

DARPA – the answer to BRASS is OSGi™.

Since its introduction in the late 1990’s, OSGi – “the modularity system for Java” – has slowly diffused into just about every corner of the IT industry. Originally created to address the resource and maintainability requirements of the Smart Home and IoT (circa 2000), OSGi has evolved and adapted, and is today, in 2016, still uniquely positioned to address the issues and challenges that exist in the IoT, M2M, Smart Home, Smart Cities and Smart Agriculture markets.

Yes; an industry standard conceived in the late 1990’s is today directly addressing the commercial issues slowly being uncovered by the latest wave of IoT, including but not limited to: analytics and behaviour everywhere (Edge, Fog, Mist, whatever); the management of multitudes of heterogeneous and continuously changing Edge devices; secure software deployment to those devices; deploying only what is necessary; and knowing the pedigree / provenance of each atomic unit of software deployed.

This week the OSGi Alliance launched the OSGi™ Enabled ingredient mark. Probably long overdue, but in a decade’s time this may well be one of the most recognised ingredient marks powering the Adaptive / Pervasive Internet of Things. It will need to be, if DARPA’s challenge is to be addressed.

So what does “OSGi Enabled” signify?

The Product or Service bearing the “OSGi Enabled” ingredient mark has been engineered with maintainability, adaptability and evolvability as design priorities.

The “OSGi Enabled” mark also indicates that the solution provider realises both that open industry standards are required, and that they do not just spontaneously appear. Effort and resources are required to create such standards; and the vendor, by supporting the OSGi Alliance, is actively playing their part in that process.

Paremus have been an OSGi Alliance Strategic Member for many years, and we have contributed in many areas across the organisation: President (2011–2013), Treasurer (2014–current), VP Marketing (2015–current) and IoT EG co-Chair (2015–current). Paremus have also driven or contributed to many OSGi specifications.

The Paremus Service Fabric – a highly modular orchestration platform for Enterprise or IoT environments – has been built from the ground up leveraging OSGi. In addition, Paremus extend OSGi principles beyond Java with Packager (see https://docs.paremus.com/display/SF113) – supporting Microservices or traditional software components written in any language.

Hence I’m delighted that Paremus Service Fabric is eligible to use the “OSGi Enabled” ingredient mark and all that this signifies.

Dr Richard Nicholson. Paremus founder & CEO, OSGi Board Member.

The Paremus Service Fabric has no Opinions…

The emergence of the ‘Opinionated’ Microservices Platform continues apace.

We are told that business systems should be REST/Microservice based. We are told that containers should be used everywhere and that container images are the deployment artefact du jour: I mean, let’s face it, Virtual Machine Images are so last year! And of course Reactive is the new cool.

A depressing state of affairs. These messages are at best half-truths. At worst – the IT industry’s ongoing equivalent of Snake Oil.

So …

  • Microservices: As long as they are used appropriately – no complaints. However, REST based Microservices are a popular corner case, one which should be addressable by a more general and coherent approach to system modularity!
  • Reactive Systems: These concepts are important. However, a system cannot be truly reactive if it is not modular in composition. “Containers + REST” != Reactive Systems.
  • Containers: Nothing new here – they have been around since mainframe LPARs and Solaris Zones. Great for isolation and partitioning resources! However, the deployment of applications via inter-related sets of opaque container images is just about the worst idea imaginable. Today’s issues with operational complexity and VMI sprawl will soon be magnified a thousand-fold. And all caused by the current “containers are the answer” dogma.

Paremus has been providing a reactive distributed OSGi platform since 2006 – before Amazon, and before SpringSource acquired Cloud Foundry. Yet today, in 2016, the Paremus Service Fabric is still unique. Well, of course, the name “Service Fabric” is no longer unique – these days Microsoft also have a Service Fabric. However, the Paremus Service Fabric is still unique in most other respects.

Our Fabric is agnostic. No API lock-in, no application architectural restrictions. The Service Fabric is also middleware neutral. Whether Java / OSGi or polyglot / Docker – your application architecture is your concern. Whatever your ‘Opinion’ today, or your ‘Opinion’ tomorrow, the Service Fabric will deploy, configure and maintain your business systems.

Paremus Service Fabric’s raison d’être:

  • To be as operationally simple as possible: To be manageable by mere mortals – not some new genetic breed of DevOps.
  • To be as robust as possible: No smoke and mirrors, no hidden points of failure. And because high availability requires fast recovery: install, repair and recreate operational tasks must be as simple as possible.
  • To be as flexible as possible: Because there is no one single “ANSWER”. Business requirements change and so optionality is always a requirement. The Fabric can orchestrate REST based Microservices – but also has sophisticated asynchronous, messaging, eventing and streaming options.
  • To be based on solid industry standards and not some hip open source project that is currently fashionable.

The Paremus Service Fabric is built from the ground up leveraging OSGi / Java. We engineered the Fabric with Java / OSGi – not because these technologies are ‘cool’ – but because we require Longevity and Agility from our platform.

  • To achieve Longevity & Agility the Service Fabric must be Evolvable (DARPA BRASS).
  • To be evolvable the Service Fabric must be modular at all structural levels.

So the Paremus Service Fabric is actually highly Opinionated, my engineers are even more Opinionated, but only on the inside. To the outside world the Paremus Service Fabric and Paremus are Agnostic – our role is to simply serve.

This philosophy made sense in 2006 and I believe it still makes sense today in 2016. Indeed – the Paremus Service Fabric remains the only standards based modular evolvable runtime platform.

And yes:

  • The Service Fabric supports ‘Microservices’.
  • The Service Fabric is highly Reactive.
  • The Service Fabric can orchestrate sets of Microservices deployed to Docker containers.

But perhaps most importantly – in addition to supporting today’s fashions – the Paremus Service Fabric will remain relevant to tomorrow’s business challenges.

2016 and the OSGi Alliance?

A decade ago Paremus fused Java’s dynamic service framework (Jini) with OSGi to create a modular, distributed Microservices platform (known as Infiniflow), thereby creating the ancestor of the current Paremus Service Fabric. Seeing the importance of OSGi (strong modularity / isolation, dynamic dependency resolution, semantic versioning, a nascent but potentially powerful service architecture – all defined by open industry standards), Paremus joined the OSGi Alliance as an ‘Adopter Associate’ – a small step, a minimal commitment.

As Service Fabric concepts evolved, OSGi moved to centre stage. Hence in September 2009 I took the decision to upgrade Paremus to full Alliance membership – the equivalent of today’s OSGi Alliance Strategic Member. Paremus also decided to stand for election to the OSGi Alliance Board.

This was then, as today, a significant commitment for a small UK software company. And each year this decision is re-assessed.

Do Paremus stay as Strategic Members, perhaps downgrade to a lower level, or even follow the example of some in the software industry and consume the benefits of OSGi without being an OSGi Alliance member?


Of course, the financial commitment is only one consideration. If participating in an OSGi Alliance Expert Group (EG), your engineers will want to physically attend at least some of the EG meetings each year. As a member of the OSGi Alliance Board you will attend one or more face-to-face Board meetings each year.

There is also the effort required to actually move an initiative forwards. The OSGi Alliance is not just a talking shop: one or more companies need to contribute intellectual property and commit engineering resource so that a new specification may first be created, and life then breathed into it via a corresponding reference implementation and conformance tests.

Looking back at 2015, Paremus contributed to the OSGi Alliance in the following ways…

  1. We organised the OSGi Developer Certification events – see https://www.osgi.org/osgi-certification/developer-certification/.
  2. Building upon the success of the Asynchronous RPC specification in 2014 (see – asynchronous-osgi-promises-for-the-masses-osgi-devcon-2014 & osgi-promises), which in turn triggered the Promises specification led by our IBM colleagues, Paremus started follow-on specification work in the areas of Push Streams (Reactive Java – see asynchronous-event-streams) and Distributed Eventing – see rfc-0214-DistributedEventing.pdf.
  3. We contributed to the JAX-RS specification work with our Liferay and Adobe colleagues – see rfc-217-JAX-RS-Services.pdf.
  4. Paremus were also heavily involved in the OSGi Alliance’s new IoT efforts, including: the initial workshops (Europe & US), the IoT demonstrators at the European OSGi Community Events (2014 & 2015), and chairing the newly formed IoT EG.
  5. Finally, Paremus continued to man the position of OSGi Treasurer and contributed to the OSGi Alliance’s general Marketing activities.

Then there is bndtools.

A high quality development tool chain is critically important to the ongoing success of OSGi. For this reason Paremus continued to invest throughout 2015 in bndtools, with Neil Bartlett (bndtools project founder), Tim Ward and Gustavo Morozowski delivering improved Maven support and a new template system. At this point I should also mention the sterling efforts of Peter Kriens (OSGi Alliance) & BJ Hargrave (IBM) in enabling bndtools’ support for the OSGi Alliance’s new enRoute initiative.

But why bother? Isn’t OSGi a niche play – one of a number of alternative modularity approaches for Java-only environments?

Well No.

OSGi is perhaps the most valuable treasure to have emerged from the entire Java ecosystem. Modularity and dynamic dependency management are fundamental to driving down maintenance costs, removing technical debt and increasing business agility (see – Design Rules, Volume 1), and OSGi squarely addresses these concerns.

But what about the threatened arrival (sometime during this millennium) of Jigsaw? Well, Paremus’ analysis of Jigsaw can be found in Neil Bartlett’s post – jigsaw-is-a-shibboleth. To summarise Neil’s conclusions: JVM modularity is required and Jigsaw addresses this requirement; Oracle should get on and deliver it. However, Jigsaw should not attempt to compete with OSGi at the application layer; here Jigsaw is simply not fit for purpose.

But ultimately who cares? In this age of polyglot systems, Java is just one of many languages! Surely Paremus’ efforts would be better spent chasing the current industry fashions: i.e. Docker and REST based Microservices?

True, the world isn’t just Java, but Enterprises are predominantly Java centric. Also, many OSGi modularity concepts are quite generic in nature…

Leveraging OSGi principles, the Paremus Service Fabric does support the deployment of native, non-Java artefacts to Containers. However, our perspective is different from others in the industry.

‘Cloud Platforms’ tend to fixate on resource optimisation. For Cloud Service providers this is understandable, as this is where profits are created; i.e. via the simple arbitrage opportunity on compute resources. However, this mindset has skewed the software industry’s perception of the issues faced by large organisations. Hence, unlike potential Platform competitors (e.g. Docker, Mesosphere, CloudFoundry/Pivotal, Google and others), Paremus does not consider the ability to maximally pack static opaque software artefacts into an unchanging data centre environment as THE fundamental issue. For large organisations this is only a line item in a much larger wish list.

Organisations require robust, secure, low-maintenance platforms and hosted applications that are simple to install, and simple to maintain and adapt. The platform must allow for architectural flexibility and business agility, while at the same time enforcing governance, re-use and scalability. The ideal platform solution would allow business units the freedom and flexibility to merge into, or detach away from, central IT services as changing business imperatives dictate. Within this spectrum of concerns, optimising resource utilisation is desirable – but much, much less important than maximising operational simplicity, platform robustness and enforcing effective corporate governance.

All of these concerns ultimately reduce to the problem of the operational management of an increasingly complex set of dynamic runtime dependencies; and OSGi remains the only industry standard that effectively addresses this issue. In contrast, ‘Containers’ do not address the issue – indeed, Container Images effectively sidestep the problem. A neat conjuring trick – but a trick nonetheless. On a positive note, the current Container trend is an incremental step forwards from what preceded it! Namely, the extremely bad idea of deploying ‘application & operating system’ combinations as shrink-wrapped opaque virtual machine images (VMIs).

So today, at the start of 2016, OSGi seems as fundamentally important to Paremus as when we initially joined the OSGi Alliance in 2006. Paremus will be renewing our OSGi Alliance Strategic Membership this year.

Not yet an OSGi Alliance member? Why not join the OSGi Alliance this year and help us deliver the modular & reactive foundations necessary for the next generation of federated Cloud & IoT!

Wishing you both Peace & Enlightenment in 2016


Introducing the Paremus Service Fabric

This post is the first in a series of videos that will introduce and explore the capabilities of the Paremus Service Fabric: a distributed, highly modular, self-managing OSGi based platform.

This first episode demonstrates the creation of a Service Fabric in the Digital Ocean public Cloud environment. Having created the Fabric, we then review its management capabilities and deploy a simple example of a modular, microservice based application.

This series will go on to explore various unique aspects of the Paremus Service Fabric, including the Fabric’s orchestration, management, adaption and recovery behaviours. The Fabric’s support for highly modular OSGi based applications, as well as for more traditional software artefacts via image deployment to Containers, will also be explored.

This and subsequent episodes are also available on the Paremus website.

IoT – Welcome to the real Cloud! (a.k.a. Fog Computing?)

Today’s popular Cloud Computing offerings have resulted from the following simple observation: an application (and any associated data), if not coupled to the customer’s local environment, may be relocated to an alternative remote environment. The tool used to achieve this is the Virtual Machine Image (VMI). An application expressed as an opaque VMI artefact, or more recently a Container Image, can be easily deployed to any local or remote compute resource. If the application is a REST/HTTP based Service, then whatever its location, it is simply accessed over the Internet.

In essence, Cloud Computing is a more efficient variant of traditional IT outsourcing. To potential customers, Cloud is attractive because:

  1. Applications can be provisioned in minutes rather than perhaps the months typically required for an established organisation with traditional internal processes.
  2. The cost per unit of virtual resource is low, through economies of scale; low relative to that usually achievable by the customer using their own datacentre resources.

Cloud Computing vendors then sweat these physical assets to generate a profit: i.e. via a combination of Virtual Machine and Container technologies, applications are over-provisioned onto the underlying physical hardware to drive up hardware utilisation.

These Cloud economics, and the related engineering challenges, have in-turn shaped – or perhaps warped – the industry’s perception of distributed systems.

  1. To cost-effectively scale, the focus has been on building large homogeneous data centre environments. Centralised management services are then used to orchestrate the deployment and configuration of VMIs / Container Images.
  2. As maximising resource utilisation is paramount, orchestration solutions focus on achieving the highest possible utilisation across this homogeneous infrastructure.
  3. As the unit of deployment is the Virtual Machine Image, or more recently the Container Image, applications are treated as opaque, immutable software artefacts. Hence each application must be updated as an atomic unit.
  4. Finally, to avoid changes to the hosted applications, Cloud environments use complex virtualisation and network overlays to isolate each hosted application from the realities of the underlying physical environment.

This last point is worth further consideration. While it avoids changes to hosted applications, infrastructure virtualisation drives significant operational complexity into the underlying ‘platform’ layers. Hosted composite applications also become rigidly coupled to these underlying virtualisation layers.

Counter to the marketing, operational complexity is always accompanied by increased operational risk, and rigid coupling always increases the likelihood of Black Swan events (i.e. catastrophic failure).

The Emergence of Big Data 

What about Big Data?

Processing data at scale has co-evolved with these Cloud Computing environments.

Web 2.0 companies’ business models are centred on offering services (physical or virtual) to their populations of users (or websites), and then harvesting and mining the data provided by those customers to drive targeted advertising revenues. Organisations like Google, Yahoo, Facebook, Twitter and LinkedIn generate large amounts of aggregated data which needs to be processed.

In these business models, user or website data is continually aggregated from the passive ‘Edge’ (the user via their phone, tablet or computer) into the Services hosted in the Cloud Core.

It is the processing / mining of this aggregated data that kick-started the “Big Data” gold rush, and the plethora of enabling technologies to process large amounts of data as batch (Hadoop), mini-batch (Spark), or directly from the aggregated data stream (stream processing).

IoT is different!

The Internet of Things (IoT) is actually the inverse of the patterns just described. In many IoT scenarios large amounts of data are generated at the edge, and this data is both context-specific and temporal in nature:

  1. The data is meaningless when taken out of the context of the edge environment that generated it.
  2. The value of the data diminishes significantly with time.
  3. Moving the data from the point of generation may be (extremely) costly in terms of WAN network bandwidth.

One might conclude that in such situations data should be processed in-situ.

Also consider that many IoT environments need to operate autonomously, accessing remote third-party services only when appropriate, and when those services are available.

Consider the following examples


Supervisory Control and Data Acquisition (SCADA) systems are used for controlling, monitoring and analysing industrial processes. A SCADA system may monitor and control an entire site, or complexes of systems spread out over large areas (e.g. an industrial plant). Most control actions are performed automatically by Remote Terminal Units (RTUs) or by Programmable Logic Controllers (PLCs). Host control functions are then usually restricted to basic overriding or supervisory-level intervention.

For example, a PLC may control the flow of cooling water through part of an industrial process, but the SCADA system may allow operators to change the set points for the flow, and enable alarm conditions, such as loss of flow and high temperature, to be displayed and recorded. The feedback control loop passes through the RTU or PLC, while the SCADA system monitors the overall performance of the loop.

Clearly the near real-time, closed-loop behaviour of the SCADA system is critically important – and off-loading supervisory control behaviours to a remote third-party Cloud environment is an option few in that industry would consider. Machine learning behaviours need to understand the appropriate context / domain – and be local to the plant being controlled.

Security is also paramount as malicious commands issued into the environment could cause local catastrophes.

Privacy in Healthcare

Privacy, Security and Commercial competition concerns mean that many organisations will only allow a small subset of their data, appropriately filtered / anonymised, to be aggregated and ‘mined’ by a third party.

For example, in the Health Care industry local processing (of body-scan images or diagnostic tests) will be conducted in the patient’s hospital due to: the volumes of data involved, the need to fully interact with the patient, and patient confidentiality. However, an anonymised subset of the data might be made available for population-based research studies.

In-Service Engine Diagnostics & Performance

Aircraft engine performance and reliability metrics for each engine on each aircraft need to be processed and immediately available to the ground staff responsible for servicing the aircraft. Ground staff might be interested in an analysis of the current noise signature from the engines, and a comparison of this against each engine’s historical signature, and against the signature for that class of engine from the manufacturer.

These on-site diagnostic processes need to be executed irrespective of whether an Internet or Cloud Service provider is available. The results from this local analysis may then be passed upstream to parties interested in these metrics: e.g.

  • The Engine Manufacturer
  • The Aircraft Manufacturer
  • The Carrier
  • The Aviation Authorities in the jurisdictions of the aircraft’s flight.

For each of these parties, the data may need to be ‘enriched’ with additional contextual information – e.g. current location, weather conditions, and also filtered so that only the appropriate subset of information is sent to each consumer.

The upstream consumers might in turn aggregate and process many such data streams for their own analysis. The engine manufacturer may cross-correlate the engine noise signature against all other engine signatures received for this engine type, looking for outlier behaviours (i.e. results that look peculiar) and perhaps changes in signature as the components in the engines age.

Finally, the results of this analysis may in turn be passed back to the ground staff at each airport, for use the next time the engines are locally diagnosed.

Home Automation

Home Automation systems must continue to control the home heating irrespective of the availability of external Services. It is local machine learning behaviours that will analyse trends and trigger exception behaviours if the ambient temperature exceeds thresholds.

It is the local system that must attempt to contact the local Emergency services via whatever communication channels are available at that point. The idea that such interactions must always be routed through a single Cloud provider’s data-centre some hundreds or thousands of miles away is not too appealing when the home is on fire. More serious scenarios can be anticipated in the medical device field or industrial control systems.

IoT & the Dawn of Federated Clouds

In each of the previous examples the use of an edge ‘slaved’ to a centralised public Cloud Computing offering seems inappropriate. Instead it is suggested that highly distributed Federated Cloud solutions are required.

Successful IoT solutions will not be shaped by the same factors that have shaped public Cloud Computing offerings: i.e. the economies and efficiencies of the highly centralised compute farms.

Rather, IoT solutions will be shaped by the following inter-related concerns:

  1. Locality, in-situ Data Processing, Filtering (privacy) & Enrichment (context)
  2. Simplicity, Scalability and Evolvability – the challenges related to a massively pervasive, heterogeneous ecosystem.

These factors will drive IoT solutions along a technology trajectory quite different to that travelled by centralised Cloud Computing solutions.


In an IoT environment, data generated by the Edge will be processed at, or near, the Edge. Processed data may then be consumed by other entities that may or may not lie within the same locality and/or the same trust domain.



As the bulk of IoT processing occurs at the Edge:

  1. The software artefacts responsible for that data processing need to be deployed to the edge.
  2. Due to IoT scale, these software artefacts must be truly modular in nature – allowing only the changed components to be propagated to the appropriate edge targets.
  3. For governance and security, the pedigree of each software component within each runtime environment must be known, and the sources of these components explicitly controlled.
  4. The Edge must filter, and enrich with context, locally acquired data before the resultant derivative data flows to other edge environments or to upstream aggregation based Services.
  5. Data privacy concerns will mean that only filtered, often anonymised, processed subsets of data will be directed to upstream interested parties.

While Big Data services will continue to exist in IoT environments, these services will themselves be more distributed, running in many locations, and consuming enriched, filtered, anonymised data from a massively federated IoT Cloud Edge.

Pervasive IoT will not mirror the current generation of homogeneous, centrally managed offerings from public Cloud Computing vendors. Rather than being an exception to the current trend of centralised, monolithic Cloud Computing solutions, it is suggested that IoT will be fundamental in driving the next generation of ‘true’ Cloud: heterogeneous, adaptive and federated in nature.

Simplicity, Scalability & Evolvability!

“Modern-day software systems, even those that presumably function correctly, have a useful and effective shelf life orders of magnitude less than other engineering artifacts. While an application’s lifetime typically cannot be predicted with any degree of accuracy, it is likely to be strongly inversely correlated with the rate and magnitude of change of the ecosystem in which it executes.”

DARPA BRASS initiative 2015

Pervasive IoT will be diverse and massively federated. Heterogeneous in nature, the individual elements of IoT will adapt and evolve at different rates in an un-coordinated fashion. IoT will mirror the long-lived and incrementally evolving hierarchical network structures built by the Telecoms and Internet Service Providers!

Federated IoT environments will require similar characteristics to these Telecom networks:

  1. On-Premise IoT Edge Clouds will need to be as near ‘Lights Out’ as possible: simple to install and simple to maintain by field engineers.
  2. The elements of the Platform, and the applications the Platform hosts, will need to be self-configuring Field Replaceable Units (FRUs).
  3. Software deployments and updates must be as simple and error-free as possible. Back-out and re-start of applications must be intuitive and rapid.
  4. It must be simple to define secure relationships between the Services across the federated IoT environment.


Pervasive IoT solutions will need longevity. Longevity requires evolvability. Evolvability mandates Modularity.

The Paremus Service Fabric & IoT

Pervasive IoT must be based on appropriate open Industry Standards. To address the Evolvability challenges described by DARPA’s BRASS initiative, these industry standards must define the foundations for dynamic modular systems; they must:

  1. Enforce modularity boundaries.
  2. Allow the software modules to be self-describing.
  3. Expose the dependencies between modules, and between modules and their runtime environment, in such a way that automated assembly and re-assembly by the runtime is possible.

OSGi™, the mature open Industry Standard for Java modularity, provides exactly this!
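
By way of a concrete sketch, this self-describing metadata lives in each OSGi bundle’s manifest. The bundle and package names below are purely illustrative, but the headers themselves are standard OSGi:

    Bundle-SymbolicName: com.example.edge.analytics
    Bundle-Version: 2.0.1
    Export-Package: com.example.edge.analytics.api;version="2.0.1"
    Import-Package: org.osgi.service.event;version="[1.3,2.0)"
    Require-Capability: osgi.ee;filter:="(&(osgi.ee=JavaSE)(version>=1.8))"

Given such metadata, a resolver can automatically assemble – and later re-assemble – a consistent set of modules for a given target environment; nothing equivalent exists inside an opaque VMI or Container Image.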

But what about the suitability of Java & OSGi? Let’s remember that Java was actually designed in the mid 1990’s for embedded set-top boxes. OSGi was designed in the late 1990’s to bring modularity to these lightweight embedded JVM environments. Both technologies have been waiting ~20 years for the market to catch up!

Today in 2015, thanks to the ongoing efforts of the OSGi Alliance, OSGi provides the standards based foundations for powerful modular, reactive, evolvable, Microservices based solutions. OSGi, used in conjunction with the Java JVM modularity due to be introduced in the forthcoming Java 9 release, will act as a powerful accelerant / catalyst driving the adoption of pervasive, evolvable IoT solutions.

However, while providing the necessary open Industry Standards based foundations, OSGi and Java are not in themselves a solution. A runtime platform is required that can dynamically assemble modular OSGi/Java applications at the scales required. To be evolvable, scalable and maintainable, this platform must itself fully leverage OSGi.

Enter the Paremus Service Fabric…

The Paremus Service Fabric is a coherent and highly modular platform, with the following runtime attributes:

  • Simplicity (Platform) – Whether Cloud Core or IoT Edge, a Service Fabric can be installed with a single command. All compute resources may be treated as Field Replaceable Units (FRUs).
  • Simplicity (Application) – Composite applications are installed with a single command. These applications may then be deployed with a single command – to an embedded controller, or to thousands of compute nodes. All application dependencies are automatically managed.
  • Scalable – A Service Fabric may run on a single Raspberry Pi or across thousands of data-centre compute resources. Resources available to a Fabric may be rapidly and easily expanded or contracted as required.
  • Federated – A Fabric may be aligned with a single application or factory process, or support many local applications / processes; i.e. a local on-Premise Cloud.
  • Decoupled – When a System is installed, all software components are automatically cached on-Fabric from authorised remote repositories.
  • Reactive – The Service Fabric implements the latest OSGi specifications for asynchronous and stream-based networking, including sophisticated back-pressure and circuit-breaker behaviours. From its inception in 2005, the Service Fabric has always supported the dynamic scaling of applications via pluggable, application-specific ‘Replication Handlers’.
  • Evolvable – Most importantly, applications may be rapidly upgraded or rolled back within each Fabric – with only the modules that require upgrading being replaced within the runtime.

Unlike OSGi and Java, the Paremus Service Fabric was perhaps only 10 years too early in anticipating the requirements of pervasive IoT.

The Paremus Service Fabric will be providing the Cloud runtime for this year’s OSGi Community IoT Lab – see http://enroute.osgi.org/book/650-trains.html. If you are interested in how OSGi can enable the next generation of adaptive, evolvable, federated Internet of Things – drop by and talk to the Paremus team and our OSGi Alliance IoT Ecosystem partners!

Microservices, Platforms & OSGi?

The concept of a ‘Service’ is hardly new. In the late 1990’s Service Oriented Architecture enabled large monolithic business systems to be decomposed into a number of smaller loosely coupled business components.

Modularity all the way Down

The purpose of any ‘Service’ strategy should be to break large monolithic entities into groups of smaller interacting components; i.e. modular systems. The interaction between the components in a modular system is defined by some form of ‘Contract’, ‘Service Agreement’ or ‘Promise’: the nature of which dictates the interaction model between the components; i.e. the ‘architecture’.

Relative to their monolithic counterparts, well designed modular systems are by their nature significantly simpler to change and maintain. Benefits include:

  • Increased Agility – A subset of the components used in the composite system may be rapidly changed in order to meet new, previously unforeseen business requirements or opportunities.
  • Reduced Maintenance Costs – As long as the contracts between components remain unchanged, the internal implementation of each component can be independently refactored and maintained. The ability to cost effectively maintain the composite system avoids the accrual of technical debt.

Microservices simply continue this modularity trend: i.e. the process of decomposition by breaking business components into a number of finer grained functional components.

The justification is again the same:

  1. To build more scalable, robust and maintainable systems.
  2. To simplify development by assembling composite systems from a number of small, single-function software components: each simpler to develop in-house or, where appropriate, to source from third parties.

However the logic that argues that business services should be composed of business components, and that business components should be composed of simpler single functional microservices, also applies to the internal implementation of EACH microservice.

If a microservice is to be maintainable, the internal implementation must be modular.

Microservices: It’s not an Architecture

Modularity concepts are fundamental and will underpin any successful IT strategy. Yet modularity is frequently misunderstood.

One common mistake is to confuse general modularity principles with architectural approaches (currently fashionable or otherwise), issues encountered with vendor implementations, or ill-conceived industry standards. As explained by Kirk Knoernschild, structural modularity and architectural patterns are actually orthogonal concerns.

In the late 1990’s Service Oriented Architecture enabled large monolithic business systems to be decomposed into a number of smaller, but still coarse-grained, loosely coupled business components. However, outside the area of Business to Business (B2B), the original implementations – i.e. WS-* protocols and UDDI directories – are now widely seen as a mistake: REST and messaging protocols (used either directly, or indirectly via a message broker) are the current popular approaches.

Yet looking behind these architectural differences, one can see that modularity principles have been successfully adopted by each approach. Indeed, more so than the advent and influence of the virtual machine, the application of modularity through generic SOA principles is directly responsible for the increasing dominance of today’s commercial Web and Cloud based Services.

No Free Lunch

Being built from a number of simple functional units, a microservices based business application is, in principle, simpler to create, maintain and change.

Yet, as noted by Senior Gartner Analyst Gary Olliffe (http://blogs.gartner.com/gary-olliffe/2015/01/30/microservices-guts-on-the-outside/), microservices are not a zero-cost option. Gary describes microservices from two perspectives:

  1. Internal Structure: Usually a single-function service (hence the term microservice) which – in principle – is simple to develop. Also, as the communication mechanism is usually embedded, a microservice is easy to unit test, as heavyweight application servers are not required.
  2. External Structure: This refers to the new platform capabilities that are now needed to help manage the interdependencies, life-cycles and configurations of the myriad microservices. Whereas the unit test was simple, integration testing of the complete solution requires the deployment and configuration of all of these inter-related components.

To conclude, the ‘observable’ composite system is now significantly more complex than the monolithic application it replaced.

The ideal microservices platform?

The purpose of a microservices platform is to shield this runtime complexity from Operations: to automate the discovery, configuration, life-cycle management and governance of a possibly changing set of interdependent runtime entities.

What are the fundamental attributes of an ideal microservices platform? Unfortunately there is no one simple answer, as it depends on context.

All businesses will value a platform’s ability to abstract and shield hosted microservices from the underlying compute resources used. However, whereas a business providing vanilla hosted websites will have very simple application requirements, a business comprised of many business units, each potentially involved in different markets, may have extremely diverse requirements of a common platform.

For the latter group the platform solution must allow for Architectural Agility – meaning the platform solution must not constrain either:

  1. The internal structure of the functional components.
  2. The external structure: The type of interactions allowed between these components.

To promote interoperability and component re-use, and to prevent direct or indirect (via ‘OSS’) vendor lock-in, the platform solution should also be based upon relevant industry standards.

Finally, given that the composite application cannot now function without the microservices platform; the platform itself must be engineered to new levels of robustness and agility and must be evolvable: the platform itself must be extremely modular.

Current Industry Fashions

It is my opinion that the current generation of popular ‘microservice platform’ offerings falls well short of these objectives. The reason why is easy to understand.

Mainstream vendors pursue ‘low hanging fruit’ by focusing on enabling developers to quickly and easily assemble simple ‘microservice’ based applications: for example, the deployment of simple three-tier Web-based applications built upon popular RESTful architectural patterns.


  • Opaque software artifacts are deployed via a light weight container.
  • The container provides isolation in multi-tenancy environments.
  • The services are usually simple REST based services.
  • The platform, which itself is not dissimilar from the previous generation of Grid Compute solutions, provides some level of deployment, discovery and configuration of these simple services.

Yet while providing instant gratification, these same platforms fail to provide sufficient flexibility for more complex applications or diverse business needs:

  1. The platform solution may or may not be transparent to the deployed applications: some platforms enforce rigid restrictions on inter-container communication.
  2. The platform may fail to adequately address the scoping and versioning of (i.e. the interaction between) these hosted microservices.
  3. The platform may only support a subset of interaction patterns or middleware options.

If one finds oneself ‘force fitting’ a broad set of business applications to the limited set of architectural patterns provided by a microservices platform – then that platform is most probably an inappropriate choice for your organization. More importantly, it will continue to remain an inappropriate choice – a point of constriction, reducing business agility without delivering the long-term cost-saving benefits provided by a modularity-first strategy.

Microservices and OSGi

OSGi began in the late 1990’s as the open industry standard for enforcing structural modularity for Java code running within a JVM. OSGi bundles enforce strong isolation:

  • The internal implementation is private to each bundle.
  • The behaviour exposed by the bundle is described by its stated ‘Capabilities’.
  • The dependencies a bundle has on its local environment are described by its stated ‘Requirements’.
  • Finally, semantic versioning is used: a bundle’s Capabilities are versioned (major.minor.micro), while a bundle’s Requirements specify the acceptable version ranges within which the Capabilities of third parties must fall.
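
A hedged sketch of what this looks like in practice – the package names are hypothetical, the headers standard OSGi manifest syntax. The bundle states what it offers, what it needs, and the version range within which any provider must fall:

    Export-Package: com.example.payments.api;version="1.3.0"
    Import-Package: com.example.audit.api;version="[2.0,3.0)"

Any bundle exporting com.example.audit.api at, say, 2.4.1 satisfies this Requirement; a 3.0.0 export signals a breaking change and is automatically excluded.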

Due to this strong isolation, OSGi bundles may be dynamically loaded or unloaded from a running JVM. As the bundles are self-describing, a process known as ‘resolution’ may be used to ensure that all inter-related bundles are automatically loaded into the runtime and wired together.
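
In Bndtools, for example, this resolution step is typically driven from a bndrun file. A minimal sketch – the application identity is hypothetical; -runrequires, -runfw and -runee are standard bnd instructions:

    # app.bndrun - state only the top-level requirement;
    # the 'Resolve' action computes the full -runbundles list
    -runrequires: osgi.identity;filter:='(osgi.identity=com.example.app)'
    -runfw: org.apache.felix.framework
    -runee: JavaSE-1.8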

These aspects of OSGi all relate to structural modularity and the concepts are quite generic. Self-describing semantically versioned artifacts are important concepts at all layers of the structural hierarchy.

In an orthogonal decision, OSGi also decouples the interaction between bundles via a local Service Registry. In so doing, the OSGi Alliance created an extremely powerful microservices architecture for Java. Thanks to OSGi’s modularity first mindset, this service architecture is also evolvable, with advertised Service Contracts representing:

  • Synchronous or asynchronous remote procedure calls – with a choice of language specific or agnostic serialization mechanisms.
  • Event based interactions.
  • Message based interaction.
  • Actor style interactions.
  • Or, RESTful based interactions.

Where appropriate, pluggable discovery and serialisation mechanisms are supported.
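
To illustrate just how lightweight an OSGi microservice is, the sketch below uses the standard Declarative Services annotations; all the Greeter names are hypothetical. Provider and consumer live in separate bundles, coupled only by the advertised service contract:

    // Greeter.java - the service contract, exported from an API bundle
    package com.example.greeter.api;

    public interface Greeter {
        String greet(String name);
    }

    // GreeterImpl.java - in a provider bundle; the SCR runtime publishes
    // it into the local Service Registry when the bundle is activated
    package com.example.greeter.provider;

    import com.example.greeter.api.Greeter;
    import org.osgi.service.component.annotations.Component;

    @Component
    public class GreeterImpl implements Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // GreeterClient.java - in a consumer bundle; it never sees the implementation
    package com.example.greeter.client;

    import com.example.greeter.api.Greeter;
    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    @Component
    public class GreeterClient {
        @Reference // injected by SCR when a matching service appears
        private Greeter greeter;

        @Activate
        void activate() {
            System.out.println(greeter.greet("OSGi"));
        }
    }

Swap the provider for one advertising the same contract remotely (e.g. via Remote Services) and the client is untouched – exactly the substitutability that Requirements and Capabilities make possible.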

But OSGi is difficult?

Not anymore.

Well designed modular systems do require some thought. However, the OSGi Alliance is actively making OSGi simpler to adopt via ongoing investment in tooling (see http://bndtools.org) and in tutorials for typical application patterns. For example, the OSGi enRoute project demonstrates the creation of a simple OSGi based modular application; for such requirements the enRoute tutorial shows that OSGi can be as easy to use as Spring Boot or Dropwizard. Additional OSGi enRoute tutorials are planned to address other common architectural patterns used in IoT and the Enterprise.

The implementation of sophisticated business systems in a modular manner does require an enhanced level of engineering and architectural skill. Here too the OSGi Alliance provides support, in terms of OSGi Alliance Member training (e.g. Paremus OSGi training) and the new OSGi Developer Certification programme.

To conclude, OSGi provides the basis for a compelling microservices strategy. However unlike the alternatives, this is only part of a larger coherent strategy. OSGi provides the necessary open industry standards upon which the next generation of modular, and so highly maintainable, software systems will be built.

It has been a while…

It has been a while since my last post. In my defence, Paremus have been incredibly busy on a number of fronts.

Adoption of OSGi through 2013 / 2014 has been significant and continues to accelerate. While interest in ‘Microservices’ and Container technologies like Docker is undeniable, Paremus are increasingly finding that mature organisations realise that complexity, technical debt and maintenance costs can only be addressed if in-house Java applications are either mothballed or re-engineered. Assuming the business functionality is still required, the former simply ignores the problem. The latter, to avoid repeating the same mistakes, requires structural modularity; for which the only industry standard is OSGi.

For organizations that appreciate this, and aspire to their own internal distributed Cloud runtime, Bndtools and the Paremus Service Fabric are no longer a curiosity but an increasingly compelling Build / Release / Run proposition.

My intent over the next few posts will be to revisit Service Fabric concepts and capabilities and compare and contrast these against current IT industry fashions.

Something along the lines of…

  • An introduction to Service Fabric 1.12 and our ‘Entire’ management framework. A really cool demonstration of the use of OSGi RSA and DTO specifications – crafted by some of the Grand Masters of the Art 🙂
  • A look at the Service Fabric with respect to the Microservices trend: (well the name fits!)
  • Also, what about the Service Fabric and non-OSGi artefacts? What about Docker? How is the Service Fabric different to solutions like Mesos and Kubernetes?

And then we’ll get onto the interesting stuff…

Agility and Structural Modularity – part III

The first post in this series explored the fundamental relationship between Structural Modularity and Agility. In the second post we learnt how highly agile, and so highly maintainable, software systems are achievable through the use of OSGi.

This third post is based upon a presentation entitled ‘Workflow for Development, Release and Versioning with OSGi / Bndtools: Real World Challenges‘ (http://www.osgi.org/CommunityEvent2012/Schedule), in which Siemens AG’s Research & Development engineers discussed the business drivers for, and subsequent approach taken to realise, a highly agile OSGi based Continuous Integration environment.

The Requirement

Siemens Corporate Technology Research has a diverse engineering team with skills spanning computer science, mathematics, physics, mechanical engineering and electrical engineering. The group provides solutions to Siemens business units based on neural network technologies and other machine learning algorithms. As Siemens’ business units require working examples rather than paper concepts, Siemens Corporate Technology Research engineers are required to rapidly prototype potential solutions for their business units.


Figure 1: Siemens’ Product Repository

To achieve rapid prototyping the ideal solution would be repository-centric, allowing the Siemens research team to rapidly release new capabilities, and also allowing Siemens Business units to rapidly compose new product offerings.

To achieve this a solution must meet the following high level objectives:

  1. Build Repeatability: The solution must ensure that old versions of products can always be rebuilt from exactly the same set of sources and dependencies, even many years in the future. This would allow Siemens to continue supporting multiple versions of released software that have gone out to different customers.
  2. Reliable Versioning: Siemens need to be able to quickly and reliably assemble a set of components (their own software, third party and open source) and have a high degree of confidence that they will all work together.
  3. Full Traceability: The software artifacts that are released are always exactly the same artifacts that were tested by QA, and can be traced back to their original sources and dependencies. There is no need to rebuild in order to advance from the testing state into the released state.

Finally, the individual software artifacts, and the resultant composite products, must have a consistent approach to application launching, life-cycle and configuration.

The Approach

OSGi was chosen as the enabling modularity framework. This decision was based upon the maturity of OSGi technology, the open industry specifications which underpin OSGi implementations, and the technology governance provided by the OSGi Alliance. The envisaged Continuous Integration solution was based upon the use of Development and Release/Production OSGi Bundle Repositories (OBR). As OSGi artefacts are fully self-describing (Requirements and Capabilities metadata), specific business functionality could be dynamically determined via automated dependency resolution and the subsequent loading of the required OSGi bundles from the relevant repositories.

The Siemens AG team also wanted to apply WYTIWYR best practice (What You Test Is What You Release): software artefacts should not be rebuilt post-testing to generate the release artefacts, since between the start and end of the test cycle the build environment may have changed. Many organisations do rebuild software artefacts as part of the release process (e.g. 1.0.0.BETA –> 1.0.0.RELEASE); this unfortunate but common practice is caused by dependency management based on artefact name.

Finally from a technical perspective the solution needed to have the following attributes:

  • Work with standard developer tooling, i.e. Java with Eclipse.
  • Have strong support for OSGi.
  • Support the concept of multiple repositories.
  • Support automated Semantic Versioning (i.e. automatic calculation of Import Ranges and incrementing of Export Versions) – as this is too hard for human beings!

For these reasons Bndtools was selected.

The Solution

The following sequence of diagrams explains the key attributes of the Siemens AG solution.



Figure 2:  Repository centric, rapid iteration and version re-use within development.

Bndtools is a repository-centric tool, allowing developers to consume OSGi bundles from one or more OSGi Bundle Repositories (a.k.a. OBR). In addition to the local read-write DEV OSGi bundle repository, developers may also consume OSGi bundles from other managed read-only repositories; for example, any combination of corporate Open Source repositories, corporate proprietary code repositories and approved 3rd-party repositories. A developer simply selects the desired repository from the list of authorised repositories, then the desired artefact within it, and drags this into the Bndtools workspace.
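
For illustration only, a bnd workspace wires such repositories together in its cnf/build.bnd using the repository plugins shipped with bnd. The repository names, paths and URLs below are hypothetical, and the exact configuration varies with bnd version:

    # cnf/build.bnd - hypothetical repository wiring
    -plugin: \
      aQute.bnd.deployer.repository.LocalIndexedRepo;name=DEV;local=${workspace}/cnf/repo/dev,\
      aQute.bnd.deployer.repository.FixedIndexedRepo;name=OpenSource;locations=https://repo.example.com/oss/index.xml.gz,\
      aQute.bnd.deployer.repository.FixedIndexedRepo;name=QA;locations=https://repo.example.com/qa/index.xml.gz

    # a 'release' from the IDE or CI server pushes the artefact to the QA repository
    -releaserepo: QA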

Developers check code from their local workspaces into their SVN repository. The SVN repository only contains work in progress (WIP). The Jenkins Continuous Integration server builds, tests and pushes the resultant OSGi artifacts to a shared read-only Development OBR. These artefacts are then immediately accessible by all Developers via Bndtools.

As developers rapidly evolve software artefacts, running many builds each day, it would be unmanageable – indeed meaningless – to increment versions for every development build. For this reason, version re-use is permitted in the Development environment.


Figure 3:  Release.

When ready, a software artefact may be released by the development team to a read-only QA Repository.


Figure 4:  Locked.

Once an artefact has been released to QA it is read-only in the development repository. Any attempt to modify and re-build the artefact will fail. To proceed, the Developer must now increment the version of the released artefact.


Figure 5:  Increment.

Bndtools’ automatic semantic versioning can now be used by the developer to ensure that the correct version increment is applied to express the nature of the difference between the current WIP version and its released predecessor. Following the Semantic Versioning rules discussed in previous posts:

  • 1.0.0 => 1.0.1 … “bug fix”
  • 1.0.0 => 1.1.0 … “new feature”
  • 1.0.0 => 2.0.0 … “breaking change”

we can see that the new version (1.0.1) of the artifact is a “bug fix”.
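
Under the covers this works because each exported package carries its own semantic version, which bnd baselines against the previously released artefact. A minimal sketch, assuming the OSGi versioning annotations are on the build path and using a hypothetical package name:

    // package-info.java - the exported package declares its own version;
    // Bndtools compares this against the released baseline and flags an
    // insufficient increment as a build error
    @org.osgi.annotation.versioning.Version("1.0.1")
    package com.example.neural.api;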

The Agility Maturity Model – Briefly Revisited

In the previous post we introduced the concept of the Agility Maturity Model. Assessing Siemens’ solution against this model verifies that all the necessary characteristics required of a highly Agile environment have been achieved.

  • Devolution: Enabled via Bndtools’ flexible approach to the use of OSGi repositories.
  • Modularity & Services: Integral to the solution. Part and parcel of the decision to adopt an OSGi centric approach.

As discussed by Kirk Knoernschild in his DEVOXX 2012 presentation ‘Architecture All the Way Down‘, while the Agile movement has focused extensively on the Social and Process aspects of Agile development, the fundamental enabler – ‘Structural Modularity’ – has received little attention. Those of you who have attempted to realise ‘Agile’ with a monolithic code base will be all too aware of the challenges. Siemens’ decision to pursue Agile through structural modularity via OSGi provides the bedrock upon which Siemens’ Agile aspirations, including the Social and Process aspects of Agile development, can be fully realised.

Bndtools was a key enabler for Siemens’ Agile aspirations. In return, Siemens’ business requirements helped accelerate and shape key Bndtools capabilities. At this point I would like to take the opportunity to thank Siemens AG for allowing their work to be referenced by Paremus and the OSGi Alliance.

More about Bndtools

Built upon Peter Kriens‘ bnd project, the industry’s de facto tool for the creation of OSGi bundles, the Bndtools GitHub project was created by Neil Bartlett in early 2009. Bndtools’ roots include tooling that Neil developed to assist students attending his OSGi training course, and the Paremus SIGIL project.

Bndtools’ objectives have been stated by Neil Bartlett on numerous occasions: the goal, quite simply, is to make it easier to develop Agile, modular Java applications than not. As demonstrated by the Siemens project, Bndtools is rapidly achieving this fundamental objective. Bndtools is backed by an increasingly vibrant open source community, with growing support from a number of software vendors, including long-term commitment from Paremus. Current Bndtools community activities include support for OSGi Blueprint, stronger integration with Maven, and the ability to simply load runtime release adaptors for OSGi Cloud environments like the Paremus Service Fabric.

Further detail on the rationale for building Java Continuous Integration build / release chains on OSGi / Bndtools can be found in the following presentation given by Neil Bartlett to the Japan OSGi User Forum, May 2013: NeilBartlett-OSGiUserForumJapan-20130529. For those interested in pursuing a Java / OSGi Agile strategy, Paremus provide in-depth engineering consultancy services to help you realise this objective. Paremus can also provide in-depth on-site OSGi training for your in-house engineering teams. If interested in either consulting or training, please contact us.

The Final Episode

In the final post in this Agility and Structural Modularity series I will discuss Agility and Runtime Platforms. Agile runtime platforms are the area in which Paremus has specialised since the earliest versions of our Service Fabric product in 2004 (then referred to as Infiniflow). The pursuit of runtime Agility prompted our adoption of OSGi in 2005, and our membership of the OSGi Alliance in 2009.

However, as will be discussed, all OSGi runtime environments are not alike. While OSGi is a fundamental enabler for Agile runtimes, in itself the use of OSGi is not sufficient to guarantee runtime Agility. It is quite possible to build ‘brittle’ systems using OSGi. ‘Next generation’ modular dynamic platforms like the Paremus Service Fabric must not only leverage OSGi, but must also leverage the same fundamental design principles upon which OSGi is itself based.

Agility and Structural Modularity – part II

In this second Agility and Structural Modularity post we explore the importance of OSGi™; the central role that OSGi plays in realising Java™ structural modularity and the natural synergy between OSGi and the aims of popular Agile methodologies.

But we are already Modular!

Most developers appreciate that applications should be modular. However, whereas the need for logical modularity was rapidly embraced in the early years of Object Oriented programming (see http://en.wikipedia.org/wiki/Design_Patterns), it has taken significantly longer for the software industry to appreciate the importance of structural modularity; especially its fundamental role in increasing application maintainability and controlling / reducing environmental complexity.

Just a Bunch of JARs

In Java Application Architecture, Kirk Knoernschild explores structural modularity and develops a set of best-practice structural design patterns. As Knoernschild explains, no modularity framework is required to develop in a modular fashion; for Java, the JAR is sufficient.

Indeed, it is not uncommon for ‘Agile’ development teams to break an application into a number of smaller JARs as the code-base grows. As JAR artifacts increase in size, they are broken down into collections of smaller JARs. From a code perspective, especially if Knoernschild’s structural design patterns have been followed, one would correctly conclude that – at one structural layer – the application is modular.

But is it ‘Agile’?

From the perspective of the team that created the application, and who are subsequently responsible for its on-going maintenance, the application is more Agile. The team understand the dependencies and the impact of change. However, this knowledge is not explicitly associated with the components. Should team members leave the company, the application and the business are immediately compromised. Also, for a third party (e.g. a different team within the same organisation), the application may as well have remained a monolithic code-base.

While the application has one layer of structural modularity – it is not self-describing. The metadata that describes the inter-relationship between the components is absent; the resultant business system is intrinsically fragile.

What about Maven?

Maven artifacts (described by a Project Object Model – POM) also express dependencies between components. However, these dependencies are expressed in terms of component names.

A Maven-based modular application can be simply assembled by any third party. However, as we already know from the first post in this series, the value of name-based dependencies is severely limited. As the dependencies between the components are not expressed in terms of Requirements and Capabilities, third parties are unable to deduce why the dependencies exist and what might be substitutable.

It is debatable whether Maven makes any additional tangible contribution to our goal of application Agility.

The need for OSGi

As Knoernschild demonstrates in his book Java Application Architecture, once structural modularity is achieved, it is trivially easy to move to OSGi – the modularity standard for Java. 

Not only does OSGi help us enforce structural modularity, it provides the necessary metadata to ensure that the modular structures we create are also Agile structures.

OSGi expresses dependencies in terms of Requirements and Capabilities. It is therefore immediately apparent to a third party which components may be interchanged. As OSGi also uses semantic versioning, it is immediately apparent to a third party whether a change to a component is potentially a breaking change.
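
As a minimal sketch of how this metadata is surfaced to a third party, the standard OSGi wiring API allows the declared Capabilities and Requirements of any installed bundle to be inspected at runtime. The bundle variable below is an assumption – any reference to an installed bundle will do:

    import org.osgi.framework.Bundle;
    import org.osgi.framework.wiring.BundleRevision;

    // Adapt the installed bundle to its revision to inspect its metadata.
    BundleRevision revision = bundle.adapt(BundleRevision.class);

    // Everything this bundle advertises (packages, services, etc.) and
    // everything it requires from its environment, across all namespaces.
    revision.getCapabilities(null).forEach(System.out::println);
    revision.getRequirements(null).forEach(System.out::println);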

OSGi also has a key part to play with respect to structural hierarchy.

At one end of the modularity spectrum we have Service Oriented Architectures; at the other end we have Java Packages and Classes. However, as explained by Knoernschild, essential layers are missing between these two extremes.


Figure 1: Structural Hierarchy: The Missing Middle (Kirk Knoernschild – 2012).

The problem, this missing middle, is directly addressed by OSGi.


Figure 2: Structural Hierarchy: OSGi Services and Bundles

As explained by Knoernschild, the modularity layers provided by OSGi address a number of critical considerations:

  • Code Re-Use: Via the concept of the OSGi Bundle, OSGi enables code re-use.
  • Unit of Intra / Inter Process Re-Use: OSGi Services are lightweight services that are able to dynamically find and bind to each other (see the sketch after this list). OSGi Services may be collocated within the same JVM or, via an implementation of OSGi’s Remote Services specification, distributed across JVMs separated by a network. Coarse-grained business applications may be composed from a number of finer-grained OSGi Services.
  • Unit of Deployment: OSGi bundles provide the basis for a natural unit of deployment, update & patch.
  • Unit of Composition: OSGi bundles and Services are essential elements in the composition hierarchy.
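
As a minimal sketch of this dynamic find and bind behaviour, the following uses the OSGi Declarative Services annotations (org.osgi.service.component.annotations). The Greeter contract and both component classes are hypothetical, and each top-level type would normally live in its own source file within a bundle:

    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    // A hypothetical service contract.
    public interface Greeter {
        String greet(String name);
    }

    // A provider: published to the OSGi service registry by the SCR runtime.
    @Component
    public class GreeterImpl implements Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // A consumer: the SCR runtime dynamically finds and binds a Greeter.
    @Component
    public class GreeterClient {
        @Reference
        private Greeter greeter;

        @Activate
        void activate() {
            System.out.println(greeter.greet("OSGi"));
        }
    }

Because the consumer binds to the service contract rather than to a named implementation, the provider may be replaced – or moved to another JVM via a Remote Services implementation – without any change to the consumer.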

Hence OSGi bundles and services, backed by the OSGi Alliance’s open specifications, provide Java with essential – and previously missing – layers of structural modularity. In principle, OSGi technologies enable Java-based business systems to be ‘Agile – All the Way Down!’.

As we will now see, the OSGi structures (bundles and services) map well to, and help enable, popular Agile Methodologies.

Embracing Agile

The Agile Movement focuses on the ‘Processes’ required to achieve Agile product development and delivery. While a spectrum of Lean & Agile methodologies exists, each tends to be a variant of, a blend of, or an extension to the two best-known methodologies, namely Scrum and Kanban (see http://en.wikipedia.org/wiki/Lean_software_development).

To be effective each of these approaches requires some degree of structural modularity.


Customers change their minds. Scrum acknowledges the existence of ‘requirement churn’ and adopts an empirical (http://en.wikipedia.org/wiki/Empirical) approach to software delivery, accepting that the problem cannot be fully understood or defined up front. Scrum’s focus is instead on maximising the team’s ability to deliver quickly and respond to emerging requirements.

Scrum is an iterative and incremental process, with the ‘Sprint’ being the basic unit of development. Each Sprint is a “time-boxed” (http://en.wikipedia.org/wiki/Timeboxing) effort, i.e. it is restricted to a specific duration. The duration is fixed in advance for each Sprint and is normally between one week and one month. A Sprint is preceded by a planning meeting, where the tasks for the Sprint are identified and an estimated commitment for the Sprint goal is made. This is followed by a review or retrospective meeting, where the progress is reviewed and lessons for the next Sprint are identified.

During each Sprint, the team creates finished portions of a product. The set of features that go into a Sprint come from the product backlog, which is an ordered list of requirements (http://en.wikipedia.org/wiki/Requirement).

Scrum attempts to encourage the creation of self-organizing teams, typically through the co-location of all team members and verbal communication between them.


‘Kanban’ originates from the Japanese word for “signboard” and traces back to Toyota, the Japanese automobile manufacturer, in the late 1940s (see http://en.wikipedia.org/wiki/Kanban). Kanban encourages teams to have a shared understanding of work, workflow, process and risk, enabling the team to build a shared comprehension of problems and suggest improvements which can be agreed by consensus.

From the perspective of structural modularity, Kanban’s focus on limiting work-in-progress (WIP), pull-based flow and feedback is probably the most interesting aspect of the methodology:

  1. Work-In-Progress (WIP) should be limited at each step of a multi-stage workflow. Work items are “pulled” to the next stage only when there is sufficient capacity within the local WIP limit.
  2. The flow of work through each workflow stage is monitored, measured and reported. By actively managing ‘flow’, the positive or negative impact of continuous, incremental and evolutionary changes to a System can be evaluated.

Hence Kanban encourages small, continuous, incremental and evolutionary changes. As the degree of structural modularity increases, pull-based flow rates also increase, while each smaller artifact spends correspondingly less time in a WIP state.


An Agile Maturity Model

Both Scrum’s and Kanban’s objectives become easier to realize as the level of structural modularity increases. Fashioned after the Capability Maturity Model (see http://en.wikipedia.org/wiki/Capability_Maturity_Model – which allows organisations or projects to measure improvements in their software development process), the Modularity Maturity Model – proposed by Dr Graham Charters at the OSGi Community Event 2011 – is an attempt to describe how far along the modularity path an organisation or project might be. We now extend this concept further, mapping an organisation’s level of Modularity Maturity to its Agility.

Keeping in step with the Modularity Maturity Model we refer to the following six levels.

Ad Hoc – No formal modularity exists. Dependencies are unknown. Java applications have no, or limited, structure. In such environments it is likely that Agile Management Processes will fail to realise business objectives.

Modules – Instead of classes (or JARs of classes), named modules are used with explicit versioning. Dependencies are expressed in terms of module identity (including version). Maven, Ivy and RPM are examples of modularity solutions where dependencies are managed by versioned identities. Organizations will usually have some form of artifact repository; however, its value is compromised by the fact that the artifacts are not self-describing in terms of their Capabilities and Requirements.

This level of modularity is perhaps typical of many of today’s in-house development teams. Agile processes such as Scrum are possible, and do deliver some business benefit. However, the effectiveness and scalability of the Scrum management processes ultimately remain limited by deficiencies in structural modularity; for example, Requirements and Capabilities between the Modules are usually communicated verbally. The ability to realize Continuous Integration (CI) is likewise limited by ill-defined structural dependencies.

Modularity – Module identity is not the same as true modularity. As we have seen, module dependencies should be expressed via contracts (i.e. Capabilities and Requirements), not via artifact names. At this point, dependency resolution of Capabilities and Requirements becomes the basis of a dynamic software construction mechanism. At this level of structural modularity, dependencies will also be semantically versioned.

With the adoption of a modularity framework like OSGi, the scalability issues associated with the Scrum process are addressed. By enforcing encapsulation and defining dependencies in terms of Capabilities and Requirements, OSGi enables many small development teams to work efficiently, independently and in parallel. The efficiency of Scrum management processes correspondingly increases. Sprints can be clearly associated with one or more well-defined structural entities, i.e. the development or refactoring of OSGi bundles. Meanwhile, semantic versioning enables the impact of refactoring to be efficiently communicated across team boundaries. As the OSGi bundle provides strong modularity and isolation, parallel teams can safely Sprint on different structural areas of the same application.

Services – Service-based collaboration hides the construction details of services from the users of those services, allowing clients to be decoupled from the implementations of the providers. Hence, Services encourage loose coupling. OSGi Services’ dynamic find and bind behaviour directly enables this loose coupling, allowing the dynamic formation, or assembly, of composite applications. Perhaps of greater import, Services are the basis upon which runtime Agility may be realised, including rapid enhancements to business functionality and automatic adaptation to environmental changes.
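
A minimal consumer-side sketch of this dynamic find and bind behaviour, using the standard org.osgi.util.tracker.ServiceTracker; the hypothetical Greeter service interface and the surrounding BundleContext are assumed:

    import org.osgi.framework.BundleContext;
    import org.osgi.util.tracker.ServiceTracker;

    ServiceTracker<Greeter, Greeter> tracker =
            new ServiceTracker<>(context, Greeter.class, null);
    tracker.open();                         // begin tracking Greeter providers

    Greeter greeter = tracker.getService(); // null if no provider is present
    if (greeter != null) {
        greeter.greet("OSGi");              // providers may come and go at runtime
    }

    tracker.close();                        // release the tracked service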

Having achieved this level of structural modularity an organization may simply and naturally apply Kanban principles and achieve the objective of Continuous Integration.

Devolution – Artifact ownership is devolved to modularity-aware repositories, which encourage collaboration and enable governance. Assets may be selected on their stated Capabilities. Advantages include:

  • Greater awareness of existing modules
  • Reduced duplication and increased quality
  • Collaboration and empowerment
  • Quality and operational control

As software artifacts are described in terms of a coherent set of Requirements and Capabilities, developers can communicate changes (breaking and non-breaking) to third parties through the use of semantic versioning. Devolution allows development teams to rapidly find third-party artifacts that meet their Requirements; it hence enables significant flexibility with respect to how artifacts are created, allowing distributed parties to interact in a more effective and efficient manner. Artifacts may be produced by other teams within the same organization, or consumed from external third parties. The Devolution stage promotes code re-use and the efficient, low-risk out-sourcing, crowd-sourcing or in-sourcing of the artifact creation process.

Dynamism – This level builds upon Modularity, Services & Devolution, and is the culmination of our Agile journey.

  • Business applications are rapidly assembled from modular components.
  • As strong structural modularity is enforced (isolation via the OSGi bundle boundary), components may be efficiently and effectively created and maintained by a number of small on-shore, near-shore or off-shore development teams.
  • As each application is self-describing, even the most sophisticated of business systems is simple to understand, to maintain, to enhance.
  • As semantic versioning is used, the impact of change is efficiently communicated to all interested parties, including Governance & Change Control processes.
  • Software fixes may be hot-deployed into production – without the need to restart the business system (see the sketch after this list).
  • Application capabilities may be rapidly extended, again without needing to restart the business system.
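
As a minimal sketch of such a hot deployment – assuming a framework BundleContext, a bundle previously installed from the location shown, and a patched JAR at the hypothetical path below (error handling elided):

    import java.io.FileInputStream;
    import org.osgi.framework.Bundle;

    // Locate the live bundle by its (hypothetical) install location.
    Bundle bundle = context.getBundle("file:/deploy/orders-service.jar");

    // Update it in place; the framework re-wires dependents dynamically
    // and the wider business system keeps running throughout.
    bundle.update(new FileInputStream("/patches/orders-service-1.0.1.jar"));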

Finally, as the dynamic assembly process is aware of the Capabilities of the hosting runtime environment, application structure and behavior may automatically adapt to location, allowing transparent deployment and optimization for public Cloud or traditional private datacentre environments.


Figure 3: Modularity Maturity Model

An organization’s Modularisation Migration strategy will be defined by the approach taken to traversing these Modularity levels. Most organizations will have already moved from the initial Ad Hoc phase to Modules. Meanwhile, organizations that value a high degree of Agility will wish to reach the endpoint, i.e. Dynamism. Each organisation may traverse from Modules to Dynamism via several paths, adapting its migration strategy as necessary.

  • To achieve maximum benefit as soon as possible, an organization may choose to move directly to Modularity by refactoring the existing code base into OSGi bundles. The benefits of Devolution and Services naturally follow. This is also the obvious strategy for new greenfield applications.
  • For legacy applications, an alternative may be to pursue a Services-first approach: first expressing coarse-grained software components as OSGi Services, then driving code-level modularity (i.e. OSGi bundles) on a Service-by-Service basis. This approach may be easier to initiate within large organizations with extensive legacy environments.
  • Finally, one might move first to limited Devolution by adopting OSGi metadata for existing artifacts. Adoption of Requirements and Capabilities, and the use of semantic versioning, will clarify the existing structure and the impact of change to third parties. While structural modularity has not increased, the move to Devolution positions the organisation for subsequent migration to the Modularity and Services levels.

A diverse set of choices, and the ability to pursue these choices as appropriate, is exactly what one would hope for, and expect from, an increasingly Agile environment!

Agility and Structural Modularity – part I


Agile development methodologies are increasingly popular. Yet most ‘Agile’ experts and analysts discuss agility in isolation. This oversight is surprising, given that ‘Agility’ is an emergent characteristic – that is, a property of the underlying entity as a whole. For an entity to be ‘Agile’ it must have a high degree of structural modularity.

Perhaps as a result of this, many organisations attempt to invest in ‘Agile’ processes without ever considering the structure of their applications. Alongside the question, ‘How might one realise an Agile system?’, one must also ask, ‘How might one build systems with high degrees of structural modularity?’.

We start this series of blog articles by exploring the relationship between structural modularity and agility.


Structure, Modularity & Agility 

Business Managers and Application Developers face many of the same fundamental challenges. Whether a business, or a software application serving a business, the entity must be cost effective to create and maintain. If the entity is to endure, it must also be able to rapidly adapt to unforeseen changes in a cost effective manner.

If we hope to effectively manage a System, we must first understand the System. Once we understand a System, manageable Change and directed Evolution are possible.

Yet we do not need to understand all of the fundamental constituents of the System; we only need to understand the relevant attributes and behaviors for the level of the hierarchy we are responsible for managing.

Services should be Opaque

From an external perspective, we are interested in the exposed behavior: the type of Service provided, and the properties of that Service. For example, is the Service reliable? Is it competitively priced relative to alternative options?



Figure 1: A consumer of a Service.

As a consumer of the Service I have no interest in how these characteristics are achieved. I am only interested in the advertised Capabilities, which may or may not meet my Requirements.

To Manage I need to understand Structure

Unlike the consumer, the Service provider is fundamentally concerned with the implementation of the Service. To achieve an understanding, we create a conceptual model by breaking the System responsible for providing the Service into a set of smaller interconnected pieces. This graph of components may represent an ‘Organization Chart’, if the entity is a business, or a mapping of the components used, if the entity is a software application.

A first simple attempt to understand our abstract System is shown below.


Figure 2: The Service provider / System Maintainer

From this simple representation we immediately know the following:

  • The System is composed of 15 Components.
  • The names of the Components.
  • The dependencies that exist between these Components; though we don’t know why those dependencies exist.
  • While we do not know the responsibilities of the individual Components, from the degree of inter-connectedness we can infer that component ‘Tom’ is probably more important than ‘Dick’.

It is important to note that we may not have created these Components, and we may have no understanding of their internal construction. Just as the consumers of our Service are interested in the Capabilities offered, we, as consumers of these components, simply Require their Capabilities.

Requirements & Capabilities

At present, we have no idea why the dependencies exist between the Components, just that those dependencies exist. Also, this is a time independent view. What about change over time?

One might initially resort to using versions or version ranges with the named entities, with changes in the structure indicated by version changes on the constituents. However, as shown in figure 3, versioned names, while indicating change, fail to explain why Susan 1.0 can work with Tom 2.1, but Susan 2.0 cannot!

Why is this?


Figure 3: How do we track structural change over time? The earlier System functioned correctly; the later System – with an upgraded Component – fails. Why is this?

It is only when we look at the Capabilities and Requirements of the entities involved that we understand the issue. Tom 2.1 Requires a Manager Capability; a Capability that can be provided by Susan 1.0. However, at the later point in time, Susan 2.0, having reflected upon her career, has decided to retrain. Susan 2.0 no longer advertises a Manager Capability, but instead advertises a Plumber 1.0 Capability.

This simple illustration demonstrates that dependencies need to be expressed in terms of Requirements and Capabilities of the participating entities and not their names.

These descriptions should also be intrinsic to the entities; i.e. components should be self-describing.


Figure 4: An Organizational Structure: Defined in terms of Capabilities & Requirements with the use of Semantic versioning.

As shown, we can completely describe the System in terms of Requirements and Capabilities, without referencing specific named entities.
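
A minimal sketch of this name-free matching follows. The Capability and Requirement records are purely illustrative types (Java 16+, shown in script form), not the OSGi API itself:

    // Components advertise Capabilities and declare Requirements; a dependency
    // is satisfied by what is offered, never by who happens to offer it.
    record Capability(String namespace) { }

    record Requirement(String namespace) {
        boolean isSatisfiedBy(Capability capability) {
            return namespace.equals(capability.namespace());
        }
    }

    // Tom needs a Manager – whoever that happens to be.
    Requirement tom = new Requirement("Manager");
    tom.isSatisfiedBy(new Capability("Manager")); // Susan 1.0: satisfied
    tom.isSatisfiedBy(new Capability("Plumber")); // Susan 2.0: no longer satisfied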

Evolution and the role of Semantic Versioning

Capabilities and Requirements are now the primary means via which we understand the structure of our System. However we are still left with the problem of understanding change over time.

  • In an organization chart; to what degree are the dependencies still valid if an employee is promoted (Capabilities enhanced)?
  • In a graph of interconnected software components; to what degree are the dependencies still valid if we refactor one of the components (changing / not changing a public interface)?

By applying simple versioning we can see that changes have occurred, but we do not understand the impact of those changes. If, however, semantic versioning is used (see http://www.osgi.org/wiki/uploads/Links/SemanticVersioning.pdf), the potential impact of a change can be communicated.

This is achieved in the following manner:

  • Capabilities are versioned with a major.minor.micro versioning scheme. In addition, we collectively agree that minor or micro version changes represent non-breaking changes, e.g. 2.7.1 → 2.8.7. In contrast, major version changes, e.g. 2.7.1 → 3.0.0, represent breaking changes which may affect the users of our component.
  • Requirements are now specified in terms of a range of acceptable Capabilities. Square brackets ‘[’ and ‘]’ indicate inclusive bounds; parentheses ‘(’ and ‘)’ indicate exclusive bounds. Hence a range [2.7.1, 3.0.0) means that any Capability with a version at or above 2.7.1, up to but not including 3.0.0, is acceptable (see the sketch below).
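
A minimal sketch of such a range check, using the standard org.osgi.framework Version and VersionRange types:

    import org.osgi.framework.Version;
    import org.osgi.framework.VersionRange;

    VersionRange required = new VersionRange("[2.7.1,3.0.0)");

    required.includes(new Version("2.8.7")); // true:  minor change, non-breaking
    required.includes(new Version("3.0.0")); // false: major change, breaking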

Using this approach we can see that if Joe is substituted for Helen, Tom’s Requirements are still met. However, Harry, while having a Manager Capability, cannot meet Tom’s Requirements, as Harry’s 1.7 skill set is outside of the acceptable range for Tom, i.e. [2,3).

Via the use of semantic versioning the impact of change can be communicated. Used in conjunction with Requirements and Capabilities we now have sufficient information to be able to substitute components while ensuring that all the structural dependencies continue to be met.

Our job is almost done. Our simple System is Agile & Maintainable!


Agile – All the Way Down

The final challenge concerns complexity. What happens when the size and sophistication of the System increase, with more components and a large increase in inter-dependencies? The reader, having already noticed a degree of self-similarity in the previous examples, may have guessed the answer.

The Consumer of our Service selected our Service because the advertised Capabilities met the Consumer’s Requirements (see figure 1). The implementation of the System which provides this Service is masked from the consumer. This pattern is repeated one layer down: the System’s structure is itself described in terms of the Capabilities and Requirements of the participating components (see figure 4), with the internal structure of the components masked from the System. As shown in figure 5, this pattern may be repeated across many logical layers.


Figure 5: An Agile Hierarchy: Each layer only exposes the necessary information. Each layer is a composite, with the dependencies between the participating components expressed in terms of their Requirements and Capabilities.

All truly Agile systems are built this way, consisting of a hierarchy of structural layers. Within each structural layer the components are self-describing: self-describing in terms of information relevant to that layer, with unnecessary detail from lower layers masked.

This pattern is repeated again and again throughout natural and man-made systems. Natural ecosystems build massive structures from nested hierarchies of modular components:

  • The Organism
  • The Organ
  • The Tissue
  • The Cell

For good reason, commercial organizations attempt the same structures:

  • The Organization
  • The Division
  • The Team
  • The Individual

Hence we might expect complex Agile software systems to mirror these best practices:

  • The Business Service
  • Coarse-grained business components
  • Fine-grained micro-Services
  • Code-level modularity

This process started in the mid-to-late 1990s as organizations began to adopt coarse-grained modularity as embodied by Service Oriented Architectures (SOA) and Enterprise Service Buses (ESBs). These approaches allowed business applications to be loosely coupled, interacting via well-defined service interfaces or message types. SOA advocates promised more ‘Agile’ IT environments, as business systems would be easier to upgrade and/or replace.

However, in many cases the core applications never actually changed; rather, the existing application interfaces were simply exposed as SOA Services. Viewed in this light, it is not surprising that SOA failed to deliver the promised cost savings and business agility (see http://apsblog.burtongroup.com/2009/01/soa-is-dead-long-live-services.html).

Because of the lack of internal modularity, each post-SOA application was as inflexible as its pre-SOA predecessor.


To be Agile?

We conclude this section with a brief summary of the arguments developed so far.

To be ‘Agile’ a System will exhibit the following characteristics:

  • A Hierarchical Structure: The System will be hierarchical, with each layer composed from components in the next lower layer.
  • Isolation: For each structural layer, strong isolation will ensure that the internal composition of each participating component is masked.
  • Abstraction: For each layer, the behavior of participating components is exposed via stated Requirements and Capabilities.
  • Self-Describing: Within each layer the relationship between the participating components will be self-describing; i.e. dependencies will be defined in terms of published Requirements and Capabilities.
  • Impact of Change: Via semantic versioning the impact of a change on dependencies can be expressed.

Systems built upon these principles are:

  • Understandable: The System’s structure may be understood at each layer in the structural hierarchy.
  • Adaptable: At each layer in the hierarchy, structural modularity ensures that changes remain localized to the affected components; the boundaries created by strong structural modularity shield the rest of the System from these changes.
  • Evolvable: Components within each layer may be substituted; the System supports diversity and is therefore evolvable.

The System achieves Agility through structural modularity.

In the next post in this series we will discover how OSGi™ – the Java™ modularity framework – meets the requirements of structural modularity, and thereby provides the necessary foundations for popular Agile Methodologies and, ultimately, Agile businesses.