IoT – Welcome to the real Cloud! (a.k.a. Fog Computing?)

Today’s popular Cloud Computing offerings have resulted from the following simple observation: an application (and any associated data), if not coupled to the customer’s local environment, may be relocated to an alternative remote environment. The tool used to achieve this is the Virtual Machine Image (VMI). An application expressed as an opaque VMI artefact, or more recently a Container Image, can be easily deployed to any local or remote compute resource. If the application is a REST/HTTP based Service then, whatever its location, it is simply accessed over the Internet.

In essence, Cloud Computing is a more efficient variant of traditional IT outsourcing. To potential customers, Cloud is attractive because:

  1. Applications can be provisioned in minutes rather than perhaps the months typically required for an established organisation with traditional internal processes.
  2. The cost per unit of virtual resource is low thanks to economies of scale; lower than is usually achievable by a customer using their own datacentre resources.

Cloud Computing vendors then sweat these physical assets to generate a profit: i.e. via a combination of Virtual Machine and Container technologies, applications are over-provisioned onto the underlying physical hardware to drive up hardware utilisation.

These Cloud economics, and the related engineering challenges, have in turn shaped – or perhaps warped – the industry’s perception of distributed systems.

  1. To cost effectively scale, the focus has been on building large homogeneous data centre environments. Centralised management services are then used to orchestrate the deployment and configuration of VMIs / Container Images.
  2. As maximising resource utilisation is paramount, orchestration solutions focus on achieving the highest possible resource utilisation across this homogeneous infrastructure.
  3. As the unit of deployment is the Virtual Machine Image, or more recently the Container Image, applications are treated as opaque immutable software artefacts. Hence each application must be updated as an atomic unit.
  4. Finally, to avoid changes to the hosted applications, Cloud environments use complex virtualisation and network overlays to isolate each hosted application from the realities of the underlying physical environment.

This last point is worth further consideration. While avoiding application changes, infrastructure virtualisation drives significant operational complexity into the underlying ‘platform’ layers. Hosted composite applications also become rigidly coupled to these underlying virtualisation layers.

Counter to the marketing, Operational Complexity is always accompanied by increased Operational risk, and rigid coupling always increases the likelihood of Black Swan events (i.e. catastrophic failure).

The Emergence of Big Data 

What about Big Data?

Processing data at scale has co-evolved with these Cloud Computing environments.

Web 2.0 companies’ business models are centred around offering services (physical or virtual) to their population of users (or websites), and then harvesting and mining the data provided by those customers to drive targeted advertising revenues. Organisations like Google, Yahoo, Facebook, Twitter and LinkedIn generate large amounts of aggregated data which needs to be processed.

In these business models, user or website data is continually aggregated from the passive ‘Edge’ (the user via their phone, tablet or computer) into the Services hosted in the Cloud Core.

It is the processing / mining of this aggregated data that kick-started the “Big Data” gold rush and the plethora of enabling technologies to process large amounts of data as batch (Hadoop), mini-batch (Spark) or even directly from the aggregated data stream (stream processing).

IoT is different!

The Internet of Things (IoT) is actually the inverse of the patterns just described. In many IoT scenarios, large amounts of data are generated at the edge, and this data is both context-specific and temporal in nature:

  1. The data is meaningless taken out of context of the edge environment that generated it.
  2. The value of the data significantly diminishes with time.
  3. Moving the data from the point of generation may be (extremely) costly in terms of WAN bandwidth.

One might conclude that in such situations data should be processed in-situ.
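
As a purely illustrative sketch (the sensor feed, window size and units are invented for the example), in-situ processing might reduce a high-frequency local sensor stream to a small summary, so that only the summary ever crosses the WAN:

    import java.util.DoubleSummaryStatistics;
    import java.util.Random;

    /**
     * Illustrative sketch: reduce a high-frequency local sensor feed to a
     * small per-window summary, so only the summary need cross the WAN.
     * The sensor source, window size and units are invented for the example.
     */
    public class InSituSummariser {

        public static void main(String[] args) {
            Random sensor = new Random();          // stands in for a real local sensor feed
            DoubleSummaryStatistics window = new DoubleSummaryStatistics();

            for (int sample = 1; sample <= 10_000; sample++) {
                window.accept(20.0 + sensor.nextGaussian());   // e.g. a temperature reading

                // Every 1,000 samples, ship a tiny summary upstream instead of raw data
                if (sample % 1_000 == 0) {
                    System.out.printf("upstream -> min=%.2f max=%.2f mean=%.2f n=%d%n",
                            window.getMin(), window.getMax(),
                            window.getAverage(), window.getCount());
                    window = new DoubleSummaryStatistics();    // start the next window
                }
            }
        }
    }

Ten thousand raw samples collapse into ten compact messages; the raw data, whose value decays quickly anyway, never leaves the locality.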

Also consider that many IoT environments need to operate autonomously, accessing remote third party services only when appropriate, and only when those services are available.

Consider the following examples:

M2M / SCADA

Supervisory Control and Data Acquisition (SCADA) systems are used for controlling, monitoring and analysing industrial processes. A SCADA system may monitor and control an entire site, or complexes of systems spread out over large areas (e.g. an industrial plant). Most control actions are performed automatically by Remote Terminal Units (RTUs) or by Programmable Logic Controllers (PLCs). Host control functions are then usually restricted to basic overriding or supervisory level intervention.

For example, a PLC may control the flow of cooling water through part of an industrial process, but the SCADA system may allow operators to change the set points for the flow, and enable alarm conditions, such as loss of flow and high temperature, to be displayed and recorded. The feedback control loop passes through the RTU or PLC, while the SCADA system monitors the overall performance of the loop.
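
A minimal sketch of this division of labour (the component names, set points and thresholds are all invented for the example): the PLC closes the fast control loop locally, while the supervisory layer only adjusts set points and raises alarms:

    /**
     * Illustrative sketch of the SCADA division of labour described above.
     * Names, set points and thresholds are invented for the example.
     */
    public class CoolingLoopSketch {

        /** The PLC closes the fast local loop – no remote round-trip involved. */
        static class CoolingPlc {
            volatile double setPointLitresPerMin = 120.0;  // adjusted by the supervisory layer
            double valvePosition = 0.5;                    // 0.0 closed .. 1.0 open

            void controlStep(double measuredFlow) {
                // Simple proportional control towards the set point
                double error = setPointLitresPerMin - measuredFlow;
                valvePosition = Math.max(0.0, Math.min(1.0, valvePosition + 0.001 * error));
            }
        }

        /** SCADA supervises: it changes set points and raises alarms, but never drives the valve. */
        static void supervise(double measuredFlow, double temperature) {
            if (measuredFlow < 10.0) System.out.println("ALARM: loss of flow");
            if (temperature > 90.0)  System.out.println("ALARM: high temperature");
        }

        public static void main(String[] args) {
            CoolingPlc plc = new CoolingPlc();
            double flow = 100.0, temperature = 70.0;
            for (int tick = 0; tick < 5; tick++) {
                plc.controlStep(flow);                    // fast local loop: runs regardless of any WAN
                supervise(flow, temperature);             // slower supervisory monitoring
                flow += (plc.valvePosition - 0.5) * 50;   // toy model of the plant's response
                System.out.printf("tick %d: flow=%.1f valve=%.2f%n", tick, flow, plc.valvePosition);
            }
        }
    }

The essential property is that controlStep() never blocks on anything remote; the supervisory layer can disappear entirely and the loop keeps running.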

Clearly the near real-time closed loop behaviour of the SCADA system is critically important – and off-loading supervisory control behaviours to a remote third party Cloud environment is an option few in that industry would consider. Machine learning behaviours need to understand the appropriate context / domain – and be local to the plant being controlled.

Security is also paramount as malicious commands issued into the environment could cause local catastrophes.

Privacy in Healthcare

Privacy, Security and Commercial competition concerns mean that many organisations will only allow a small subset of their data, appropriately filtered / anonymised, to be aggregated and ‘mined’ by a third party.

For example, in the Healthcare industry, local processing (of body-scan images or diagnostic tests) will be conducted in the patient’s hospital due to: the volumes of data involved, the need to fully interact with the patient, and patient confidentiality. However, an anonymised subset of data might be made available for population based research studies.
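
A toy sketch of that filtering step (the field names and anonymisation rules are invented): the full record stays within the hospital, and only a reduced, de-identified projection is released for research:

    import java.util.Arrays;
    import java.util.List;

    /**
     * Toy sketch: full patient records stay in the hospital; only a reduced,
     * de-identified subset leaves for population-level research studies.
     * Field names and the anonymisation rules are invented for the example.
     */
    public class AnonymisingFilter {

        static class PatientRecord {
            final String name, patientId, postcode, diagnosis;
            final int age;
            PatientRecord(String name, String patientId, int age, String postcode, String diagnosis) {
                this.name = name; this.patientId = patientId; this.age = age;
                this.postcode = postcode; this.diagnosis = diagnosis;
            }
        }

        /** Derives the only fields that ever leave the hospital. */
        static String anonymise(PatientRecord p) {
            int decade = (p.age / 10) * 10;
            String ageBand = decade + "-" + (decade + 9);   // 47 -> "40-49"
            String region  = p.postcode.split(" ")[0];      // coarse outward code only
            return "ageBand=" + ageBand + " region=" + region + " diagnosis=" + p.diagnosis;
        }

        public static void main(String[] args) {
            List<PatientRecord> hospitalLocal = Arrays.asList(
                new PatientRecord("A. Patient", "943-476-5919", 47, "SW1A 1AA", "T2 diabetes"));
            hospitalLocal.stream().map(AnonymisingFilter::anonymise).forEach(System.out::println);
            // prints: ageBand=40-49 region=SW1A diagnosis=T2 diabetes
        }
    }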

In-Service Engine Diagnostics & Performance

Aircraft engine performance and reliability metrics for each engine on each aircraft need to be processed and immediately available to the ground staff responsible for servicing the aircraft. Ground staff might be interested in analysis of the current noise signatures from the engines, comparison of these against each engine’s ‘historical’ signature, and against the signature for that class of engine from the manufacturer.

These on-site diagnostic processes need to be executed irrespective of whether an Internet or Cloud Service provider is available. The results from this local analysis may then be passed upstream to parties interested in these metrics: e.g.

  • The Engine Manufacturer
  • The Aircraft Manufacturer
  • The Carrier
  • The Aviation Authorities in the jurisdictions for the aircraft’s flight.

For each of these parties, the data may need to be ‘enriched’ with additional contextual information (e.g. current location and weather conditions), and also filtered so that only the appropriate subset of information is sent to each consumer.
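
To make this enrich-then-filter step concrete, here is a hedged sketch (the consumer names and field choices are invented): a single local reading is enriched with context, then projected differently for each upstream consumer:

    import java.util.HashMap;
    import java.util.Map;

    /**
     * Illustrative sketch of enrich-then-filter: one locally acquired engine
     * reading is enriched with context, then each upstream consumer receives
     * only the subset appropriate to it. Consumers and fields are invented.
     */
    public class EnrichAndFilter {

        public static void main(String[] args) {
            // Locally acquired reading (values invented for the example)
            Map<String, Object> reading = new HashMap<>();
            reading.put("engineId", "ENG-0042");
            reading.put("noiseSignature", "sig-7f3a");
            reading.put("vibrationHz", 118.4);

            // Enrichment: add local context before anything flows upstream
            reading.put("location", "LHR stand 23");
            reading.put("weather", "4C, light rain");

            // Filtering: each consumer sees only its appropriate subset
            send("EngineManufacturer", project(reading, "engineId", "noiseSignature", "vibrationHz", "weather"));
            send("Carrier",            project(reading, "engineId", "location"));
            send("AviationAuthority",  project(reading, "engineId", "location", "noiseSignature"));
        }

        static Map<String, Object> project(Map<String, Object> source, String... fields) {
            Map<String, Object> subset = new HashMap<>();
            for (String f : fields) subset.put(f, source.get(f));
            return subset;
        }

        static void send(String consumer, Map<String, Object> payload) {
            System.out.println(consumer + " <- " + payload);  // stands in for the real transport
        }
    }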

The upstream consumers might in turn aggregate and process many such data streams for their own analysis. The engine manufacturer may cross-correlate the engine noise signature against all other signatures received for this engine type, looking for outlier behaviours (i.e. results that look peculiar) and perhaps for changes in signature as the components in the engines age.

Finally, the results of this analysis may in turn be passed back to the ground staff at each airport for use the next time the engines are locally diagnosed.

Home Automation

Home Automation systems must continue to control the home heating irrespective of the availability of external Services. It is local machine learning behaviours that will analyse trends and trigger exception behaviours if the ambient temperature exceeds thresholds.

It is the local system that must attempt to contact the local Emergency services via whatever communication channels are available at that point. The idea that such interactions must always be routed through a single Cloud provider’s data-centre some hundreds or thousands of miles away is not too appealing when the home is on fire. More serious scenarios can be anticipated in the medical device field or industrial control systems.

IoT & the Dawn of Federated Clouds

In each of the previous examples the use of an edge ‘slaved’ to a centralised public Cloud Computing offering seems inappropriate. Instead it is suggested that highly distributed Federated Cloud solutions are required.

Successful IoT solutions will not be shaped by the same factors that have shaped public Cloud Computing offerings: i.e. the economies and efficiencies of the highly centralised compute farms.

Rather, IoT solutions will be shaped by the following inter-related concerns:

  1. Locality, in-situ Data Processing, Filtering (privacy) & Enrichment (context)
  2. Simplicity, Scalability and Evolvability – the challenges related to a massively pervasive, heterogeneous ecosystem.

These factors will drive IoT solutions along a technology trajectory quite different to that travelled by centralised Cloud Computing solutions.

Locality

In an IoT environment, data generated by the Edge will be processed at, or near, the Edge. Processed data may then be consumed by other entities that may or may not lie within the same locality and/or same trust domain.

[Figure: Flow]

As the bulk of IoT processing occurs at the Edge:

  1. The software artefacts responsible for that data processing need to be deployed to the edge.
  2. Due to IoT scale, these software artefacts must be truly modular in nature – allowing only the changed components to be propagated to the appropriate edge targets.
  3. For governance and security, the pedigree of each software component, within each runtime environment, must be known, and the sources of these components explicitly controlled.
  4. The Edge must filter and enrich ‘with context’ locally acquired data before the resultant derivative data flows to other edge environments or to upstream aggregation based Services (see the sketch after this list).
  5. Data privacy concerns will mean that only filtered, often anonymised, and processed subsets of data will be directed to up-stream interested parties.
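
As a sketch of what such a modular edge artefact might look like, the component below uses the standard OSGi Declarative Services annotations; the SensorListener and Upstream interfaces, and the filtering rule, are invented for the example. The key point is that the component’s dependencies are declared rather than hard-wired, so the runtime can assemble and replace modules independently:

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    /**
     * Sketch of a modular edge artefact using standard OSGi Declarative
     * Services annotations. The Upstream service and the filtering rule
     * are invented; the declarative wiring pattern is the standard one.
     */
    @Component
    public class EdgeTemperatureFilter implements SensorListener {

        /** Injected by the runtime; the component never looks it up itself. */
        @Reference
        private Upstream upstream;

        @Override
        public void onReading(double celsius) {
            // Filter locally: only exceptional readings leave the edge,
            // enriched with the local context that gives them meaning.
            if (celsius > 85.0) {
                upstream.publish("site=plant-7 sensor=boiler-2 temp=" + celsius);
            }
        }
    }

    /** Invented local service interfaces, shown only to make the sketch self-contained. */
    interface SensorListener { void onReading(double celsius); }
    interface Upstream       { void publish(String enrichedEvent); }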

While Big Data services will continue to exist in IoT environments, these environments will themselves be more distributed, running in many locations, and consuming enriched, filtered, anonymised data from a massively federated IoT Cloud Edge.

Pervasive IoT will not mirror the current generation of homogeneous, centrally managed offerings from public Cloud Computing vendors. Rather than being an exception to the current trend of centralised monolithic Cloud Computing solutions, it is suggested that IoT will be fundamental in driving the next generation of ‘true’ Cloud, which will be heterogeneous, adaptive and federated in nature.

Simplicity, Scalability & Evolvability!

“Modern-day software systems, even those that presumably function correctly, have a useful and effective shelf life orders of magnitude less than other engineering artifacts. While an application’s lifetime typically cannot be predicted with any degree of accuracy, it is likely to be strongly inversely correlated with the rate and magnitude of change of the ecosystem in which it executes.”

– DARPA BRASS initiative, 2015

Pervasive IoT will be diverse and massively federated. Heterogeneous in nature, the individual elements of IoT will adapt and evolve at different rates in an un-coordinated fashion. IoT will mirror the long-lived and incrementally evolving hierarchical network structures built by the Telcos and Internet Service Providers!

[Figure: Clouds]

Federated IoT environments will require similar characteristics to these Telco networks:

  1. On-Premise IoT Edge Clouds will need to be as near ‘Lights Out’ as possible. Simple to install and simple to maintain by field engineers.
  2. The elements of the Platform, and the applications the Platform hosts, will need to be self-configuring Field Replaceable Units (FRUs).
  3. Software deployments and updates must be as simple and error free as possible. Back-out and re-start of applications must be intuitive and rapid.
  4. It must be simple to define secure relationships between the Services across the federated IoT environment.

Finally…

Pervasive IoT solutions will need longevity. Longevity requires evolvability. Evolvability mandates Modularity.

The Paremus Service Fabric & IoT

Pervasive IoT must be based on appropriate open Industry Standards. To address the Evolvability challenges described by DARPA’s BRASS initiative, these industry standards must define the foundations for dynamic modular systems:

  1. Enforce modularity boundaries.
  2. Allow the software modules to be self-describing.
  3. Expose the dependencies between modules, and between modules and their runtime environment, in such a way that automated assembly and re-assembly by the runtime is possible.

OSGi™, the mature open Industry Standard for Java modularity, provides exactly this!
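
By way of a small, illustrative example (the com.example package names are invented), an OSGi bundle’s manifest makes the module self-describing: it names and versions the module, and declares exactly which packages it exposes and which it requires, with version ranges – precisely the metadata an automated runtime needs for assembly and re-assembly:

    Bundle-ManifestVersion: 2
    Bundle-SymbolicName: com.example.edge.filter
    Bundle-Version: 1.2.0
    Export-Package: com.example.edge.filter.api;version="1.2.0"
    Import-Package: com.example.sensors;version="[2.0,3.0)",
     org.osgi.framework;version="[1.8,2.0)"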

But what about the suitability of Java & OSGi? Let’s remember that Java was actually designed in the mid-1990s for embedded set top boxes. OSGi was designed in the late 1990s to bring modularity to these light-weight embedded JVM environments. Both technologies have been waiting ~20 years for the market to catch up!

Today in 2015, thanks to the ongoing efforts of the OSGi Alliance, OSGi provides the standards-based foundations for powerful modular, reactive, evolvable, Microservices-based solutions. OSGi, used in conjunction with the JVM modularity due to be introduced in the forthcoming Java 9 release, will act as a powerful accelerant / catalyst driving the adoption of pervasive, evolvable IoT solutions.

However, while providing the necessary open Industry Standards based foundations, OSGi & Java are not in themselves a solution. A runtime platform is required that can dynamically assemble modular OSGi/Java applications at the scales required. To be evolvable, scalable and maintainable, this platform must itself fully leverage OSGi.

Enter the Paremus Service Fabric…

The Paremus Service Fabric is a coherent and highly modular platform, with the following runtime attributes:

  • Simplicity (Platform) – Whether Cloud Core or IoT Edge, a Service Fabric can be installed with a single command. All compute resources may be treated as Field Replaceable Units (FRUs).
  • Simplicity (Application) – Composite Applications are installed with a single command. These applications may then be deployed with a single command – to an embedded controller, or to thousands of compute nodes. All application dependencies are automatically managed.
  • Scalable – A Service Fabric may run on a single Raspberry Pi or across thousands of data-centre compute resources. Resources available to a Fabric may be rapidly and easily expanded or contracted as required.
  • Federated – A Fabric may be aligned with a single application or factory process, or support many local applications / processes; i.e. a local on-Premise Cloud.
  • Decoupled – When a System is installed, all software components are automatically cached on-Fabric from authorised remote repositories.
  • Reactive – The Service Fabric implements the latest OSGi specifications for Asynchronous and Stream-based networking, including sophisticated Back-Pressure and Circuit Breaker behaviours (a flavour of this asynchronous style is sketched after this list). The Service Fabric, from its inception in 2005, has always supported the dynamic scaling of applications based on pluggable, application specific ‘Replication Handlers’.
  • Evolvable – Most importantly, applications may be rapidly upgraded or rolled back within each Fabric – with only the modules that require upgrading being replaced within the runtime.
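
As a flavour of that asynchronous style, the snippet below uses the standard OSGi Promises API (org.osgi.util.promise, OSGi Release 6); it is generic API usage rather than Service Fabric code, and the ‘diagnostics’ work item is invented:

    import org.osgi.util.promise.Deferred;
    import org.osgi.util.promise.Promise;

    /**
     * Minimal illustration of the standard OSGi Promise API (OSGi R6,
     * org.osgi.util.promise). Generic API usage, not Service Fabric code;
     * the 'diagnostics' work item is invented for the example.
     */
    public class PromiseSketch {

        public static void main(String[] args) throws Exception {
            Deferred<String> deferred = new Deferred<>();
            Promise<String> result = deferred.getPromise();

            // Register interest now; the value may arrive much later,
            // possibly from another thread or another node entirely
            result.onResolve(() -> System.out.println("diagnostics complete"));

            // Elsewhere, the producer eventually resolves the promise
            deferred.resolve("noise signature within tolerance");

            System.out.println(result.getValue());
        }
    }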

Unlike OSGi and Java, the Paremus Service Fabric was perhaps only 10 years too early in anticipating the requirements of pervasive IoT.

The Paremus Service Fabric will be providing the Cloud runtime for this year’s OSGi Community IoT Lab – see http://enroute.osgi.org/book/650-trains.html. If you are interested in how OSGi can enable the next generation of adaptive, evolvable, federated Internet of Things – drop by and talk to the Paremus team and our OSGi Alliance IoT Ecosystem partners!