How to scale without lock-in

How do you scale your Spring DM or POJO applications without development framework lock-in?

Cloud-centric composite applications promise to be more disruptive and more rewarding than either the move to client-server architectures in the early 1990s or the move to web services in the late 1990s. A successful Private Cloud / Platform as a Service (PaaS) solution will provide the robust and agile foundations for an organization’s next generation of IT services.

These Cloud / PaaS runtimes will be in use for many years to come. During their lifetime they must therefore be able to host a changing ecosystem of software services, frameworks and languages.

Hence they must:

  • be able to evolve seamlessly and incrementally in response to changing business demands
  • at all costs, avoid locking an organization into any one specific development framework, programming language or middleware messaging product.

Want to know more? Read the new Paremus Service Fabric architecture paper which may be found here.

Cloud Computing – finally, FINALLY, someone gets it!

I’ve been really busy these last few months, so I have not had the time or inclination to post. Yet after reading Simon Crosby’s recent article Whither the Venerable OS?, I felt compelled to put pen to paper – or rather, should that be fingers to keyboard.

Whilst the whole article is a good read, the magic paragraph for me appears towards its end:

If IaaS clouds are the new server vendors, then the OS meets the server when the user runs an app in the cloud. That radically changes the business model for the OS vendor. But is the OS then simply a runtime for an application? The OS vendors would rightly quibble with that. The OS is today the locus of innovation in applications, and its rich primitives for the development and support of multi-tiered apps that span multiple servers on virtualized infrastructure is an indication of the future of the OS itself: Just as the abstraction of hardware has extended over multiple servers, so will the abstraction of the application support and runtime layers. Unlike my friends at VMware who view virtualization as the “New OS” I view the New OS as the trend toward an app isolation abstraction that is independent of hardware: the emergence of Platform as a Service.

Yes! Finally someone understands!

This is, IMO, exactly right, and it is the motivation behind the Paremus Service Fabric – a path we started down in 2004!

OK, so we were a bit ahead of the industry innovation curve.

Anyway, related commentary on the internet suggests that Simon’s article validates VMware’s acquisition of SpringSource. Well, I’d actually argue quite the opposite. Normal operating systems have been designed to run upon a fixed, unchanging resource landscape; in contrast, a “Cloud” operating system must be able to adapt, and must allow hosted applications to adjust, to a continuously churning set of IaaS resources. Quite simply, SpringSource do not have these capabilities in any shape or form.

However, I would disagree with the last point in Simon’s article. Having reviewed Microsoft’s Azure architecture, I see nothing to distinguish it from the plethora of Cloud/distributed ISV solutions. Microsoft’s Azure platform has a management/provisioning framework that fundamentally appears to be based on a Paxos-like consensus algorithm; this is no different from the variety of ISVs that are using Apache ZooKeeper as a registry / repository: all connection-oriented architectures, all suffering from the same old problems!
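
To make the “connection-oriented” criticism concrete, here is a minimal sketch, assuming a plain Apache ZooKeeper ensemble, of the common registry pattern: a service advertises itself as an ephemeral node, so its registration lives and dies with the client session. The connection string, znode path and payload below are illustrative assumptions, not any particular vendor’s implementation.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralRegistration {
    public static void main(String[] args) throws Exception {
        // Connect to an (assumed) ZooKeeper ensemble; the connection string is illustrative.
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000,
                event -> System.out.println("Session event: " + event.getState()));

        // Register this service instance as an EPHEMERAL node: the entry exists
        // only for as long as the client session (i.e. the connection) stays alive.
        String path = zk.create(
                "/services/pricing/instance-",   // hypothetical registry path
                "host-a:9000".getBytes(),        // hypothetical endpoint payload
                ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL_SEQUENTIAL);

        System.out.println("Registered at " + path);
        // Lose the session (network partition, long GC pause, ensemble outage) and the
        // registration silently disappears – the tight coupling this post is criticising.
    }
}
```

The pattern works well while everything is stable; the point being made here is what happens when sessions and connections churn en masse.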

Whilst such solutions are robust in a static environment, they fail to account for the realities of complex system failures. Specifically, rather than being isolated, uncorrelated events, failures in complex systems tend to be correlated and to cascade! Cloud operating systems must address this fundamental reality, and Microsoft are no further ahead than VMware or Google; indeed the race hasn’t even started yet! And the best defence against cascading failure in complex systems? Well, that would be dynamic re-assembly driven by ‘eventual’ structural and data consistency.

Global Financial Meltdown and Google Mail Service Outage

Whilst the current global economic meltdown and the recent Google e-mail service outage (http://news.bbc.co.uk/1/hi/technology/7907583.stm) may seem entirely different types of event, there is some degree of commonality. Both represent catastrophic cascading failure within large, complex distributed systems.

The analogy unfortunately finishes there.

Google were up and running again in a couple of hours, whilst the world’s economies may take a decade to recover. However, the central theme – how to avoid systemic catastrophic failure within complex systems – remains of deep concern to system architects and economists alike.

Where does that leave “Cloud Computing”? Quite simply, don’t believe the hype. Public Cloud infrastructures will continue to fail, hopefully infrequently, but almost certainly in a spectacular manner. The next generation of Public Cloud will need to be built upon a more modular resource landscape (swarms of geographically dispersed, meshed data centre nodes) – with a suitably advanced, distributed and partitionable Cloud Operating System.

Unfortunately the same is true of the current generation of Grid Provisioning and Virtualization Management software solutions increasingly used by large corporations. Use of this technology will end in tears for a number of large IT departments: too much visible complexity, too little automation. Like the economic meltdown, these solutions fail to account for the outlier risks which cause systemic failure within complex systems.

The answer? Well, it’s not a programming language (sorry, Erlang!), nor a specific piece of middleware, nor a specific replication technology, nor classic clustering.


To start the journey one must first realize that…


Agility and Robustness are simply two faces of the same coin.

Impaled on the Horns of an OPEX Dilemma

The finance industry is clearly having a tough time at present. As losses mount, CEOs and CIOs are increasingly scrutinizing the costs of doing business. One interesting metric: the cost of running a single production application – $1,000,000 per annum! Take the many thousands of applications typically used in a large finance house, and operational costs rapidly exceed the billion-dollar-per-annum mark.
Why is this?

After all, over the last few years the finance industry has steadily driven down the price of enterprise software, to the point that an application server may now be charged at a few hundred dollars per node. Likewise, basic networking, servers and storage are cheaper than at any time in the past.

The problem isn’t the cost of the raw materials, but rather the fact that these organizations have built increasingly complex environments which must be maintained by an army of IT staff.

I’m probably not far off the mark in suggesting that 80% of the annual cost of each application relates to the support and development staff required to maintain it and keep it running.

And the choices available to the CxO?
  • Use Cheaper Resource: Ship operations out to China, India or Mexico! While attractive on paper as a quick fix, there is a catch. Wages tend to normalize as time progresses, with the cost of an initially cost-effective workforce rising to the point that the market will bear. Indeed, it has a name: “Free Market Dynamics”. Hence within a reasonable timeframe (~5 yrs) the cost advantage will have evaporated; meanwhile the company is still left with a complex, manually intensive operational environment. Traditional third-party outsourcing – of which several failed examples exist from the late-1999 / early-2000 period – falls into this category. This approach does nothing to address the root cause of the spiralling operational costs – complexity! In short, a strategy guaranteed to fail in the medium / long term.
  • Reduce the Number of Applications: If the cost relates to the number of applications, simply forcing down the number of applications in use will initially reduce OPEX costs. Whilst this is a reasonable strategy for some, the Financial Services industry is highly adaptive and constantly needs to evolve its applications and services. Hence, a “no new applications” policy merely results in bolt-ons of additional functionality to existing systems – increasing the complexity, and the associated costs, of the remaining applications.
  • Use Technology to Reduce Headcount: The IT industry has collectively failed to provide real solutions here! Despite a flood of Automated Run-Book, Monitoring, Configuration Management, Package / OS Deployment and Virtualization Management products, humans are still very much “in the loop”, directly concerned with all aspects of every software service in the runtime environment. Environments are more complex than ever!

So what is stopping the IT industry developing the right products? Simply, the industry continues to fail to realize that automating the existing approach is not sufficient. A fundamental, radical change in perspective with respect to how distributed systems are built and maintained is needed to address the Complexity Crisis organizations now face. Funnily enough, this is what Infiniflow has been developed to address.

And the users of the technology?

  • The fear of change!
  • The linear relationship between status and managed headcount.
  • And most importantly, a severe shortage of sufficiently talented engineers and architects who have the vision and determination to drive such changes through their organizations (Paremus refers to these rather special individuals as Samurai).

So if you are a frustrated Samurai, contact us at Paremus – we can introduce you to many like-minded individuals 🙂

Meanwhile, if you are a CEO / CIO with the desire to tackle the root causes of your organization’s IT complexity, why not drop me an e-mail and we’ll explain how we might be able to help; you may find the dramatic impact that Infiniflow has on operational cost of particular interest.

Venture Capitalists embrace Command Economy in preference to Free Market!

A recent article, Interesting Times for Distributed DataCentres, by Paul Strong (eBay Distinguished Research Scientist) makes a number of interesting points:

  • For Web2.0 services to scale, you MUST back-end them onto massively horizontally scaled processing environments.
  • Most enterprise datacentre environments are moving towards, or could already be considered, primordial Grid-type architectures.
  • What is really missing is the Data Centre MetaOperating System to provide the resource scheduling and management functions required.

Whilst these arguments are correct, and highlight a real need, the industry and VC response seems entirely inappropriate.

Whilst VCs and major systems vendors are happily throwing money into expounding the virtues of loosely coupled business models enabled by Web2.0 and all things WS-SOA, somewhat perplexingly they also continue to invest in management / virtualization / infrastructure solutions which drive tight couplings through the infrastructure stack. Examples include data centre “virtualization” or, as per my previous blog entry on the Complexity Crisis, configuration / deployment management tools.

Hence, industry investment seems to continue to favor the technology equivalent of the “command economy”, in which the next generation of distributed Grid data centre is really just one more iteration on today’s: a central IT organisation controls, manages and allocates IT resources through a rigid, hierarchical command-and-control structure. The whole environment is viewed as a rigid system which is centrally controlled at each layer of the ISO stack; an approach that continues the futile attempt to make distributed environments behave like mainframes!

What is actually needed is a good dose of Free Market Economics!

  • Business Services dynamically compete for available resources at each point in time,
  • Resources may come and go – as they see fit!
  • Infrastructure and systems look after their own interests, and optimise their behaviour to ensure overall efficiency within the Business Ecosystem (a toy sketch of such market-based allocation follows below).
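
To make the free-market idea a little more concrete, here is a minimal toy sketch of market-based allocation: services bid for scarce capacity units and the highest bids win each allocation round. This is purely illustrative – every class and method name below is invented for the example, and it is not the Paremus (or any vendor’s) implementation.

```java
import java.util.*;

// Toy illustration only: services bid for scarce capacity units and the highest
// bidders win each round. All names here are invented for the example.
public class ToyResourceMarket {

    record Bid(String service, int units, double pricePerUnit) {}

    /** Allocate 'capacity' units to the highest-paying bids first. */
    static Map<String, Integer> allocate(int capacity, List<Bid> bids) {
        Map<String, Integer> allocation = new LinkedHashMap<>();
        // Sort bids by price, highest first: the "market" favours the most valuable work.
        bids.sort(Comparator.comparingDouble(Bid::pricePerUnit).reversed());
        for (Bid bid : bids) {
            if (capacity == 0) break;
            int granted = Math.min(bid.units(), capacity);
            allocation.merge(bid.service(), granted, Integer::sum);
            capacity -= granted;
        }
        return allocation;
    }

    public static void main(String[] args) {
        List<Bid> bids = new ArrayList<>(List.of(
                new Bid("pricing-engine", 6, 0.90),   // values capacity highly right now
                new Bid("batch-reporting", 8, 0.20),  // happy to wait if out-bid
                new Bid("risk-calc", 4, 0.55)));
        // 10 units of capacity are available in this allocation round.
        System.out.println(allocate(10, bids));
        // => {pricing-engine=6, risk-calc=4} : batch-reporting is deferred this round.
    }
}
```

Run the round again as resources join or leave and the allocation re-forms around whatever capacity exists at that moment – no central schedule to rewrite.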

Successful next-generation MetaOperating Systems will heavily leverage such principles at the core of their architectures!

You simply cannot beat an efficient Market!