Service Granularity

Gerry Tyra - April 2015


Size does matter. But relative to what? How is the code distributed through the system? What is the inter-service communication mechanism, and what are its bandwidth and latency? We strive for a global optimum in light of the problem at hand.


Partitioning code is one of the most abused and misused activities in software engineering. In a single-process solution, the granularity determines how the problem can be distributed among the programming staff.

However, as the solution space is spread over more processes and/or processors, the global optimum for granularity can shift.

The global optimum is emphasized because development efforts that cross departmental boundaries tend to become competitive. In such an environment, individual local optima may be pursued at the expense of the global optimum.

Too Small to be Coherent:

Regardless of what the block of code is called or where it is located, if it has been partitioned to the point that critically related data has been separated, the granularity is too fine.

Consider an Inertial Navigation System. There are different inputs providing different data (e.g., GPS, barometric altitude, accelerometers, etc.). But the current 6 DoF (six degrees of freedom) solution needs to be presented as a coherent entity. It is neither practical nor safe to allow every consumer process to read the source data and compute its own navigation solution.
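This coherence requirement can be sketched in a few lines of C++ (the struct layout and class names here are invented for illustration, not taken from any particular INS): one service owns the fusion step and publishes the full 6 DoF solution as a single snapshot, so no consumer ever sees a half-updated state or recomputes its own solution from raw sensors.

```cpp
#include <mutex>

// Hypothetical sketch: a single navigation service fuses the raw inputs
// and publishes the 6 DoF solution as one coherent snapshot.
struct NavSolution {
    double x, y, z;            // position
    double roll, pitch, yaw;   // attitude
    long   timestamp_us;       // all six values are valid at this instant
};

class NavService {
    NavSolution current_{};
    mutable std::mutex lock_;
public:
    // The fusion step writes all six values under one lock, so a reader
    // can never observe position from one cycle and attitude from another.
    void publish(const NavSolution& s) {
        std::lock_guard<std::mutex> g(lock_);
        current_ = s;
    }
    // Consumers take a snapshot copy instead of recomputing their own
    // solution from the raw GPS/baro/accelerometer inputs.
    NavSolution snapshot() const {
        std::lock_guard<std::mutex> g(lock_);
        return current_;
    }
};
```

Partition below this level — say, one service per sensor, with consumers doing their own fusion — and the critically related data has been separated.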

Too Big to be Understood:

If too much of the problem is brought together under a single service or process, the ability to understand the code is reduced. As the code takes on the complexity of the Gordian Knot, it becomes impossible to understand until it is cut.

And, what you can't understand, you can't build or maintain.

The Limits of Agile Development:

Agile development has a hidden assumption: that a small group of developers can grasp the intricacies of the problem and the general outline of the solution. Only then can they move forward. For without understanding the problem and solution, how can they tell in which direction to move?

The classic waterfall paradigm and agile development are not mutually exclusive. Rather, the waterfall is the macroscopic view of the problem, while agile development is the microscopic view.

Straight Line or Multidimensional Mess?:

A problem may be solved in a linear manner; think of a string of pearls, where each pearl is a process or service. Or it may involve a complex N-dimensional mess of services, with each called service making decisions that may reroute the solution through the net on its way to the final answer. Each implementation brings with it opportunities and constraints.

The opportunity in a straight-line series of processes is the obvious distribution of the code over the various nodes. This allows smaller teams to divide the problem in their attempt to conquer it.

The multidimensional mess has both the power and the weakness of its complexity. Someone has to understand, and document in detail, the routing options and the conditions under which those options are taken.
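The two shapes can be contrasted in miniature (a hypothetical sketch; the stage and node machinery is invented for illustration): in the string of pearls, the route is fixed and readable straight off the code, while in the mesh each node chooses the next hop, so the routing lives in the data returned at runtime.

```cpp
#include <functional>
#include <vector>

// "String of pearls": data flows through a fixed sequence of stages.
// The route is the list itself -- it documents itself.
int run_pipeline(int input,
                 const std::vector<std::function<int(int)>>& stages) {
    for (const auto& stage : stages) input = stage(input);
    return input;
}

// Multidimensional mesh: each node transforms the value AND picks the
// next node, so the routing conditions must be documented separately.
struct Step { int value; int next; };  // next == -1 means the answer is final

int run_mesh(int input,
             const std::vector<std::function<Step(int)>>& nodes, int start) {
    for (int at = start; at != -1; ) {
        Step s = nodes[at](input);
        input = s.value;
        at = s.next;  // a called service may reroute the solution
    }
    return input;
}
```

Even at toy scale the difference shows: the pipeline's behavior is visible in its declaration, while the mesh's behavior only emerges from the `next` values each node computes.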

In-Line, To Call, or To Message?:

In the author's opinion, this decision will make or break a real-time application. The simple answer is to use all three; but what is the balance?

In-line code will run faster, but at the cost of a large code footprint. And maintainability becomes a nightmare.

Every function call has overhead associated with it. If the function does a significant amount of work, this is a reasonable trade-off. However, executing one line and returning is a bit ridiculous. For this reason, the author is not in favor of using setters and getters (a.k.a. accessors) to access high-use data elements in a class. Why use a getter to check a Boolean condition? What that Boolean represents may well deserve data hiding, but the Boolean itself?
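An illustrative sketch of that point (the `Channel` class and its members are invented names, and this is single-threaded for simplicity): the flag itself is public for high-rate readers, while the state it summarizes stays hidden behind the class interface.

```cpp
// The Boolean is exposed directly -- high-use readers just test it, with
// no accessor-call overhead. What it *represents* stays hidden.
class Channel {
    int hidden_fill_level_ = 0;   // the data behind the flag stays private
public:
    bool data_ready = false;      // read this directly; no getter

    void produce() {
        ++hidden_fill_level_;     // hidden bookkeeping
        data_ready = true;        // the flag summarizes the hidden state
    }
    void consume() {
        if (hidden_fill_level_ > 0) --hidden_fill_level_;
        data_ready = (hidden_fill_level_ > 0);
    }
};
```

In practice a modern compiler will often inline a trivial getter anyway, but that optimization is not guaranteed across translation units, and in a hard real-time loop "often" is not a budget.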

Messaging is powerful and allows the functionality to be spread over multiple processes and processors, but at a significant cost in processing and latency. Every message requires moving data through the message stack, switching into kernel space and back, and transmission delay. Then there is the high probability of several context switches on the sending and receiving processes. This all adds up.
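A rough, POSIX-only sketch of the floor of that cost (the numbers vary wildly by platform, and this deliberately measures only the kernel transitions — real IPC adds serialization, scheduling, and the context switches mentioned above on top): shuttle one byte through pipes and time the per-iteration round trip, then compare it mentally with a plain function call doing the same "work".

```cpp
#include <unistd.h>
#include <chrono>

// Shuttle one byte out and back through pipes to expose the user/kernel
// transitions that every message-based hop pays, even before any real
// IPC machinery is involved.
long long pipe_round_trip_ns(int iterations) {
    int to[2], from[2];
    if (pipe(to) != 0 || pipe(from) != 0) return -1;
    char b = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        (void)write(to[1], &b, 1);    // "send": enter kernel space
        (void)read(to[0], &b, 1);     // "receive": and back out again
        (void)write(from[1], &b, 1);  // "reply"
        (void)read(from[0], &b, 1);
    }
    auto t1 = std::chrono::steady_clock::now();
    close(to[0]); close(to[1]); close(from[0]); close(from[1]);
    return std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0)
               .count() / iterations;
}
```

Four system calls per round trip, and the result is typically measured in microseconds where an in-process call is measured in nanoseconds — which is exactly the balance the preceding paragraphs are weighing.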

It's About Time:

The recurring theme in these papers has been the need for reliable real-time performance. How the problem space is partitioned and then allocated to hardware resources has a fundamental impact on that performance. Very few projects are so constrained in resources that absolute optimization is required. Most programs allow for some level of compromise and even sloppiness.

But, at some point, if you break your time criteria, you have failed.


Careful management of service/process granularity is an important tool in meeting system performance requirements. Sloppy management, or arbitrarily taking services "off the shelf", may allow for excellent early project performance reports. But when the serious testing begins, the real bill is likely to come due, usually at a time when the project can least afford it.