Real Real-Time

Gerry Tyra - April 2015

Abstract:

Different people have different definitions of what constitutes a “Real-Time” computer system. A working definition is presented here, which will be the basis for all the discussion that follows.

Introduction:

The author has, all too often, heard the argument that frame-based real time is the only true, hard real time. Wrong. If the completion time of a calculation is a factor in the application of its outcome, then the calculation is real time.

Frame Based:

This is the classic approach to real time, with its basis in Digital Signal Processing. The conversion from the analog domain to the discretely sampled digital domain, and back, introduces a number of constraints.

The original analog signal must be defined by an upper working frequency; any frequency components above that value need to be filtered out of the signal. Once this upper frequency is known, a sampling rate of at least twice that value (the Nyquist rate) can be selected. Further analysis is required to determine how consistently the samples must be taken with respect to time (jitter and drift). Then the effective resolution of each sample must be determined (just because the A/D outputs 12 bits of data doesn't mean that the 12th bit carries real data rather than noise).
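
As a rough illustration of this sizing exercise, consider the following sketch in C. The 20 kHz band, the 20% guard band, and the 12-bit converter are illustrative assumptions, not drawn from any particular system, and the 6.02N + 1.76 dB figure is the ideal quantization limit rather than a measured value.

    /* Sketch: sizing a sampler for a hypothetical 20 kHz signal band.
       All numbers here are illustrative assumptions. */
    #include <stdio.h>

    int main(void)
    {
        double f_max  = 20000.0;            /* highest frequency of interest, Hz */
        double f_s    = 2.0 * f_max * 1.2;  /* Nyquist rate plus a 20% guard band */
        int    bits   = 12;                 /* A/D word length */

        /* Ideal quantization SNR is 6.02*N + 1.76 dB; if the measured noise
           floor is higher, the effective number of bits is less than N. */
        double snr_db = 6.02 * bits + 1.76;

        printf("sample at >= %.0f Hz; ideal SNR %.1f dB\n", f_s, snr_db);
        return 0;
    }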

After some allowable latency, the determination of which is yet another analysis, the resulting data can be output at an appropriate resolution.

If the latency between input and output is greater than one sample period, or the output rate is downsampled from the input rate, the signal processing chain also has an isochronous component.

When all is said and done, if the sampling period is disrupted, on either input or output, noise is introduced into the system. The impact of that noise can be anything from annoying to catastrophic.

While this has been portrayed as a difficult paradigm to work in, the author finds it to be one of the simplest. With all of the constraints, there is little room to improvise or explore technical variation. It all becomes very “plug and chug”.

Isochronous Real Time:

Isochronous refers to a constant rate. In this context, we are discussing the processing and transfer of data at a constant rate, plus some small variation. Consider a 48 kHz audio signal carried over a USB or IEEE 1394 interface. Both interfaces use an 8 kHz data frame; therefore, each frame should transport six samples. If only five samples are available for one transmit window, the following frame can carry seven samples to maintain the appropriate data transfer rate.
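
The bookkeeping that keeps the average rate exact generalizes to rates that do not divide the frame rate evenly. A minimal sketch in C, using an assumed 44.1 kHz stream for illustration: a remainder accumulator decides how many samples each frame carries.

    /* Sketch: spreading a 44100 Hz sample stream over 8000 frames/s.
       44100 / 8000 = 5.5125, so frames carry 5 or 6 samples while the
       remainder accumulator keeps the long-run rate exact. */
    #include <stdio.h>

    int main(void)
    {
        const int sample_rate = 44100;   /* samples per second */
        const int frame_rate  = 8000;    /* isochronous frames per second */
        int accum = 0;                   /* samples owed but not yet sent */

        for (int frame = 0; frame < 16; frame++) {
            accum += sample_rate;
            int n = accum / frame_rate;  /* samples placed in this frame */
            accum -= n * frame_rate;
            printf("frame %2d: %d samples\n", frame, n);
        }
        return 0;
    }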

The source and sink of the data may have the timing requirements associated with frame-based real time, but the intermediate processing can be considered isochronous if there is enough latency between the input and the output.

As an example, consider a control system with a 100 Hz sample rate. At each clock tick, one sample is acquired and one previously computed value is output. But if three frames are allowed between input and output (i.e., the output is based on 40 ms old data), there must be at least one output sample in the queue, though there could be as many as three.
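
A minimal sketch of that arrangement, with a small ring buffer standing in for the output queue; the three pre-filled samples and the trivial "control law" are assumptions for illustration only.

    /* Sketch: a 100 Hz loop whose output lags the input by several frames.
       The queue absorbs processing-time jitter; the invariant is that it
       never runs empty at output time. In this simplified sketch the
       "processing" completes instantly, so the depth holds at three; with
       real jitter it would wander between one and three. */
    #include <assert.h>
    #include <stdio.h>

    #define DEPTH 4                          /* room for up to 3 samples in flight */

    static double queue[DEPTH];
    static int head = 0, tail = 0, count = 0;

    static void   push(double v) { queue[tail] = v; tail = (tail + 1) % DEPTH; count++; }
    static double pop(void)      { double v = queue[head]; head = (head + 1) % DEPTH; count--; return v; }

    int main(void)
    {
        push(0.0); push(0.0); push(0.0);     /* results computed before the loop starts */

        for (int tick = 0; tick < 10; tick++) {
            assert(count >= 1);              /* never starve the output */
            double out = pop();              /* emit a previously computed value */
            double in  = (double)tick;       /* acquire this tick's sample */
            push(in * 0.5);                  /* hypothetical stand-in for the control law */
            printf("tick %d: out = %.1f, queue depth = %d\n", tick, out, count);
        }
        return 0;
    }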

In this context, the surges and sags in processing and data transfer are not significant, so long as new data is absorbed and processed data is delivered at the required rate.

Event Based:

Applying the concepts of real-time processing to event-triggered problem solving may well be the most difficult case to manage. You don't know when the event will happen, or, worse, you may not even know what the event actually is until you have done a lot of processing. And once you have recognized a solid event, there may be very little time left before effective action has to be taken.

Given the author's interest in avionics, an aircraft example is presented.

In the current world, surface-to-air missiles are a concern for both military and civilian aircraft. The threat can be anything from a man-portable, visually guided system to a long-range, radar-guided missile battery. In many cases, the aircraft cannot detect an actual threat (e.g., a search radar is not a threat, while a tracking radar is a bad sign) until a missile is launched.

The launch is the event, and the time frame is determined by the missile's time in flight. The solution has many stages: Is there a threat? What is it? Is it targeted at us? What are the defensive options? Start executing the best available option, while keeping track of the other options.

So, here we have a case where there is no lead-in or preparation time. Rather, there is a running process that may never produce an output, until it produces a critical output. What ensues is a series of dynamically allocated functions, the number and type being determined by the perception of the time frame available. After all, a good solution is better than the best solution, if the best solution takes too long to compute.
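
One common way to structure such a computation is an "anytime" loop: keep a usable best-so-far answer at all times and refine it only while the time budget allows. A minimal sketch, assuming POSIX clock_gettime() and a hypothetical refine_option() step; the 5 ms budget is purely illustrative.

    /* Sketch: deadline-driven "anytime" refinement. A usable answer exists
       from the first pass; later passes improve it only while time remains. */
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in for one incremental refinement step; returns the
       score of the best defensive option found so far. */
    static double refine_option(int pass) { return 1.0 - 1.0 / (pass + 2); }

    static double elapsed_ms(struct timespec t0)
    {
        struct timespec t1;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }

    int main(void)
    {
        const double budget_ms = 5.0;        /* assumed time to impact; illustrative */
        struct timespec t0;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        double best = 0.0;                   /* a "good" answer, available immediately */
        for (int pass = 0; elapsed_ms(t0) < budget_ms; pass++)
            best = refine_option(pass);      /* work toward the "best" answer */

        printf("deadline reached; acting on option with score %.3f\n", best);
        return 0;
    }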

Mixed Mode is Bad:

Running an event-based system at a fixed rate automatically extends the latency of the system by an average of one half frame period per processing stage. Each delay cannot be less than the task-switch time associated with the incoming event, and it can be as long as a full frame. If there is a long series of processing stages, as occurs when a service-oriented architecture is defined at too fine a granularity, the accumulated latency can have catastrophic effects.
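
The arithmetic is easy to sketch: with N polling stages of frame period T, the added delay averages N*T/2 and can approach N*T in the worst case. The stage count and frame period below are illustrative assumptions.

    /* Sketch: latency added by polling an event through a chain of
       fixed-rate stages. Stage count and frame period are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        const int    stages   = 8;     /* e.g., services sliced too finely */
        const double frame_ms = 10.0;  /* frame period at each stage */

        double average = stages * frame_ms / 2.0;  /* event lands mid-frame on average */
        double worst   = stages * frame_ms;        /* event just misses every frame */

        printf("added latency: %.0f ms average, %.0f ms worst case\n",
               average, worst);
        return 0;
    }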

Summary:

The lack of a requirement for a frame-based computational solution is not permission to carry out sloppy, computationally inefficient design. A non-DSP solution does not mean that the problem/solution pairing is not time critical, nor does it mean that violating the time constraints will not have dire consequences.