Mission Systems Hardware the KISS Way

An aircraft that does nothing but bore holes in the sky isn't very useful, unless you're the pilot and that is all you want to do.

Mission systems cover many things. Given that missile defense for commercial aircraft is an ongoing topic, this discussion is not limited to high-end military platforms.

Manned or unmanned, it doesn't make much difference: the flight controls are safety critical, and most other things have some play, though the exceptions are pretty extreme. So some robustness is useful.

It is further assumed that some off-board communication is required. Again, it doesn't matter whether the aircraft is manned or unmanned.

So, what is needed?

First, except for "throw away" aircraft, the flight control computers should be redundant, and I prefer the network to the flight controls to be SAE AS5643 (avionics grade IEEE 1394). The computers and network provide redundancy and isolation from the rest of the system. The flight control computers may be connected to the mission systems network; after all, a number of sub-systems will need current flight data. But if they are, the connection will be relatively closed, to protect the flight controls from unintended data on the network. If the flight control computers require off-board access, such as in a UAV, the connection would be via a virtual private network (VPN) or something equivalent.

Software for flight controls is usually considered the hardest of hard real-time software. But in its own way it is fairly simple. The operations are constrained to a well analyzed problem space, working in a time frame defined by the dynamics of the system. The computations are consistent and regular.

On the other hand, mission systems are a technical hodgepodge. Different sub-systems have different intrinsic time constants, and various sub-systems communicate with each other, so end-to-end latency can become critical. Providing adequate bandwidth and compute resources is therefore a major design consideration.

The first element is a port/starboard pair of Line Replaceable Units (LRUs). Each LRU contains one or more backplanes; the actual number is dependent on the target system. Each backplane can hold one or more single board computers, and each board can have one or more processors, each with one or more cores. The only real constraint is that each card should have at least three network interface controllers (NICs).
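
To make that hierarchy concrete, here is a minimal sketch of how it might be described in configuration data. The structure and field names are my own invention for illustration, not from any real configuration schema.

    // Hypothetical sketch of the LRU/backplane/card hierarchy described above.
    #include <string>
    #include <vector>

    struct Card {
        std::string slot;
        int processors = 1;
        int cores_per_processor = 1;
        int nics = 3;                      // two redundant networks plus maintenance
        std::vector<std::string> io;       // e.g. serial or 1394 interfaces on this card
    };

    struct Backplane {
        std::vector<Card> cards;
    };

    struct Lru {                           // one per side: port and starboard
        std::string side;                  // "port" or "starboard"
        std::vector<Backplane> backplanes;
    };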

Each LRU also contains a network switch (Ethernet is assumed, unless a specific application requires otherwise), with each switch having its own subnet. Each switch is connected to every computer board. This provides network redundancy, with the third NIC on each card being used for maintenance and debug.

As discussed in "Alternative Approach to Avionics: KISS" (long version), each backplane has a "Manager" process, server, or whatever you want to call it. The Manager's job, based on configuration data and the current system state, is to bring up a series of processes. The processes are launched on any processor card that has the appropriate resources available. So, a process dedicated to maintaining serial I/O with a sub-system would only be started on a card with the appropriate I/O interface.
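
As a rough illustration of that matching step (the types and names here are mine, not from the referenced paper), a Manager could hold a table of card capabilities and place each configured process on the first card that satisfies its requirements:

    // Sketch of resource-matched process placement; all names are hypothetical.
    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Card {
        std::string id;
        std::vector<std::string> io;   // e.g. {"serial", "1394"}
    };

    struct ProcessSpec {
        std::string name;
        std::string required_io;       // empty means any card will do
    };

    // Return the id of a card that can host the process, or "" if none can.
    std::string place(const std::vector<Card>& cards, const ProcessSpec& p) {
        for (const Card& c : cards) {
            if (p.required_io.empty() ||
                std::find(c.io.begin(), c.io.end(), p.required_io) != c.io.end())
                return c.id;
        }
        return "";
    }

    int main() {
        std::vector<Card> cards = {
            {"slot-1", {"1394"}},
            {"slot-2", {"serial", "1394"}},
        };
        ProcessSpec serial_io{"radar-serial-io", "serial"};
        std::cout << serial_io.name << " -> " << place(cards, serial_io) << "\n";
    }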

Given that the Managers share status with each other, a Manager going down does not shut the system down. Rather, any other Manager can take up the load and restart the missing Manager.
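
One simple way to detect a missing Manager (purely illustrative; the actual mechanism is not specified here) is a periodic heartbeat: each Manager timestamps its status broadcast, and a peer that sees the timestamp go stale takes up the load and attempts the restart.

    // Minimal heartbeat-staleness check, assuming Managers exchange timestamped
    // status messages. The two-second threshold is an arbitrary example value.
    #include <chrono>

    using Clock = std::chrono::steady_clock;

    bool manager_is_down(Clock::time_point last_heard,
                         Clock::duration timeout = std::chrono::seconds(2)) {
        return Clock::now() - last_heard > timeout;
    }
    // A surviving Manager that sees manager_is_down() return true would adopt the
    // failed Manager's processes and attempt to restart it.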

Any process needing to communicate with another process can query its Manager, and the Manager will respond with a data structure providing the connection data. This data would contain both the primary and secondary network connections, as appropriate. In this way, while individual cards have assigned IP addresses, the processes do not have this data hard-coded.
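
A connection record of that sort might look something like the following. This is a sketch with made-up field names; the real structure would be defined by the Manager's interface.

    // Hypothetical reply to a "where is process X?" query to the local Manager.
    #include <cstdint>
    #include <string>

    struct Endpoint {
        std::string ip;      // card address on one of the two switch subnets
        uint16_t    port;
    };

    struct ConnectionInfo {
        std::string process_name;
        Endpoint primary;    // via switch/subnet A
        Endpoint secondary;  // via switch/subnet B, used if A fails
    };
    // The querying process never hard-codes these values; it only uses what the
    // Manager returns.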

Not having the IP addresses hard-coded has an additional virtue, at least in my opinion. The aircraft can be given a single, unique IP address. An incoming message for a particular application is sent to a specific, known port. The router in the comm system then does port forwarding to the correct card. This simplifies the overall design by making hardware configuration and software allocation independent of the external environment. So a reconfiguration in flight, triggered by a failure or damage, or a radical hardware upgrade, is transparent to networked applications.
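
For illustration only (the ports and internal addresses below are invented), the comm system's router would hold something like a port-forwarding table, so a message arriving at the aircraft's single external address is steered to whichever card currently hosts the application:

    // Hypothetical port-forwarding table: external port -> current internal host.
    // A reconfiguration just rewrites the map; off-board senders see no change.
    #include <cstdint>
    #include <map>
    #include <string>

    std::map<uint16_t, std::string> forwarding = {
        {5001, "10.0.1.12:5001"},   // e.g. EOIR control application
        {5002, "10.0.2.7:5002"},    // e.g. EW warning application
    };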

As a side note, this same networking philosophy can be applied to a ground station, to the same benefit.

As a final note, consider how two processes could communicate. If they are running in the same memory space, a message can be passed as a reference or pointer. If they are running in the same physical memory but different virtual memory spaces, pointers can't be used. But moving the data through shared memory in a native format, packed to natural word boundaries, is the most efficient way to transfer the data.
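
A sketch of the idea, assuming a POSIX shared-memory segment (error handling omitted, and the segment name and message layout are invented): the message is a plain struct at natural alignment, copied into the segment rather than serialized.

    // Sketch: passing a naturally aligned struct through POSIX shared memory.
    #include <cstdint>
    #include <cstring>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct NavMessage {        // plain-old-data, natural word alignment
        double   latitude;
        double   longitude;
        double   altitude_m;
        uint32_t sequence;
    };

    int main() {
        int fd = shm_open("/msg_demo", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(NavMessage));
        void* mem = mmap(nullptr, sizeof(NavMessage),
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        NavMessage msg{37.0, -122.0, 1500.0, 1};
        std::memcpy(mem, &msg, sizeof msg);   // no packing or byte-swapping needed
        munmap(mem, sizeof(NavMessage));
        close(fd);
        shm_unlink("/msg_demo");
        return 0;
    }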

Once the data is to be passed over a network link, either wired or RF, the limited bandwidth of the network makes packing mechanisms worthwhile. As the bandwidth becomes more constrained, spending more CPU cycles to compress the data can be justified.

This philosophy is the origin of the multiple serializers provided in the sample code.
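
As a rough illustration of that trade-off (not the actual sample code, and the message type is invented), the same message could be offered in two forms: a cheap native copy for local transfer, and a packed, byte-order-fixed form for the wire.

    // Illustrative only: two "serializers" for the same message.
    #include <arpa/inet.h>   // htonl
    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct Status {
        uint32_t id;
        uint32_t flags;
    };

    // Local: just the raw bytes, no CPU spent on conversion.
    std::vector<uint8_t> serialize_native(const Status& s) {
        std::vector<uint8_t> out(sizeof s);
        std::memcpy(out.data(), &s, sizeof s);
        return out;
    }

    // Network: fixed byte order, field by field, at some CPU cost.
    std::vector<uint8_t> serialize_wire(const Status& s) {
        std::vector<uint8_t> out(8);
        uint32_t id = htonl(s.id), flags = htonl(s.flags);
        std::memcpy(out.data(),     &id,    4);
        std::memcpy(out.data() + 4, &flags, 4);
        return out;
    }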

Another extension, which will be explored in greater detail, is to provide a mechanism for derived messages. Consider a message designed to control some "common" sub-system (e.g. EOIR ball, EW warning, etc.). A lot of the functionality is common, especially if scale factors are provided, but different systems from different vendors have different parameters available.

With derived messages, the base class message provides the majority of the functionality and is able to stand alone. The derived messages provide extended functionality for one or more systems. Rather than providing for the evolution of a message type, this provides for the diversity within a type of sub-system.
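
As a sketch of that pattern (the class and field names here are invented), a base pointing command carries the parameters every vendor supports, while a vendor-specific derivation adds what only that system exposes. A consumer that only understands the base class still works.

    // Illustrative derived-message pattern; names are hypothetical.
    #include <iostream>

    struct PointingCommand {            // base: works for any "common" gimbal
        double azimuth_deg   = 0.0;
        double elevation_deg = 0.0;
        virtual void apply() const {
            std::cout << "point az=" << azimuth_deg
                      << " el=" << elevation_deg << "\n";
        }
        virtual ~PointingCommand() = default;
    };

    struct VendorXPointingCommand : PointingCommand {  // adds a vendor-only field
        double focus = 0.0;
        void apply() const override {
            PointingCommand::apply();
            std::cout << "focus=" << focus << "\n";
        }
    };

    int main() {
        VendorXPointingCommand cmd;
        cmd.azimuth_deg = 45.0;
        cmd.elevation_deg = -10.0;
        cmd.focus = 0.8;
        const PointingCommand& base = cmd;  // consumers can use the base alone
        base.apply();                       // virtual dispatch reaches the extension
    }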