That Which Was Old Is New Again

Gerry Tyra – June 2015

Abstract:

There is an old saying that when all you have is a hammer, everything looks like a nail. Adhering to the systems paradigm du jour for all projects is a lot like having just a hammer. We need to look further afield before committing to a specific approach.

Introduction:

My personal experience with computers dates back to the age of punch cards; I missed paper tape by a couple of years. In the ensuing years, I have seen a number of different paradigms come and go, then come again. These changes are not the result of one approach being intrinsically superior to another. Rather, they are the result of shifts in the underlying technology changing the relative economics of the various paradigms. This does not make an old paradigm bad, just less cost effective for a given target market.

Quick Trip to the Server Farm:

For a working example, consider the see-sawing between local computation and server/client applications.

As a freshman in college, I spent my late nights at the keypunch machines, getting the decks read, and then waiting for a printout. The environment was a single machine. Later, we moved up to terminals on a “main frame” (they weren't all created equal) in a time-share mode. In spite of client/server being the paradigm in the driver's seat, a lot of people were working diligently on smaller computers, both minis and micros. Hence, this paradigm lasted for a number of years, until minis could fill the mid-range niche and personal computers became powerful enough to do “real work” on the desktop.

At this point, local computation became the path of choice, except when a problem required the heavy-duty solution in the back room. This pattern was driven by improving desktop computing capability, which provided a more responsive environment.

But this success had its own cost, literally. Corporate IT departments came to recognize the cost of maintaining hardware and software in this sea of small computers. This problem was an opportunity for others, and the thin-client/server paradigm became popular. It was a throwback to the client/server model, with the “dumb” terminal replaced by a “simple” PC acting as a smart terminal. But this environment required a server and, for its day, a high-performance network. Hence, it required a dedicated and trained staff to make it work right. So a lot of the non-Fortune 500 world stayed on the desktop, perhaps with a mini in the back room.

Then the underlying technologies changed, and the “Cloud” came into existence, driven by several new factors: the growth of the Internet, faster networks, the need for off-site backup, the move from monolithic mainframes to blade servers, and the advent of the smart phone.

In my opinion, the Cloud was driven by the advent of “smart” mobile computing, and it will be killed by the same. One of the characteristics of the first few waves of mobile computing, whether phones or pads, was a lot of interface capability but limited storage and less-than-spectacular computing capacity. These devices did, however, have mobile Internet access most of the time, so offloading data and heavier computing to the Cloud made sense. But as newer devices become more capable, why expend bandwidth going to the Cloud for what you can do locally? I'll admit that sharing is a different matter and a different market.

And the Point of This Was?:

Three waves of client/server technology, and these probably won't be the last. Each one was driven by different technology and market forces, but the paradigms remained. In each case there was a perceived need, and a means to satisfy that need. Somebody came up with the capital, and a marketing department was turned loose on an unsuspecting world. That “marketing department” can take on many forms, from fast-talking former snake-oil and used-car salesmen to revered techno sages. And as long as there are quarterly bonuses, everyone with a stake in the product will swear that they have the solution to all of your problems.

The hype is about sales and the shareholder value of the seller. But if you are a buyer, you have to look through the hype and the glossy brochure and determine what is of value to you. What good is snake oil if you don't own a snake? In fairness, though, if you have a dozen or so snakes, and they're starting to squeak, snake oil may be a high priority.

As I have commented in other papers, Avionics is a specialized niche market in computing. While it is still subject to the same strengths and weaknesses as any other computer, the Avionics bay of an aircraft is different from a server room filled with blade servers. There are different priorities and requirements. Just because a salesman has convinced you that they have the perfect solution for your server farm does not mean that the same methodology will play well on an aircraft.

Bringing This Mind Set to Avionics:

In the beginning, there was a stick and a set of rudder pedals. It took a very long time for the underlying technology to improve to the point where a radioman and a navigator were not critical crew members.

The early generations of avionics tended to be strongly federated. There was a box containing the functional hardware, with a control head as appropriate. By and large, there were radios, communications equipment, navigation aids, radar and other sensors. Separate boxes for separate functions, with very little, if any, machine-to-machine communication designed in. The brain of the pilot, or of a crewman, was the integrator of all data in the cockpit.

As electronics advanced, with tubes giving way to transistors, then to integrated circuits and on to large-scale integration, the functionality available for any given size, weight and power kept increasing.

Given that I was hiding under a rock for part of the critical period, I won't try to sort out the chicken and egg problem of which events triggered which other events. But, to summarize, the “glass cockpit” drove the integration of the various control heads that had been cluttering the cockpit. About the same time, designers noticed that the general purpose processors had enough reserve computational capacity to take on most of the routine processing functionality of the aircraft. So the concept of the homogeneous avionics suite came into vogue. The current primary example is the F-35 Lightning II.

The probable apex of this concept can be seen in a Service Oriented Architecture (SOA). All of the computers are the same, so any service can run anywhere. A move toward simplicity through uniformity. At least that is the theory. As I have written elsewhere, the practice is more difficult.

A (Network) Bridge Too Far:

The design advantage of a SOA is that any functional requirement can be implemented as a service without much regard for any other service. The entire system is intended to be very loosely coupled. But loose coupling has its own limitations in data coherency and in verifying data flow. And this assumes that the system integrates at all and survives verification and validation.

SOA designs, inspired by a blade server environment, can run into trouble in a real-time embedded environment. The first issue is the difference in virtual memory. The server farm has swap files on disk, which allow running processes to use more memory than is physically present in the system. That just doesn't happen in embedded systems. The physical RAM is what it is. When it is gone, the system crashes.
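
As a minimal sketch of what that constraint looks like in practice, a flight-critical service might carve all of its working storage out of a fixed, statically sized pool at initialization. The pool size, names and bump-allocator scheme below are purely illustrative assumptions, not any particular avionics standard:

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative only: a pool sized at build time.  There is no swap file
     * to fall back on; if this pool is exhausted, the request simply fails
     * and the system must be designed to cope (or be resized). */
    #define POOL_SIZE_BYTES (512u * 1024u)

    static uint8_t pool[POOL_SIZE_BYTES];
    static size_t  pool_used = 0;

    /* Bump allocator: hand out memory until the physical budget is gone. */
    void *pool_alloc(size_t nbytes)
    {
        /* Keep allocations aligned for the worst-case scalar type. */
        size_t aligned = (nbytes + 7u) & ~(size_t)7u;

        if (aligned > POOL_SIZE_BYTES - pool_used) {
            return NULL;        /* out of physical RAM: no disk to page to */
        }

        void *p = &pool[pool_used];
        pool_used += aligned;
        return p;
    }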

However, this isn't insurmountable by itself. If the overall system is relatively static, it can be tested, molded to fit, and retested until it can be flown safely. But if the payloads are changed frequently, the retesting process may become the limiting factor. Testing becomes a critical issue in that there is no coherent design of the overall system, only a bucket of services that collectively meet the requirements. Without that structure, there is nothing to hang a structured test methodology on. These are the considerations that keep Configuration Managers and Quality Assurance Engineers awake at night.

Yet, even as the homogeneous system paradigm was gaining acceptance, it was already fracturing, in this case on the F-35 again. The weight, volume and signal issues of a classic wiring harness had become a stumbling block. Enter the Remote Interface Unit (RIU). This is a small integrated device with the required real-world interface on the front side and an aircraft network interface on the back side. Now you don't have to send a 2.6 V signal to get a 5.2-degree deflection of an aileron; you just send a message commanding a 5.2-degree deflection. Using a time-multiplexed network reduces RF emissions, RF susceptibility, and power loss. And, given the computing power that can be applied as part of the RIU, the RIU can do a final data validation. In this way, if a given actuator shouldn't exceed a certain deflection, any command to do so can be flagged back to the main system as an error.
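
As a hedged sketch of that last point, an RIU might validate a surface command before driving the actuator. The message layout, limits, and hook functions below are hypothetical illustrations, not drawn from any particular aircraft bus standard:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical command message: the central system asks for a surface
     * deflection in hundredths of a degree rather than driving a voltage. */
    typedef struct {
        uint8_t surface_id;       /* which actuator this command addresses   */
        int16_t deflection_cdeg;  /* requested deflection, 0.01-degree units */
    } surface_cmd_t;

    /* Illustrative travel limit for this actuator: +/- 20.00 degrees. */
    #define AILERON_LIMIT_CDEG 2000

    /* Hooks assumed to exist elsewhere in the RIU firmware. */
    void report_fault_to_main_system(uint8_t surface_id);
    void drive_actuator(uint8_t surface_id, int16_t deflection_cdeg);

    /* Final validation at the RIU: apply the command only if it is within
     * the actuator's mechanical limits; otherwise flag it back as an error. */
    bool riu_apply_command(const surface_cmd_t *cmd)
    {
        if (cmd->deflection_cdeg >  AILERON_LIMIT_CDEG ||
            cmd->deflection_cdeg < -AILERON_LIMIT_CDEG) {
            report_fault_to_main_system(cmd->surface_id);
            return false;
        }
        drive_actuator(cmd->surface_id, cmd->deflection_cdeg);
        return true;
    }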

Federating the Payloads:

The network binds the avionics computers together. But every network has distance and throughput limitations. Given the use of radar and imaging devices on modern aircraft, it doesn't take long for bandwidth requirements to consume the safe operating reserve that all wise network designers build in.

As expanding functionality consumes the available bandwidth, the architects and designers look for a means to reduce the load on the network. The obvious solution is to reduce the data load. Why send a haystack of raw data to the central system and rely on the central system to find the needle?

If we have the processors available to do the tasks associated with the RIUs, why not use other “small” computers to do the front-end processing at the payload? This permits the payload to find its own “needle” and pass just that to the central system.
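
A minimal sketch of that idea, with entirely hypothetical types and thresholds: instead of streaming every raw sample to the central system, the payload's own processor scans the data and forwards only the detections that clear a threshold.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical detection report: the "needle" sent over the aircraft
     * network instead of the raw sample stream (the "haystack"). */
    typedef struct {
        uint32_t sample_index;
        uint16_t magnitude;
    } detection_t;

    /* Front-end processing at the payload: scan a block of raw samples and
     * keep only those at or above a detection threshold.  The return value
     * is the number of detections written, which is all that actually
     * crosses the aircraft network. */
    size_t find_detections(const uint16_t *raw, size_t n_raw,
                           uint16_t threshold,
                           detection_t *out, size_t max_out)
    {
        size_t n_out = 0;
        for (size_t i = 0; i < n_raw && n_out < max_out; ++i) {
            if (raw[i] >= threshold) {
                out[n_out].sample_index = (uint32_t)i;
                out[n_out].magnitude    = raw[i];
                ++n_out;
            }
        }
        return n_out;
    }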

In this way, the set of economic forces that drove us toward a centralized, homogeneous system is being overcome by another set of technical and economic forces that push us back toward a federated system.

Secondary Cost Factors:

The concept of a homogeneous central system, bearing in mind that it is logically centralized even if the CPU cores are physically distributed around the airframe, has a frequently overlooked limitation. The elephant in the room is the cost of system sizing. If the central system is responsible for all major processing functions, it has to be sized to handle the worst-case load across all of the permutations of payloads and use cases. Just the analysis can be a daunting task, let alone providing a working system to do it. The system must support the union of all the processing tasks currently known or projected for the aircraft. If the aircraft configuration is static over a relatively long period of time, this can be a cost-effective approach.

In a more dynamic system configuration, moving the preliminary processing to the payload leaves the central computer responsible only for the intersection of the processing requirements.

Given a multi-role aircraft, this means that the computer weight penalty for any given payload leaves the aircraft when that payload is taken off.
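
A toy numeric illustration of the sizing argument above, using made-up processing budgets and treating the worst case as every projected payload running at once (both assumptions are mine, purely for illustration):

    #include <stdio.h>

    int main(void)
    {
        /* Made-up per-payload processing budgets, in arbitrary units. */
        int radar = 40, eo_imaging = 30, ew_suite = 25;
        int common_core = 20;   /* work every configuration needs */

        /* Homogeneous central system: sized for the union of everything
         * the aircraft is ever expected to carry. */
        int central_homogeneous = common_core + radar + eo_imaging + ew_suite;

        /* Federated system: each payload brings its own front-end processor,
         * so the central computer is sized only for the common core. */
        int central_federated = common_core;

        printf("homogeneous central sizing: %d units\n", central_homogeneous);
        printf("federated central sizing:   %d units\n", central_federated);
        return 0;
    }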

There is an additional benefit during integration. The sub-system vendor can proceed in any desired manner to provide the system up to the interface with the network. As such, the vendor does not have to integrate through various resource-allocation bottlenecks. Rather, the vendor provides only what is required to meet the system requirements.

Summary:

Avionics have evolved from federated systems, through expanding systems, to the current drive for homogeneous systems. However, as presented here, there are forces at work driving the pendulum back toward federated systems.

While these systems won't be what our fathers knew, neither will they be the computer systems suggested by salesmen trying to sell those uniform, vanilla computers.