Build Good Software: Of Politics and Methods

Thank you to Hope Waggoner and Mike Sassak for their kind review of this essay. It wouldn't be half what it is without their help.


I'd like to speak a word for good software systems. I would like here to discover the meaning behind "good" and put forward my idea of how we can go about achieving it within the context in which we work. I take here as inspiration Henry David Thoreau, an American philosopher. Thoreau worked in the 19th century, before my nation's Civil War. His contemporaries held him to be a crank, an idler who lived in an odd manner and did very little work. Though the first half is true, I take issue with the second. Thoreau's work was Abolition, done for the most part in private on account of its illegality. We can forgive Thoreau's contemporaries for confusing him with a bean cultivating idler. It seems to me that Thoreau -- in his views on civil society, individual behavior and the influence of invention on both -- is an exceptionally important philosopher for an age of techne.

Thoreau's most influential essay is his "On the Duty of Civil Disobedience". Thoreau posits that government is a machine of sorts, with mankind -- voluntarily or not -- used as its works. The start of this essay is well-known:

I heartily accept the motto, "That government is best which governs least;" and I should like to see it acted up to more rapidly and systematically. Carried out, it finally amounts to this, which also I believe, "That government is best which governs not at all;" and when men are prepared for it, that will be the kind of government which they will have.[1]

You'll find this excerpted, trotted out in defense of the "shrinking" of government or its anarchical overthrow, depending. Excerption loses the importance that Thoreau places on "when men are prepared for it."

Government is at best but an expedient.[2]

Governments are a tool, "the mode which the people have chosen to execute their will". Government, to the American notion of it, is a set of norms and laws that loosely bind a people together and to it. It is the Will of the People, viewed in the ideal American fashion, that seeks out Justice and Freedom. Yet, this is not so in practice. The flaw of Government is the flaw of the People, especially as it is "a sort of wooden gun to the people themselves".

The authority of government, even such as I am willing to submit to (...) is still an impure one: to be strictly just, it must have the sanction and consent of the governed.[3]

Government, in the Thoreauvian sense, exists to carry forward the norms of the People. It does so without examination of these norms.

But a government in which the majority rule in all cases cannot be based on justice, even as far as men understand it.[4]

The norms of a political body, in Thoreau's analysis, move forward into the future by their own means, disconnected from moral impulse. To make one's self subordinate to the political body is to make one's self subordinate to these means, to smother one's own sense of right and wrong. "We should be men first, and subjects afterward," Thoreau declares. That is, we ought, as individuals, to seek out Good as we know it. To effect Good we must have the help of others, some kind of political body to pool resources and action. Mass enough people into this political body and you'll find the outlines of a Government. To that end:

I ask for, not at once no government, but at once a better government. Let every man make known what kind of government would command his respect, and that will be one step toward obtaining it.[5]

What has this got to do with software systems? Well, when we talk about ourselves, we speak of "communities". Do we not organize among ourselves to pool our resources? I see here a faint outline of a body politic. In Thoreau's spirit of making known, I would like to examine two fundamental questions in the development of software today:

  1. How do we make software that makes money?
  2. How do we make software of quality?

There is a tension here, in which tradeoffs in one reflect in the other. A dynamic balance between risk and profit and craft is at play when we cozy up to our keyboards. I wager that we all, at some point in our careers, have faced an obligation to ship before completing a software project to our satisfaction. I've shipped software that I did not have complete confidence in. Worse, I've shipped software that I did not believe was safe. This, for want of testing or lived experience, driven by deadlines or a rush to be first to market. Compromise weighted with compromise. "How do we make software that makes money?" embeds the context we find our work placed in: economic models that tie the safety of our lives to the work of our hands. Every piece of software that we write -- indeed, every engineering artifact generally -- is the result of human creation. This creation is produced by the human culture that sustains and limits it, the politics of its context. What is made reflects the capability of those that made it and the intentions of those that commissioned it. The work of our hands holds a reflection of the context in which we worked.

Two instances of this come to mind, both well-studied. I'll start near to my lived experience and move out. The Bay Area Rapid Transit, or BART, is the rapid-transit rail system in the San Francisco Bay Area, meant,

to connect the East Bay suburban communities with the Oakland metropolis and to link all of these with San Francisco by means of the Transbay Tube under San Francisco Bay.[6]

per the Office of Technology Assessment's study "Automatic Train Control in Rail Rapid Transit". Automation, where applied to tedious, repetitive tasks, eliminates a certain class of accident. This class of accident has in common failure owing to sudden loss of focus, mistaken inputs or fatigue: these are human failures. Automation, when applied to domains needing nuanced decisions, introduces a different kind of accident: inadequate or dangerous response to unforeseen circumstance. What automation lacks is nuance; what humans lack is endurance. Recognizing this, systems that place humans in a supervisory role over many subsystems performing tedious, repetitive tasks are designed to exploit the skill of both. Automation carries out its tasks, reporting upward toward the operator who, in turn, provides guidance to the executors of the tasks. Such systems keep humans "in the loop" and are safer than those that do not. The BART, as designed, has a heavy reliance on automation, giving human operators "no effective means" of control. The BART operator is only marginally less along for the ride than the train passengers

... except to bring the train to an emergency stop and thus degrade the performance (and perhaps the safety) of the system as a whole."[7]

The BART's supervisory board disregarded concerns with over-automation in a utopian framing common to California. It is axiomatic to this technical culture that technology is, in itself, a Good and will bring forward Good. Irrigation greens the desert, bringing fertile fields and manicured cities out of sand. The Internet spans us all, decentralizing our communication from radio, books, newspapers and TV, democratizing it in the process. Technology, the thinking goes, applied in a prompt manner and with vigor, will necessarily improve the life of the common man. Even death can be obsoleted! Never mind, of course, the Salton Sea blighting the land, made for want of caution. Never mind communication re-centering on Facebook, becoming dominated again from the center, but now by opaque voices. Progress is messy!

Concerns centered especially around the BART's Automatic Train Control system. The ATC controls the movement of the train, its stopping at stations, its speed and the opening of its doors. The Office of Technology Assessment study declares the ATC to be "basically unsafe". Holger Hjortsvang, an engineer on the BART during its construction, said of the ATC's specification:

[it] was weakened by unrealistic requirements . . . "terms like: 'The major control functions of the system shall be fully automatic . . . with absolute assurance of passenger and train safety, high levels of reliability . . . and 'the control system shall be based on the principles which permit the attainment of fail-safe operation in all known failure modes.' This is specifying Utopia!"[8]

To demand unrealistic safety norms for a system invites a kind of blindness in all involved in its construction. The technical side of the organization will tend to view the system with optimism, becoming unable to see modes of failure. The political side of the organization will devalue reports of possible failures requiring reconsideration of said system. This is a general pattern of techno-political organizations. True to this, the Board of Supervisors devalued reports of the ATC's unreliability.

As early as 1971, the three BART employees in question became concerned with the design of the system's ATC (automatic train control). As the story unfolded, these engineers' fears eventually became public and all three were fired.

The BART management apparently felt that its three critics had jumped the gun, that the bugs in the system were in the process of being worked out, and that the three had been unethical in their release of information. [9]

Inconvenient truths are conveniently pushed aside by denying the validity of the messenger and thereby the message. The employment relationship offers an immediate method of devaluation: termination. This is a huge disparity of power between employee and employer. Such disparity lends weight in favor of the "this is fine" political narrative. Yet, the system retains its reality, independent of the prevailing narrative.

... less than a month after the inauguration of service when a train ran off the end of the track at the Fremont Station. There were no fatalities and only minor injuries, but the safety of the ATC system was opened to serious question.[10]

No one at the BART set out to make a dangerous train. It happened because of the nature of the BART's governance. A techno-political organization that is not balanced in its political / technical dynamic will lurch from emergency to emergency. The reality of the underlying system will express itself. The failure of the ATC was not a disaster: no one died. But the failures of the BART's decision-making process were opened to the public in a way they had not been before the accident. It is very hard to hide a train that has gone off the rails. The BART supervisors' wish was to deliver a train to the public that had voted to build it; they would be punished by that same public for being late. Blindness had set in and the train would be safe enough. The engineers were not under the same pressure from the public and instead sought to deliver a safe train, ideally on time, but late is better than dead. Once public, this imbalance in the BART was addressed through strengthening the technical staff of the BART politically and by introducing redundancies into the mechanism of the ATC. Yet, to this day, the BART remains a flawed system. Failures are common, limiting efficient service during peak hours. No one dies, of course, which is the important thing. An introspective organization -- like the BART -- will recognize its flawed balance and set out to correct itself. Such organizations seek a common understanding between their technical and political identities, even if the balance ultimately remains weighted toward the political end. The technical is "in the loop": the nexus of control is not wholly in the political domain. In a perfect world this balance would exist from the outset and be reflected in the technical system. But, late is better than dead.

Organizations which achieve some balance between the technical and political are the ideal. Such organizations allow the underlying technical system to express its real nature. That is, the resulting system will only be as safe as its design allows it to be. Every system carries in its design a set of inevitable accidents. This is the central thesis of Charles Perrow's "Normal Accidents: Living with High-Risk Technologies".

The odd term normal accident is meant to signal that, given the system characteristics, multiple and unexpected interactions of failure are inevitable. This is an expression of an integral characteristic of the system, not a statement of frequency. It is normal for us to die, but we only do it once. System accidents are uncommon, even rare; yet this is not all that reassuring, if they can produce catastrophes.[11]

The "system characteristics" Perrow mentions are quite simple: interactive complexity and tight coupling between system components. Every system viewed with omniscience is comprehensible, in time. No one person is omniscient, of course, and operators must make due with a simplified model of their system. System models are constructed of real-time telemetry, prior domain expertise and lived experience. They are partially conceived in the design stage of the system and partially a response to the system as it is discovered to be. Ideally, models capture the important characteristics of the system, allowing the operator to build an accurate mental model of the running system. The accuracy of this mental model determines the predictability of the system by a given operator. It is by prediction, and prediction alone, that we interact with and control the things we build. Mental models for simple systems -- say, a dipping bird hitting a button on a control panel -- are straightforward. Consider only the dipping bird and the button and we have high confidence in predictions made about the system. Consider also the system under the control of our dipping bird -- say, a small-town nuclear power plant -- and our predictive confidence drops. Why? The power plant is complex. It is a composition of many smaller subsystems interacting semi-independently from one another. The subsystems, individually, demand specialized and distinct knowledge to comprehend. The interactions between subsystems demand greater levels of knowledge to comprehend and, worse, may not have been adequately explored by the designers. Linear interactions -- where one subsystem affects the next which affects the next -- are ideal: they are straightforward to design and reason about. Linear interactions predominate in well-designed systems. Non-linear interactions often cannot be avoided.

[T]hese kinds of interactions [are] complex interactions suggesting that there are branching paths, feedback loops, jumps from one linear sequence to another because of proximity (...) The connections are not only adjacent, serial ones, but can multiply as other parts or units or subsystems are reached.[12]

Or, more succinctly:

Linear interactions are those in expected and familiar production or maintenance sequence, and those that are quite visible even if unplanned.
Complex interactions are those of unfamiliar sequences, or unplanned and unexpected sequences, and either not visible or immediately comprehensible.[13]

Of note here is the implicit characteristic of unknowing. Complex interactions are not designed but are an emergent system property with unknown behavior. Complex interactions in a system restrict the human operators' ability to predict the system's reaction in a given circumstance. Of importance is the coupling between subsystems, a familiar concept in the construction of software.

Loose coupling (...) allows certain parts of the system to express themselves according to their own logic or interests. (...) Loosely coupled systems, whether for good or ill, can incorporate shocks and failures and pressures for change without destabilization.[14]

Loosely coupled subsystems are not independent but have a tolerance for error that tightly coupled subsystems do not. Interdependence around time, invariant sequencing and strict precision in interaction make for tightly coupled subsystems. Coupling has great importance for recovery from failure.

In tightly coupled systems the buffers and redundancies and substitutions must be designed in; they must be thought of in advance. In loosely coupled systems there is a better chance that expedient, spur-of-the-moment buffers and redundancies and substitutions can be found, even though they were not planned ahead of time.[15]
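To put the distinction in software terms, a minimal sketch (my example, not Perrow's): a tightly coupled caller demands an answer now, in strict sequence, while a loosely coupled caller hands work to a buffer that can absorb delay, failure and retry.

```python
import queue

# Tightly coupled: the caller blocks on the callee; any failure or delay in
# the gateway is immediately the caller's failure or delay.
def bill_customer_tight(charge, payment_gateway):
    return payment_gateway(charge)

# Loosely coupled: the caller hands work to a buffer and moves on. The buffer
# is the designed-in slack that allows expedient recovery later.
pending_charges = queue.Queue()

def bill_customer_loose(charge):
    pending_charges.put(charge)  # returns immediately, tolerates gateway downtime

def billing_worker(payment_gateway):
    # Drain whatever is queued right now; a failed charge goes back on the
    # queue for a later run instead of failing the original caller.
    for _ in range(pending_charges.qsize()):
        charge = pending_charges.get()
        try:
            payment_gateway(charge)
        except Exception:
            pending_charges.put(charge)
```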

It is not possible to plan for every failure a system will encounter. A well-designed system will make the probability of a system accident low, but that is the best that can be done. A complex system is one in which no one person can have a perfect mental model of said system. Complex systems are not necessarily a function of bad design. Rather, they are complex because they address some complex social need: power generation, control of financial transactions, logistics. Complex systems are an artifact of a political decision to fund and carry out the construction of some solution to a perceived need. They are what C. West Churchman called solutions to "wicked problems":

(...) social problems which are ill formulated, where the information is confusing, where there are many clients and decision-makers with conflicting values, and where the ramifications in the whole system are thoroughly confusing. (...) The adjective ‘wicked' is supposed to describe the mischievous and even evil quality of these problems, where proposed ‘solutions' often turn out to be worse than the symptoms.[16]

This brings us around to the second example of the political context of a system affecting its operation. The Reaktor Bolshoy Moshchnosti Kanalnyy (RBMK) nuclear reactor is a Soviet design that uses graphite as its moderator and ordinary water as its coolant for a low-enriched uranium fission reaction. This design is cheap -- explaining why many were built -- but suffers from a serious defect: a positive void coefficient. The reactor requires active cooling and, unless otherwise specially prepared, a loss of cooling tips it into a feedback loop. Coolant water heats and flashes into steam, absorbing fewer neutrons while the graphite continues to moderate the reaction, so reactivity and heat rise. This flashes more water into steam, raising the reactivity further. The cycle continues until the reactor vessel is breached.

The most famous RBMK reactor is no. 4 in the Chernobyl complex. This reactor exploded on 26 April 1986, having been driven by a combination of political demand and operator action into an explosive feedback loop. The accident occurred during an experiment into the generation of electricity for the purposes of emergency cooling. As Grigori Medvedev explains in his "The Truth About Chernobyl":

If all power is cut off to the equipment in a nuclear power station, as can happen in normal operations, all machinery stops, including the pumps that feed cooling water through the reactor core. The resulting meltdown of the core is a nuclear accident of the utmost gravity.

As electricity must be generated by any means available in such circumstances, the experiment using the residual inert force of the turbine is an attempt to provide a solution. As long as the turbine blades continue to spin, electricity is generated. It can and must be used in critical situations.[17]

The techno-political administration responsible for drawing up output plans for generation facilities in the Soviet Union was "staffed by some well-trained and experienced people" but key decision makers were people working in an unfamiliar field, chasing after "prestige, money, convenience."

Yu. A. Izmailov, a veteran of Glavatomenergo, the central directorate for nuclear power, used to joke about it: "Under Veretennikov it was practically impossible for us to find anyone in the central directorate who knew much about reactors and nuclear physics. At the same time, however, the bookkeeping, supply and planning department grew to an incredible size."[18]

The Chernobyl facility and other generation facilities were administered by a combination of the sloppily inept and those cowed into silence for fear of being replaced by more compliant technicians. The experimental program drawn up for the Chernobyl no. 4 test was written not with an eye toward safety but toward political success. A successful test of the inertial spin-down method would demonstrate the superior management, the daring and proper spirit, of the plant's chief engineer, N. M. Fomin. The experimental program intentionally switched off safety systems prior to engaging the test to give "pure" results. From Medvedev:

  • The protection systems triggered by the preset water levels and steam pressure in the drum-separators were blocked, in an attempt to proceed with the test despite the unstable condition of the reactor; the reactor protection system based on heat parameters was cut off.
  • The MPA protection system, for the maximum design-basis accident, was switched off, in an attempt to avoid spurious triggering of the ECCS during the test, thereby making it impossible to limit the scope of the probable accident.
  • Both emergency diesel-generators were blocked, together with the operating and start-up/standby transformers, thus disconnecting the unit from the grid... [19]

The RBMK reactor was designed to fill a planned need for cheap electricity and the compromises inherent in its design to achieve this aim were irreparable. There is trouble with unsafe systems made "safe" by augmentation, rather than fundamental redesign. Operators can, at their discretion or through coercion, disable safety devices. Perrow notes in Normal Accidents: "Safety systems (...) are necessary, but they have the potential for deception. (...) Any part of the system might be interacting with the other parts in unanticipated ways." A spacecraft's emergency escape system may be accidentally triggered by an elbow, say. Or, a software threshold alarm might fire during a promotion due to increased customer demand but lead operators, unaware of the promotion, to throttle traffic. Procedures go out of date or are poorly written from the start. From David E. Hoffman's "The Dead Hand":

One (Chernobyl) operator (...) was confused by the logbook (on the evening before the 26 April experiment). He called someone else to inquire.

"What shall I do?" he asked. "in the program there are instructions of what to do, and then a lot of things crossed out."

The other person thought for a minute, then replied, "Follow the crossed out instructions."[20]

The BART ATC, though flawed, was made safe by incorporating non-negotiable redundancies into its mechanism. Such an approach cannot be taken with the RBMK reactor. Correction of its flaws requires fundamental redesign of the type. Such systems persist only so long as the balance between technical and political is held and even then this is no guarantee that a low probability event will not occur. Chernobyl had no such balance.

Perrow advocates for a technical society which will refuse to build systems whose catastrophic risk is deemed too high. This is admirable but, I believe, ultimately unworkable given the employment issue discussed above, in addition to an implied separation between political and technical aims which does not exist in practice. When asked to construct something which, according to political constraints, will not be fit for purpose I might choose to refuse but someone else may not. My own hands being clean does not mean good has been done. More, the examples given above are outsized in their scope -- a faulty train for a metro area, a nuclear volcano -- and the implications of their failure are likely beyond the scope of what most software engineers work on. Private concern for the fitness of some small system might be kept private with the perception that its impact will be limited. There are also development ideologies that stress do now, think later approaches, most typified by the mantra "Move Fast and Break Things". These objections are valid in the small but contribute to a slow-motion disaster in aggregate. Consider how many legacy software systems there are in the world which are finicky, perform their function poorly and waste the time of users by crashing. How many schemes are made to replace such systems -- to finally do things right -- only for this aim to be frustrated by "temporary" hacks, tests that will come Real Soon Now or documentation that will never come? What's missing here is a feeling for what Hans Jonas in his "Imperative of Responsibility" called the "altered nature of human action":

All previous ethics (...) had these interconnected tacit premises in common: that the human condition, determined by the nature of man and the nature of things, was given once and for all; that the human good on that basis was readily determinable; and that the range of human actions and therefore responsibility was narrowly circumscribed. [21]

Jonas argues that the tacit premise of human action existing in an inviolable world has been broken by the effective scale of modern technology. Humanity -- able to remake its environment on a lasting, global scale -- has rendered existing ethics inadequate, ethics that measure action with no temporal component.

(T)echnological power has turned what used and ought to be tentative, perhaps enlightening plays of speculative reasoning into competing blueprints for projects, and in choosing between them we have to choose between extremes of remote effects. (...) In consequence of the inevitably "utopian" scale of modern technology, the salutary gap between everyday and ultimate issues, between occasions for common prudence and occasions for illuminated wisdom, is steadily closing. Living now constantly in the shadow of unwanted, built-in, automatic utopianism we are constantly confronted with issues whose positive choice requires supreme wisdom -- an impossible situation for man in general, because he does not possess that wisdom (...) We need wisdom the most when we believe in it the least.[22]

Jonas' concern is with the global environment and the looming disaster coming with regard to such. "Mankind Has No Right to Suicide" and "The Existence of ‘Man' Must Never Be Put at Stake" are eye-catching section titles. Jonas concludes that the present generation has an imperative responsibility to ensure the next generation's existence at no less a state than we enjoy, without forfeiting said future existence. It is a detailed argument and well worth reading. Of interest to this essay is the association by Jonas of progress with "Baconian utopianism" as well as the logical framework that Jonas constructs to reach his ultimate conclusion. Progress is an ideal so deeply embedded in our society that it's axiomatic. Progress is broadly understood as an individual process, discussed on individual terms. The individual strives to discern knowledge from wisdom, to act with justice. These individual aims are then reflected into cooperative action but cooperative action will, necessarily, be tainted by those that lack wisdom or do not hope for justice. Thoreau's thoughts sit comfortably here. In Thoreau there is also a broader sense of "progress", as elsewhere in post-Enlightenment Western thought.

While there is hardly a civilization anywhere and at any time which does not, or did not, speak of individual progress on paths of personal improvement, for example, in wisdom and virtue, it seems to be a special trait of modern Western man to think of progress preeminently as an attribute -- actual or potential -- of the collective-public realm: which means endowing this macrodimension with its transgenerational continuity with the capacity, the disposition, even the inbuilt destination to be the substratum of that form of change we call progress. (...) The connection is intriguing: with the judgement that the general sense of past change was upward and toward net improvement, there goes the faith that this direction is inherent in the dynamics of the process, thus bound to persist in the future -- and at the same time a commitment to this same persistence, to promoting it as a goal of human endeavor. [23]

That this becomes bound up with technological progress should be no surprise. Especially once the Industrial Revolution was well established, most new technology enabled greater and greater material comfort. Technology became associated with the means toward Progress, with Progress in itself. Note the vigorous self-congratulation of the early industrialist or the present self-congratulation of the software engineer. Marxist thought counters that this Progress is only true if you are able to afford it, and it's hard to disagree. Of note is that Marxist thought does not reject the connection of technology with progress but contends that it, rather than capitalism, is the more efficient political / economic system for bringing about Technological Progress. In Jonas' analysis the ultimate failing of this ideal of progress is that, while comfort may be gained, it comes at the expense of the hope for, not only the comfort of, but the existence of future generations: successes become greater and greater but the failures, likewise, grow in scope. Our ethics are unable to cope with these works.

(T)hese are not undertaken to preserve what exists or to alleviate what is unbearable, but rather to continually improve what has already been achieved, in other words, for progress, which at its most ambitious aims at bringing about an earthly paradise. It and its works stand therefore under the aegis of arrogance rather than necessity...[24]

Existing ethics are "presentist": that is, they are concerned with actions in the present moment between those who are now present. In such an ethics, it is no less morally laudable to sacrifice in the present for the well-being of the future than it is to sacrifice the well-being of the future for the present. What Jonas attempted to construct was an ethics which had in itself a notion of responsibility toward the future of mankind as a whole. The success of the project is, I think, apparent in the modern sense of conservation that pervades our thinking about pollution and its impact on the biosphere. The failure of the project is also apparent.

Jonas' logical framework is independent of the scope of his ultimate aim. This framework is intuitively familiar to working engineers, living, as we do, with actions which, though applied today, will not come to fruition for quite some time.

(D)evelopments set in motion by technological acts with short-term aims tend to make themselves independent, that is, to gather their own compulsive dynamics, an automotive momentum, by which they become not only, as pointed out, irreversible but also forward-pushing and thus overtake the wishes and plans of the initiators. The motion once begun takes the law of action out of our hands (...)[25]

Every technical construction, as we have established, is some reflection of the political process that commissioned it. This technical artifact will go forward into the future, eclipsing the context in which it was made and influencing the contexts that are to come. The BART of the 1960s was intended to run infrequently -- for morning and evening commutes -- and to service lower middle income areas. The BART of the present period runs for twenty hours a day and the areas around its stations have become very desirable, attracting higher income residents and businesses. The Chernobyl disaster, most immediately, destroyed the planned city of Pripyat but has left a centuries long containment and cleanup project for Europe. No technology is without consequence. We see this most clearly in software with regard to automating jobs that are presently done by people. Whole classes of work which once gave means to millions -- certain kinds of clerical work, logistics, manufacturing -- have gone with no clear replacement. Such "creative destruction" seems only the natural order of things -- as perhaps it is -- but it must be said that it likely does not seem so natural if you are made to sit among the destruction.

What is wanted is some way of making software well. This has two meanings. Let's remind ourselves of the two questions this essay set out to address:

  1. How do we make software that makes money?
  2. How do we make software of quality?

In the first sense of "well" we treat with the first question. Restated, we wish to make software whose unknown behaviors are limited so that we can demonstrate fitness for purpose and be rewarded for our labors. In the second sense of "well" we treat with the second question. What we wish to make is software whose unknown consequences are limited. This latter sense is much more difficult.

How do we restrict unknown behavior in our software? Per Perrow there will always be such and I believe that we must look, today, at those working in high-criticality software systems for some clue of the way forward. I think it will be no controversial thing to say that most software systems made today are not as good as they could be. Even with great personal effort and elaborate software development rituals -- red/green testing, agile, scrum, mob, pair, many eyes make all bugs shallow, benevolent dictatorship, RUP, RAD, DSDM, AUP, DAD, etc., etc. -- most software is still subpar. Why? In 1981 STS-1 -- the first flight of the Space Transportation System, as the Space Shuttle was officially known -- was stalled on the launch pad for want of a computer synchronization. Per John Gorman in "The 'BUG' Heard 'Round the World":

On April 10, 1981, about 20 minutes prior to the scheduled launching of the first flight of America's Space Transportation System, astronauts and technicians attempted to initialize the software system which "backs-up" the quad-redundant primary software system ...... and could not. In fact, there was no possible way, it turns out, that the BFS (Backup Flight Control System) in the fifth onboard computer could have been initialized properly with the PASS (Primary Avionics Software System) already executing in the other four computers. There was a "bug" - a very small, very improbable, very intricate, and very old mistake in the initialization logic of the PASS. [26]

The Shuttle computer system is an oddity, a demonstration of a technique called "N-version" programming that is no longer in use. The Shuttle was a fly-by-wire craft, an arrangement where the inputs of the pilot are moderated by the computer before being sent to control surfaces or reaction thrusters. The PASS was a cluster of four identical computers running identical software. Each PASS computer controlled a "string" of avionics equipment with some redundancy. Most equipment received the coverage of a partial subset of the PASS computers, while very important equipment, like the main engines, received four-way coverage. The PASS computers performed their computations independently but compared the results among one another. This was done to control defects in the computer hardware: were a disagreement to be found in the results, the computers -- or, manually, the pilot -- could vote a machine out of control over its string, assuming it to be defective. Simultaneous to all this, the BFS received the same inputs and performed its own computations. The BFS computer used identical hardware to the PASS and was constructed in the same HAL/S programming language but ran software of independent construction on a distinct operating system. This is N-version programming. The hope was that software constructed by different groups would prove to be defective in distinct ways, averting potential crisis owing to software defect.
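A minimal sketch of the mechanism, mixing the two ideas above -- majority voting as in the PASS, independently written versions as hoped for in N-version programming. The versions and the three-way vote are invented for illustration; the real system voted across redundant hardware strings rather than inside one process.

```python
from collections import Counter

def version_a(x: float) -> float:
    return 2 * x + 1

def version_b(x: float) -> float:
    return x + x + 1

def version_c(x: float) -> float:
    return 2 * x - 1          # a deliberately faulty implementation

VERSIONS = {"A": version_a, "B": version_b, "C": version_c}

def vote(x: float):
    """Run every version, take the majority answer, name the dissenters."""
    results = {name: fn(x) for name, fn in VERSIONS.items()}
    majority, _ = Counter(results.values()).most_common(1)[0]
    dissenters = [name for name, value in results.items() if value != majority]
    return majority, dissenters

print(vote(3.0))  # (7.0, ['C']) -- C would be voted out of control of its "string"
```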

There are five onboard computers (called "GPC's" by everyone - with few remembering that they really were "general purpose") -- four operate with identical software loads during critical phases. That approach is excellent for computer or related hardware failures - but it doesn't fit the bill if one admits to the possibility of catastrophic software bugs ("the bug" of this article certainly is not in that class). The thought of such a bug "bringing down" four otherwise perfect computer systems simultaneously and instantly converting the Orbiter to an inert mass of tiles, wires, and airframe in the middle of a highly dynamic flight phase was more than the project could bear. So, in 1976, the concept of placing an alternate software load in the fifth GPC, an otherwise identical component of the avionics system, was born.[27]

The PASS was asynchronous, the four computers kept in sync by continually exchanging sync codes with one another during operation, losing an effective 6% of operating capacity but gaining loose coupling between systems that, conceptually, should be tightly coupled on time to one another. The BFS was a synchronous time-slotted system wherein processes are given pre-defined durations in which they will run. Synchronizing asynchronous and synchronous machines is a notoriously hard problem to solve and the shuttle system did so by building compromises into the PASS, requiring it to emulate synchronicity in its high-priority processes in order to accommodate the BFS.

The changes to the PASS to accommodate BFS happened during the final and very difficult stages of development of the multi-computer software.[28]

In order for the BFS to initially synchronize with the PASS it must calculate the precise moment to listen on the same bus as the PASS. That the computers' clocks were identically driven made staying in sync somewhat easier, though this does nothing to address initial synchronization at startup. The solution taken by the BFS programmers was to calculate the offset of the current time from the time when the next sync between PASS and BFS was to occur and simply wait. The number of cycles for this calculation was known and, therefore, the time to wait could be made to take into account the time to compute the time to wait. Except that, sometimes, in rare circumstances, the timing would be slightly off and the sync would remain one clock cycle off. Resolution required both the PASS and BFS to be power-cycled. Once the sync was achieved it was retained and on 12 April 1981 John Young and Robert Crippen flew the space shuttle Columbia into low-earth orbit.
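A toy rendering of that self-accounting wait, with invented constants standing in for the real clock rate and the real cost of the calculation:

```python
CYCLE_NS = 350      # invented: duration of one clock cycle, nanoseconds
CALC_CYCLES = 40    # invented: cycles consumed computing the wait itself

def cycles_to_wait(now_ns: int, next_sync_ns: int) -> int:
    """Cycles to idle before listening on the PASS bus, discounting the
    cycles spent performing this very calculation."""
    raw = (next_sync_ns - now_ns) // CYCLE_NS
    return max(raw - CALC_CYCLES, 0)

print(cycles_to_wait(now_ns=1_000_000, next_sync_ns=1_350_000))  # 960
```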

Another subsystem, especially one as intricately woven into the fabric of the avionics as is the BFS, carries with it solutions to some problems, but the creation of others. While it certainly increased the reliability of the system with respect to generic software failures, it is still argued academically within the project whether the net reliability is any higher today than it would have been had the PASS evolved to maturity without the presence of its cousin - either as a complicating factor...or a crutch. On the other hand, almost everyone involved in the PASS-side "feels" a lot more comfortable![29]

In 1986 John C. Knight and Nancy G. Leveson published "An Experimental Evaluation of the Assumption of Independence in Multiversion Programming". N-version software, as mentioned, was assumed to reduce the risk from software bugs in critical systems not by removing them but by making them different between versions. This assumption drove increased complication into the shuttle flight computer system, delaying the initial flight and adding to the operation of the shuttle as well as the maintenance of the flight system going forward through the lifetime of the shuttle.

The great benefit that N-version programming is intended to provide is a substantial improvement in reliability. (...) We are concerned that this assumption might be false. Our intuition indicates that when solving a difficult intellectual problem (such as writing a computer program), people tend to make the same mistakes (for example, incorrect treatment of boundary conditions) even when they are working independently. (...) It is interesting to note that, even in mechanical systems where redundancy is an important technique for achieving fault tolerance, common design faults are a source of serious problems. An aircraft crashed recently because of a common vibration mode that adversely affected all three parts of a triply redundant system.[30]

Knight and Leveson's experiment to check this assumption is delightfully simple. Graduate students at the University of Virginia and the University of California at Irvine were asked to write a program from a common specification and each program was subjected to one million randomly generated test cases. The programmers were given common acceptance tests to check their programs against but were not given access to the randomly generated test cases. Once submitted

(o)f the twenty seven, no failures were recorded by six versions and the remainder were successful on more than 99% of the tests. Twenty three of the twenty seven were successful on more than 99.9% of the tests.[31]

Knight and Leveson examined the failing cases and determined that they tended to cluster around the same logic mistakes. For example

The first example involves the comparison of angles. In a number of cases, the specifications require that angles be computed and compared. As with all comparisons of real quantities, the limited precision real comparison function was to be used in these cases. The fault was the assumption that comparison of the cosines of angles is equivalent to comparison of the angles. With arbitrary precision this is a correct assumption of course but for this application it is not since finite precision floating point arithmetic was used and the precision was limited further for comparison.[32]

That is, the independently programmed systems displayed correlated failures. Further correlated failures were the result of misunderstandings of geometry and trigonometry.
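This fault class is easy to reproduce. The snippet below is not the Knight and Leveson harness, only an illustration of why comparing cosines under a limited-precision tolerance is not the same as comparing the angles themselves: near zero the cosine is so flat that clearly distinct angles collapse inside the tolerance.

```python
import math

EPS = 1e-6  # an illustrative comparison tolerance, not the experiment's value

def angles_equal(a: float, b: float) -> bool:
    return abs(a - b) < EPS

def cosines_equal(a: float, b: float) -> bool:
    # The faulty shortcut: compare cos(a) and cos(b) in place of a and b.
    return abs(math.cos(a) - math.cos(b)) < EPS

a, b = 1e-4, 2e-4              # distinct angles, in radians
print(angles_equal(a, b))      # False: the angles differ by far more than EPS
print(cosines_equal(a, b))     # True: cos is flat near zero, difference ~1.5e-8
```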

For the particular problem that was programmed for this experiment, we conclude that the assumption of independence of errors that is fundamental to the analysis of N-version programming does not hold.[33]

The PASS' software is compromised, not for want of care in its construction -- NASA used the best available understanding of how to assemble a reliable software system at the time of construction -- but because this understanding proved to be defective. The shuttle was forever more complicated than it should otherwise have been, adding to the difficulty of its operation and to the expense of its maintenance. There are surely many systems now in the world which were constructed with the best of intentions, with the best of available knowledge, but are compromised in a similar manner. Recall that safety systems are not, themselves, independent but become a part of the system, interacting with the existing components in ways potentially unanticipated. In the design and construction of systems we must strive to limit complexity, must push back on its inclusion, especially late in a project. The simpler a system is, the more capable we'll be of predicting its behavior, of controlling its failures.

The shuttle computer system is an example of a technical system in which the techno-political organization had a great degree of balance in its technical and political sub-cultures. (The Shuttle itself was not so, being made too heavy in order to accommodate DoD payloads with an eye toward "paying for itself" through flights. This constraint was added by the US Congress and exacerbated by unrealistic flight rates put forward by NASA. This topic is outside the scope of this essay, but "Into the Black" by Rowland White is a fine book, as well as "Space Shuttle Legacy: How We Did It and What We Learned" by Launius, Krige and Craig). We have seen what happens to technical systems where this is not so: fitness for purpose is compromised for expedience along political lines. The PASS / BFS sync issue as well as our lived experience should give a sense that the opposite is not true: a perfect balance between technical and political sub-cultures will not produce a defect free system. In such cases where do defects creep in and why? Robyn R. Lutz' "Analyzing Software Requirements Errors in Safety Critical Embedded Systems" is of particular interest.

This paper examines 387 software errors uncovered during integration and system testing of two spacecraft, Voyager and Galileo. A software error is defined to be a software related discrepancy between a computed observed or measured value or condition and the true specified or theoretically correct value or condition.[34]

The Voyager probes, launched in 1977 to study the outer solar system, consist of imaging equipment, radio transceivers and other miscellaneous equipment common to spacecraft. Galileo was a later spacecraft, launched in 1989, and as such is a more capable scientific instrument, meant to study Jupiter and its moons, but broadly is not dissimilar to Voyager for our purposes here. The software of both devices was safety-critical, in that the computer system monitored and controlled device equipment which, if misused, would cause the loss of the device. Every precaution was taken in the construction of the probes as both technical and political incentives aligned toward achieving the greatest safety. This is a common feature of techno-political organizations when the system at the core of the organization is perceived both to be very important and to be very sensitive to failure. Because of this great precaution Lutz was able to catalog each defect identified by the development teams, breaking them down into sub-categories. The spread of time between the Voyager and Galileo projects gives confidence that the results are generally applicable.

Safety related software errors account for 56% of the total software errors for Voyager and 48% of the total software errors observed for Galileo during integration and system testing.[35]

The kinds of errors discovered are of great interest to us here. Broadly they are broken down into three schemes:

  • Internal Faults
  • Interface Faults
  • Functional Faults

Internal Faults are coding errors internal to "a software module". A function that returns the wrong result for some input or an object that mismanages its internal state are examples of such. Lutz notes that there are so few of these that the paper does not address them. That these are basically non-existent by the stage of integration and system testing is a testament to the effectiveness of code review and careful reasoning. It is also indicative of the notion that simply testing small units of a larger project is insufficient for catching serious errors in software systems. The combination of systems is not necessarily well-understood, though the individual systems might be. Interface Faults are the unexpected interaction of systems along their interaction pathways, their interfaces. In software systems this is the incorrect transfer or transformation of data between components or the incorrect transfer of control from one path to another. Functional Faults are missing or unnecessary operations, incorrect handling of conditions or behavior that does not conform to requirements.
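A hypothetical interface fault of this kind -- my illustration, not an entry from Lutz's catalog: two modules, each internally correct, disagreeing about the meaning of the value that crosses their interface.

```python
def sensor_velocity_mps() -> float:
    # Producer: reports velocity in meters per second.
    return 30.0

def braking_distance_ft(velocity_fps: float) -> float:
    # Consumer: expects feet per second; g expressed in ft/s^2.
    return velocity_fps ** 2 / (2 * 32.2)

# Neither function is wrong in isolation. The fault lives on the interface:
# the call site silently feeds meters per second where feet per second is
# assumed, and the computed distance is off by roughly a factor of ten.
print(braking_distance_ft(sensor_velocity_mps()))
```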

At a high level of detail, safety-related and non-safety-related software errors display similar proportions of interface and functional faults. Functional faults (...) are the most common kind of software error. Behavioral faults account for about half of all the functional faults on both spacecraft (52% on Voyager; 47% on Galileo). (...) The analysis also identifies interface faults (...) as a significant problem (36% of the safety-related program faults on Voyager; 19% on Galileo).[36]

Lutz' analysis goes on to demonstrate that the majority of interface faults in both projects -- 93% on Voyager, 72% on Galileo -- are due to communication errors between engineering teams. That the major failing of software in these projects is incorrect functionality per the specification -- ambiguities in, or a misinterpretation of, said specification -- puts human communication as the primary defect source in an ideal techno-political organization. The root problem here is ambiguity in human speech, either in written specification or in agreements between peers with regard to the action of computers. This being a dire problem, recognized relatively early in the field of software engineering, a body of work has gone toward its resolution. The most obvious approach is to remove the ambiguity. That is, if we could but produce a document which would unambiguously declare the behavior of the system we wished to have on hand -- as well as all the necessary sub-machines -- then we would do a great deal toward removing the primary source of defects. This notion is very much of the formalist school of mathematics and suffers from the same defect. Namely, unambiguous specification is a monstrously complicated undertaking, far harder than you might think at first. The most generally useful formal specification language today is Z, pronounced Zed. Z does not have wide use and the literature around its use is infamous for applying Z to simplistic examples. Jonathan Bowen wrote "Formal Specification and Documentation Using Z: A Case Study Approach" to remedy this, noting in his introduction that:

The formal methods community has, in writing about the use of discrete mathematics for system specification, committed a number of serious errors. The main one is to concentrate on problems which are too small, for example it has elevated the stack to a level of importance not dreamt of by its inventors.[37]

Bowen's work is excellent -- chapter 9, "The Transputer Instruction Set", is especially fun -- but reading the book you cannot help but feel a certain hopelessness. This is the same hopelessness that creeps in when discussing dependently typed languages or proof tools with extractive programming. Namely, these tools are exceptionally technical, requiring dedicated study by practitioners to be used. It seems hopeless to expect that the political side of a techno-political organization will be able or willing to use formal specification tools, excepting in exceptional circumstances. This is not to say that such tools are not valuable -- they are, even if only today as an avenue of research -- but that they lack a political practicality, excepting in, again, exceptional circumstances. What does have political practicality is "testing" as such, understood broadly to be necessary for the construction of software. Existing methodology focuses on hand-constructed case testing of smallish units of systems as well as hand-constructed case testing of full systems. Design methodologies -- like domain-driven design -- are also increasingly understood to be good for the removal of unspoken assumptions in specifications. This is excellent. Where the current dominant testing culture fails is in the same areas as above: in input boundary conditions, in unexpected interaction between components, in unexplored paths. Testers are often the same individuals as those who wrote the initial system. They are biased toward the success of the system, it being of their own devise. It takes a great deal more than most have to seek out the failings in something of emotional worth. That is, manually constructed test cases often test a ‘happy path' in the system because that is what is believed most likely to occur in practice and because imagining cases outside of that path is difficult.

In the same spirit as formal specification, early approaches to the challenge of creating effective test cases centered on efficient exhaustive testing. Black-box testing -- where test inputs are derived from knowledge of interfaces only -- and white-box (or structural) testing -- where test inputs are derived similarly to black-box in addition to personal knowledge of the software's internal structure -- are still employed today. Defining the domain of a program's input and applying it in a constrained amount of time is the great challenge to both approaches. Equivalence partitioning of inputs cuts down on the runtime issue but defining equivalence effectively is a large, potentially error-prone task in itself. Pre-work for the purposes of testing is a negative with respect to political practicality. The underlying assumption is the need for exhaustiveness to make the probability of detecting faults in software high. Joe E. Duran and Simeon C. Ntafos' 1984 paper "An Evaluation of Random Testing" determined that this assumption does not hold. Their method is straightforward. Duran and Ntafos took a series of programs common in the testing literature of the time and produced tests using then best-practice methods, as well as tests which simply generated random instances from the domain of input. Their results showed that randomized testing "performed better than branch testing in four and better than required pairs testing in one program" but "was least effective in two triangle classification programs where equal values for two or three of the sides of the triangle are important but difficult to generate randomly."

The results compiled so far indicate that random testing can be cost effective for many programs. Also, random testing allows one to obtain sound reliability estimates. Our experiments have shown that random testing can discover some relatively subtle errors without a great deal of effort. We can also report that for the programs so far considered, the sets of random test cases which have been generated provide very high segment and branch coverage.[38]
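The triangle limitation they mention is easy to see in miniature; the classifier and harness below are mine, not theirs. Purely random inputs cover most branches almost for free, yet essentially never produce the equal-sided cases.

```python
import random

def classify(a: float, b: float, c: float) -> str:
    # A toy triangle classifier of the kind common in the testing literature.
    if not (a + b > c and b + c > a and a + c > b):
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

random.seed(0)
outcomes = set()
for _ in range(100_000):
    outcomes.add(classify(*(random.uniform(0.0, 10.0) for _ in range(3))))

# "scalene" and "not a triangle" appear immediately; "equilateral" and
# "isosceles" almost certainly never do -- equal real-valued sides are
# vanishingly unlikely to be drawn at random.
print(outcomes)
```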

Key is that both sub-cultures in a techno-political organization have their needs met. The technical side of the organization is able to achieve high confidence that the potential state-space of the system under test has been explored, and that with very little development effort. The political side of the organization receives the same high assurance but, again, with very little effort, that is, cost. Tooling has improved in recent years, making randomized testing more attractive as a complement to existing special-case testing: property-testing libraries -- the approach introduced in Claessen and Hughes' "QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs" -- are common in mainstream languages and automatic white-box testing tools like American Fuzzy Lop are quick to set up and reap immediate benefits. As Duran and Ntafos note, "the point of doing the work of partition testing is to find errors," and this is true of test methods in general. Restated, the point of testing is to uncover unexpected behavior, whether introduced through ambiguity or accident. Randomized testing is a brute force solution, one that can be effectively applied without specialized technique -- though property testing can require a fair bit of model building, as noted in Hughes' follow-up papers -- and at all levels of the software system. Such an approach can probe unexpected states and detect the results of ambiguity in human communication, limited only by the scope of the environment available for simulation of the system under test.
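A property test in the QuickCheck style, sketched here with the Python library Hypothesis (my choice of library and property; any QuickCheck descendant would serve): rather than enumerating cases by hand, we state an invariant and let the tool generate random inputs to attack it.

```python
# Assumes the Hypothesis library (pip install hypothesis), a QuickCheck-style
# property-testing tool for Python.
from hypothesis import given, strategies as st

def encode(data: bytes) -> bytes:
    return data.hex().encode()

def decode(blob: bytes) -> bytes:
    return bytes.fromhex(blob.decode())

@given(st.binary())
def test_round_trip(data):
    # The property: decoding an encoded value always yields the original.
    assert decode(encode(data)) == data

test_round_trip()  # Hypothesis generates and shrinks random byte strings
```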

The limit to probing for ambiguity will come from the engineer's capacity to construct a simulation environment and from the impatience of the political sub-culture with its construction. We've come now back to our second question, that of constructing software whose unknown consequences are limited. In no small sense we, the technical side of the techno-political organization, must understand that the consequences of a system cannot be understood if its behaviors are not. It is worth keeping in mind the consequence of supreme importance to the political organization: existence, whether for profit or for the satisfaction of a constituency. Leveson's "The Role of Software in Spacecraft Accidents" is comprehensive; of interest to this essay is the first flight of the Ariane 5. This flight ended forty seconds after it began with the spectacular explosion of the rocket.

The accident report describes what they called the "primary cause" as the complete loss of guidance and attitude information 37s after start of the main engine ignition sequence (30 seconds after liftoff). The loss of information was due to specification and design errors in the software of the inertial reference system. The software was reused from the Ariane 4 and included functions that were not needed for Ariane 5 but were left in for "commonality."[39]
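The accident report pins the proximate software fault more narrowly: an unhandled operand error when a 64-bit floating-point value related to horizontal velocity was converted to a 16-bit signed integer, inside an alignment function that served no purpose after liftoff; Ariane 5's steeper trajectory produced values its predecessor never could. A sketch of that fault class, in Python with invented values (the flight software was Ada):

```python
INT16_MIN, INT16_MAX = -32768, 32767

def to_int16(x: float) -> int:
    # The reused alignment code assumed this conversion could not overflow,
    # so the out-of-range case was left unguarded in flight.
    value = int(x)
    if not (INT16_MIN <= value <= INT16_MAX):
        raise OverflowError(f"{x} does not fit in a 16-bit signed integer")
    return value

print(to_int16(20_000.0))    # an invented value inside the Ariane 4 envelope
try:
    to_int16(64_000.0)       # an invented Ariane 5-sized value: the unhandled case
except OverflowError as err:
    print(err)
```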

Why, when Ariane 4 had been such a successful launch system, was its successor's guidance system knocked together?

Success is ironically one of the progenitors of accidents when it leads to overconfidence and cutting corners or making tradeoffs that increase risk. This phenomenon is not new, and it is extremely difficult to counter when it enters the engineering culture in an organization. Complacency is the root cause of most of the other accident factors described in this paper and was exhibited in all the accidents studied. (...) The Ariane 5 accident report notes that software was assumed to be correct until it was shown to be faulty. As noted by the Ariane accident investigation board, the opposite assumption is more realistic.[40]

More damning,

While management may express their concern for safety and mission risks, true priorities are shown during resource allocation. (...) A culture of denial arises in which any evidence of significant risk is dismissed.[41]

Leveson's specific focus in "The Role of Software" is on the failure of management to contain risk in the destruction of safety-critical systems. If we set our minds to speak of consequence then this applies equally well to us, as we are those who now make the world what it will be. I mean this not in a self-congratulatory "software is eating the world" sense but in the more modest sense that everyone now living, through their action or inaction, effects some change on what is to come. The discipline of engineering is special, if not unique, in that it constructs artifacts that will be carried forward into the future, bringing with them the unarticulated assumptions of the present. Consider the "multi-core crisis", where an assumption of sequential machines met unavoidably with a world of superscalar, multi-level cached, multi-core machines. Algorithms developed in the sequential era continue to work on machines that have been fundamentally redesigned, but they are no longer necessarily optimal, requiring a rewrite of existing software and a retraining of existing thought to meet present machines on their own terms. This imposes a burden of inefficiency, both on the people who must relearn and on the systems that run below their potential. Consider as well the trend toward non-binary gender expression where it meets with software constructed in an era that assumed binary gender. Whether intended now or not, this enforces a gendering scheme from the past onto the present. Adapting such systems demands work from those whose gender identity does not conform to the old model -- convincing those unaffected is a great labor; the minority is often made to bear the disproportionate weight of the work for equality, to put it mildly -- and from those who steward such systems.
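
As a small illustration of the kind of rewrite the multi-core crisis demands, here is a sketch in Python; the chunked work function and the four-worker pool are my own assumptions, not anyone's production code. The sequential version remains correct on a modern machine; it simply leaves every core but one idle.

```python
# The "multi-core crisis" in miniature: the sequential loop still works,
# but the same computation must be restructured to use many cores.
from concurrent.futures import ProcessPoolExecutor


def work(chunk: range) -> int:
    """A stand-in for a CPU-bound computation over part of the input."""
    return sum(i * i for i in chunk)


def sequential(n: int) -> int:
    # The pre-multi-core formulation: one pass, one core.
    return work(range(n))


def parallel(n: int, workers: int = 4) -> int:
    # The same computation restructured: split the input, farm the
    # chunks out to separate processes, combine the results.
    chunks = [range(i, n, workers) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(work, chunks))


if __name__ == "__main__":
    assert sequential(1_000_000) == parallel(1_000_000)
```

The rewrite is mechanical enough here; in real systems it reaches into data structures, locking disciplines and the habits of the people who maintain them.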

Where Jonas contends that technology tends to gather up its own momentum and "overtake the wishes and plans of the initiators", we here further contend that, if we view ourselves not as the beginning of some future or the end of some past but as a people in the middle of both, then the weight of the choices made by the past is borne most heavily when making choices to shape the future. No technology is neutral. In seeking to solve some problem it encodes at once what its originator viewed as a problem and what its originator took to be a valid solution to the problem as framed. No technology can be fully controlled, as per Perrow's notion of the "normal accident". What is made will express its reality and affect the reality it is placed in. Jerry Mander expresses this clearly in his "Four Arguments for the Elimination of Television", though now, maybe, the framing of his argument will seem out of date. Mander worked as an advertising executive at a time when primacy in mass media was passing from newspapers and radio to television. The early assumption about television was that it would have great, positive effects on mass culture. It would be the means of tele-education, lend an immediacy to politics and spread high culture to all the classes which had been cut off from it for want of leisure time. Television did not simply extend the existing culture into a new medium but invented one, bringing what parts of the old culture were suitable into the new one created and informed by television.

In one generation, out of hundreds of thousands in human evolution, America had become the first culture to have substituted secondary, mediated versions of experience for direct experience of the world. Interpretations and representations of the world were being accepted as experience, and the difference between the two was obscure to most of us.[42]

As I say, though, Mander's argument and, more, Neil Postman's in his "Amusing Ourselves to Death", while important, are difficult to comprehend on their own terms. We've entered a time when television as the dominant medium is in decline and the culture it made, let alone obscured, has become distant. What has more immediacy for us now living is the change brought by the Internet, by the re-centering of mass culture on its norms. The early Internet was dominated culturally by a certain kind of person: educated, often technical, and living in parts of the world with ready access to telephone networks and cheap electricity. Many of these people -- for reasons that are beyond this essay -- were anti-authoritarian in mood, suspicious of existing power structures but perfectly comfortable setting up new power structures centered around their strengths: capability with computers and casual indifference to the needs of others -- coded as cleverness -- foremost. Allison Parrish's "Programming is Forgetting: Toward a New Hacker Ethic" is an excellent work in this direction. Of interest to our purpose here is the early utopian scheme of the Internet: in "throwing off" existing power structures the Internet would be free to form a more just society. As John Perry Barlow said in "A Declaration of the Independence of Cyberspace":

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. (...) You have not engaged in our great and gathering conversation, nor did you create the wealth of our marketplaces.[43]

That the global Internet descended from the ARPANET, a United States funded project to build a distributed communication network that could survive a nuclear shooting war, undercuts this claim somewhat. But, granting it, Barlow continues

Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract. (...) Our world is different. (...)

We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.

We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.[44]

Barlow's utopia of radical freedom -- specifically centered around radical freedom of speech -- did not exist at the time and has ultimately not come to fruition. The failure of Barlow's conception is that the Internet was not different from the world that spawned it. Much like television, the Internet did not extend a culture into a new medium but created a new culture, cannibalizing the old to make itself. As we are now acutely aware, class and race were not left behind as signifiers but were changed, and in being changed were not made less important. Silence and conformity are not incidental features of the world shaped by the "Governments of the Industrial World" but are, seemingly, of human nature. At any rate, even if they are not, mediating human interaction via computer networks has done nothing to lessen the conformity or the silencing; it does not bring forward people's best selves. Computers are not magic. Of no less importance to the failure of Barlow's vision is that it was necessarily gated by a capital requirement. Especially in 1996, when Barlow's declaration was published, access to the global Internet was not a cheap thing, requiring semi-specialized computer hardware and knowledge to interact with effectively. In reality, Barlow's different world was merely exclusive, and it invested exclusivity with a kind of righteousness.

The Internet did become available to a broad swath of humanity -- a moment which, tellingly, the original "inhabitants" of the Internet refer to as the Eternal September, a reference to the period of initiation that new college students went through on Usenet at the start of every school year, in September -- but not in the manner that Barlow expected. Initial inclusion was brought by companies like AOL, which built "walled gardens" of content, separated off from the surrounding Internet and exclusive to paid subscription. Search engines eroded this business model but brought a new norm: "relevance" as defined by PageRank or similar algorithms. Content as such was only of value if it was referred to by other Content. Value became a complicated graph problem, substituting a problem of computation for what had been a matter of human discernment. This value norm we see reflected in the importance of "viral" material in our discourse today. Google succeeded in prying the walled gardens away from their subscription models. However free information wants to be, paying for the computer time to make it so is not, and it is no accident that, today, the largest advertising platforms on the Internet are either walled gardens -- Facebook -- or search engines -- Google. Advertisement subsidizes "free" access to content, in much the same manner that advertisement subsidizes television. The makers of content inevitably, in both mediums, change their behavior to court advertisers, elsewise they cannot exist. What is different about the Internet -- and this is entirely absent in the early utopian notions -- is the capability for surveillance. Advertising models on the Internet differ from those of previous media, which relied on statistical models of demographics to reach target audiences. So-called "programmatic" Internet advertising is built on a model of surveillance, where individual activity is tracked and recorded over long periods, compiled into machine learning models of "intent" and subsequently paired with advertisers' desires. Facebook might well be a meeting place for humanity but it is also a convenient database of user-submitted likes and dislikes, of relationships and deep personal insights, to be fed into a machine whose purpose is convincing humanity to buy trivialities. Information is compressed to drive the collection of information; the collection of information becomes a main purpose of the creation of information.
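
As a toy sketch of that relevance norm -- value as a graph computation -- consider the power-iteration version of the PageRank idea below, in Python. The four-page link graph, the damping factor and the iteration count are illustrative assumptions; this is the shape of the computation, not Google's production algorithm.

```python
# A toy power-iteration sketch of the PageRank idea: a page is worth
# roughly what the pages linking to it are worth. The four-page link
# graph is a hypothetical example, not any real corpus.
DAMPING = 0.85

LINKS = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}


def pagerank(links: dict[str, list[str]], iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline, then receives a share of
        # the rank of each page that links to it.
        new_rank = {page: (1.0 - DAMPING) / len(pages) for page in pages}
        for page, outgoing in links.items():
            share = DAMPING * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank


if __name__ == "__main__":
    # "c" is pointed at by three pages and ends up with the highest rank.
    print(pagerank(LINKS))
```

The page most linked-to ends up with the highest rank; discernment gives way to a fixed point of the link graph.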

Technically speaking, building systems that interact in the world of programmatic advertising is difficult. Most exchanges -- the Googles and Facebooks of the world -- work by auction. Websites make available slots on their pages where ads may go and, in the roughly 100 milliseconds between the start of a page load and the moment human perception registers an empty space, an auction occurs. A signal with minimally identifying information goes out from the exchanges to bidders. The bidders must use this signal to look up identifying information in their private databases and, from this information, make a bid. This happens billions of times a day. These bidders, built by private companies and working off pools of information collected by those same companies, work largely to drive clicks. That is, the ads they display after winning bids are meant to be "relevant" to the user they're displayed to, enough to make that user click and interact with whatever it was that the advertiser put on the other side of the link. So do the wheels of commerce now turn. Building a system that is capable of storing identifying information about humanity in local memory and performing a machine learning computation over that information in the space of approximately 35 milliseconds -- if you are to respond in the 100 milliseconds available you must take into account the time to transmit on both sides of the transaction -- is no small matter. It takes real dedication to the safety analysis of complex systems, and to automated coping with the catastrophic failure of firm real-time systems, to make this possible at profitable scale. It is easy, in the execution of such a system, to confuse the difficulty of its construction with its inherent quality. It is this confusion that must be fought. Is it in fact a social good to build a surveillance database on the whole of humanity to drive the sale of trinkets, ultimately so that content on the Internet can be "free" in vague accordance with the visions of wishful utopians? Maybe it is. But, if we can use the relevance norm of the Internet against itself, we note that some 11% of all Internet users now use adblock software and that this percentage grows year by year.
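
The firm real-time constraint is easier to see in miniature. Below is a hedged sketch, in Python, of a bidder that must answer within its share of the budget or not at all; the profile store, the stand-in scoring model and the 35 millisecond figure are assumptions for illustration, not any exchange's actual protocol.

```python
# A sketch of a firm real-time bidder: a late answer is worth nothing,
# so on overrun the correct behavior is to return a no-bid immediately.
# The profile store and scoring model are stand-ins for illustration.
from concurrent.futures import ThreadPoolExecutor, TimeoutError

BID_BUDGET_SECONDS = 0.035  # ~35ms left after transit time both ways

PROFILES = {"user-123": {"intent_score": 0.72}}  # stand-in for the local store

_pool = ThreadPoolExecutor(max_workers=8)


def score_request(user_id: str) -> float | None:
    """Look up the user and run the (stand-in) model; may be slow."""
    profile = PROFILES.get(user_id)
    if profile is None:
        return None
    return profile["intent_score"] * 2.5  # pretend bid-price model


def handle_bid_request(user_id: str) -> dict:
    """Answer within the budget, or decline to bid at all."""
    future = _pool.submit(score_request, user_id)
    try:
        price = future.result(timeout=BID_BUDGET_SECONDS)
    except TimeoutError:
        future.cancel()  # best effort; the auction has moved on
        return {"bid": False, "reason": "deadline exceeded"}
    if price is None:
        return {"bid": False, "reason": "unknown user"}
    return {"bid": True, "price": price}
```

Everything interesting -- the size of the in-memory store, the quality of the model, what happens when the pool saturates -- has to fit inside that budget.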

That the consequence of a technology is intimately tied to its initial fitness for purpose but ultimately untethered from it should be no surprise. Yet the phrase "We really ought to keep politics out of technology" is often spoken and assumed to be correct. This is part and parcel of the reductionist mindset, one which works well in the scientific disciplines but, being effective in that limited domain, is carried outward and misapplied. Reductionism works only so long as we are interested in the question of how a thing is, not why a thing is. Put another way, reductionism is an intellectual ideology suited to a method of learning in which the learning has no effect on the underlying system. Discovering the laws of optics does not change the laws of optics. This is fundamentally unsuitable to the project of engineering. A technical artifact will change the world it is placed in, and the question of why that thing is becomes fundamentally a part of it.

Our inventions are wont to be pretty toys, which distract our attention from serious things. They are but improved means to an unimproved end, an end which it was already but too easy to arrive at; as railroads lead to Boston or New York. We are in great haste to construct a magnetic telegraph from Maine to Texas; but Maine and Texas, it may be, have nothing important to communicate. (...) We are eager to tunnel under the Atlantic and bring the Old World some weeks nearer to the New; but perchance the first news that will leak through into the broad, flapping American ear will be that the Princess Adelaide has the whooping cough.[45]

If we wish to construct software of quality -- software whose unknown consequences are limited -- we must understand two things. Firstly, we must be aware of the immediate results of our construction. This demands a thinking-through of the effects of the system to be constructed, of what community it will affect and what community it might well create. This demands, as well, care toward initial behavior, of which specification and testing are key components. Secondly, we must understand that the consequences of the systems we make will grow well beyond what we can see at the outset, and that this growth will be colored by the ambitions of the technical and political organization that spawned them. We must have introspection, not as a secondary feature but as a first-class consideration of the engineering discipline. Reductionism is a powerful tool for reasoning but it is a tool with intentional limitations. It is a great mistake to reduce oneself to the merely mechanical mindset implied as the correct avenue by such reason. There is so much more to the human mind.

I'd like to speak a word for good software systems. By this I mean something very simple, though it took an awfully long time to get here. A good software system is one that has been constructed with great care -- constrained through cooperation with others, constrained through probing of its possible states -- to be fit for some well-intentioned purpose. That purpose is necessarily political; it solves some problem found in the world by someone. The technical method applied to the software system's construction will be shaped by the political demands, and the approach to those demands will in turn be drawn around by the technical constraints of the modern day. It is essential to question the assumptions that are the genesis of the software system, to apply to them the best reason of your own sense of right from wrong, to probe the world that they seek by their own momentum to bring about and find it in accord with the one you seek to inhabit. The engineer of a good software system will understand that this "goodness" is fleeting, made up of the needs of a certain time and a certain place and gone almost as soon as it arrives. Knowledge grows and what is thought best changes. Randomized testing is, currently, one of the best testing methods in its trade-off between cost and effectiveness at probing for bugs. Improvements to the method are published regularly -- "Beginner's Luck: A Language for Property-Based Generators" by Lampropoulos, Hughes, et al. starts to cope with the generation issue noted by Duran and Ntafos -- but should human knowledge truly advance then, someday, randomized testing will seem overly simplistic and ineffective. That is the progression of the works of pure reason. The progression of the work of politics does not advance in this way and it is common to discount it as "trivial" thereby. This "triviality" is in fact a mask over true complexity, a domain of knowledge in which there is no clear right from wrong but a slow climb up out of the shadows toward wisdom. If we are to build good software systems in this sense then we must understand that, no matter how good our intentions or, perhaps harder, no matter how fine our craft, a thing well made might in fact be a social ill. That is, the problem to be solved may not be a problem -- if it ever was -- or, should it still be, may not have been solved in a way that functions now, if it ever did.

In the Thoreauvian spirit of making known what is good, I say this: the sense that the pursuit of engineering is purely an exercise of reason is wrong and we would do well to abandon this fantasy. The software we create will be made for others; it is a wooden gun to a wooden gun. The sense that the techno-political balance must be worked through superior technology is wrong too. Consider that most software systems are under-tested. This is so because of the incentives of the political organization that surrounds them. Consider as well that this very same political organization is often not capable of engaging with technical artifacts in any deep fashion. Should you like to do randomized testing, say, but find no political support for it, well, then build a testing tool that encodes randomized testing fundamentally, inextricably. Reality will move toward your tool, the frog will be boiled and the future will be encoded with your norm. The future might well rue this, understanding more than we do, but such is the nature of true progress: the past, however advanced it seemed at the time, takes on a sense of triviality and pointlessness. Progress, true progress, is not done out of arrogance -- as a demonstration of one's own talent -- but out of duty for the well-being of the future. The sense that ultimate political aims do not matter is also wrong. These, no less than the technical details of a project, must be fully understood and thought through, our own works no less than the works of others. The political aims of a technical system are a fundamental part of the design. We must probe these and make known what we find. Perhaps, say, an always on voice-activated assistant is a good to the world. Why, then, does it record everything it hears and transmit this to an unaccountable other? Was this device made to make the lives of people better or to collect information about them and entice them into a cycle of want and procurement? Just as technique moves on, so too politics. This, because we who people the world change: our needs change and what was once good may no longer be. What seems good may not be. In choosing to inflict some thing on the future -- whether by construction or by support of construction -- we must strive to make a freedom for the future to supplant it at need. We must keep the future of humanity "in the loop" of the technologies that shape their world. The techniques that we develop today are those which are the very foundation of what is to come. The politics of today makes the technique of today, the techniques the politics.

In the spirit of making known I say this: what is good is that which seeks the least constraint for those to come and advances, at no harm to others, knowledge in the present day. We are those who make the future. Our best will not be good enough but, in struggling to meet our limit, we set the baseline higher for those who will come after us. It is our responsibility so to struggle.


  1. Henry David Thoreau, Civil Disobedience (The Library of America, 2001), 203.

  2. Thoreau, Civil Disobedience, 203.

  3. Thoreau, Civil Disobedience, 224.

  4. Thoreau, Civil Disobedience, 204.

  5. Thoreau, Civil Disobedience, 204.

  6. Olin E. Teague, et al., Automatic Train Control in Rail Rapid Transit, (Office of Technology Assessment, 1976), 45.

  7. Olin E. Teague, et al., Automatic Train Control in Rail Rapid Transit, 48.

  8. G. D. Friedlander, The case of the three engineers vs. BART, (IEEE Spectrum, 11(10), 1974), 70.

  9. G. D. Friedlander, The case of the three engineers vs. BART, 69.

  10. Olin E. Teague, et al., Automatic Train Control in Rail Rapid Transit, 48.

  11. Charles Perrow, Normal Accidents: Living With High-Risk Technologies, (Princeton University Press, 1999), 5.

  12. Perrow, Normal Accidents, 75.

  13. Perrow, Normal Accidents, 76.

  14. Perrow, Normal Accidents, 92.

  15. Perrow, Normal Accidents, 94.

  16. C. West Churchman, Wicked Problems, (Management Science, 1967), 14.

  17. Grigori Medvedev, The Truth About Chernobyl (BasicBooks, 1990), 32.

  18. Medvedev, The Truth About Chernobyl, 37.

  19. Medvedev, The Truth About Chernobyl, 58 - 59.

  20. David E. Hoffman, The Dead Hand: The Untold Story of the Cold War Arms Race and Its Dangerous Legacy, (Anchor Books, 2009), 245.

  21. Hans Jonas, The Imperative of Responsibility: In Search of an Ethics for the Technological Age, (University of Chicago Press, 1984), 1.

  22. Jonas, The Imperative of Responsibility: In Search of an Ethics for the Technological Age, 24.

  23. Jonas, The Imperative of Responsibility: In Search of an Ethics for the Technological Age, 163.

  24. Jonas, The Imperative of Responsibility: In Search of an Ethics for the Technological Age, 36.

  25. Jonas, The Imperative of Responsibility: In Search of an Ethics for the Technological Age, 32.

  26. John R. Garman, The BUG Heard 'Round the World: Discussion of The Software Problem Which Delayed the First Shuttle Orbital Flight, (ACM SIGSOFT Software Engineering Notes, 6(5), 1981), 3.

  27. Garman, The BUG Heard 'Round the World: Discussion of The Software Problem Which Delayed the First Shuttle Orbital Flight, 4.

  28. Garman, The BUG Heard 'Round the World: Discussion of The Software Problem Which Delayed the First Shuttle Orbital Flight, 5.

  29. Garman, The BUG Heard 'Round the World: Discussion of The Software Problem Which Delayed the First Shuttle Orbital Flight, 6.

  30. John C. Knight and Nancy G. Leveson, An experimental evaluation of the assumption of independence in multiversion programming, (Software Engineering, SE-12(1), 1986), 2.

  31. Knight and Leveson, An experimental evaluation of the assumption of independence in multiversion programming, 10.

  32. Knight and Leveson, An experimental evaluation of the assumption of independence in multiversion programming, 16.

  33. Knight and Leveson, An experimental evaluation of the assumption of independence in multiversion programming, 14.

  34. Robyn R. Lutz, Analyzing software requirements errors in safety-critical, embedded systems, (Proceedings of the IEEE International Symposium on Requirements Engineering, 1993), 1.

  35. Lutz, Analyzing software requirements errors in safety-critical, embedded systems, 4.

  36. Lutz, Analyzing software requirements errors in safety-critical, embedded systems, 4.

  37. Jonathan P. Bowen, Formal Specification and Documentation using Z, (International Thomson Computer Press, 1996), ix.

  38. Joe W. Duran and Simeon C. Ntafos, An Evaluation of Random Testing, (Software Engineering, SE-10(4), 1984), 443.

  39. Nancy G. Leveson, The Role of Software in Spacecraft Accidents, (Journal of Spacecraft and Rockets, 41(4), 2004), 2.

  40. Leveson, The Role of Software in Spacecraft Accidents, 4.

  41. Leveson, The Role of Software in Spacecraft Accidents, 5.

  42. Jerry Mander, Four Arguments for the Elimination of Television, (William Morrow Paperbacks; Reprint Edition, 1978), 18.

  43. John P. Barlow, A Declaration of the Independence of Cyberspace (retrieved from https://www.eff.org/cyberspace-independence, 2017).

  44. Barlow, A Declaration of the Independence of Cyberspace.

  45. Henry David Thoreau, Walden; or, Life in the Woods, (The Library of America, 1985), 363 - 364.