Future of Coding 70 • Beyond Efficiency by @Dave Ackley
Dave Ackley’s paper Beyond Efficiency is three pages long. With just these three pages, he mounts a compelling argument against the conventional way we engineer software. Instead of inflexibly insisting upon correctness, maybe allow a lil slop? Instead of chasing peak performance with cache and clever tricks, maybe measure many times before you cut. So in this episode, we’re putting every CEO in the guillotine… (oh, that stands for “correctness and efficiency only”, don’t put us on a list)… and considering when, where, and how to do the robust thing.
I was unaware of the Knuth response to Naur. Thanks for mentioning it! I found a copy at
tug.org/TUGboat/tb10-4/tb26complete.pdf
Knuth invented a new kind of documentation, one that hardly anyone uses, but that is specifically designed for communicating how a program works to other human beings.
Knuth has also expended great effort in the study of other people's code and programs, including code written in long-dead programming languages.
If there is anyone in the world capable of transcending the limits described by Peter Naur, both by transmitting the theory of a program and by recreating it, it would be Donald Knuth. I see no reason to doubt the truth of Knuth's claims, but I also don't see them as contradicting Naur.
Naur does not claim it is impossible to revive a program in practical terms, only that it is difficult, frustrating, and time-consuming, and "may lead to a revived theory that differs from the one originally had by the program authors and so may contain discrepancies with the program text." I believe his point is that you cannot be certain the revived theory is the same as the original theory; however, I do not have enough experience with literate programming to judge Knuth's claim that a well-written literate program might have a good chance of being accurately revived.
Calling the stored-program computer a "von Neumann model" does a tremendous disservice to J. Presper Eckert, who invented and wrote up the idea around six months before von Neumann joined the ENIAC project. See the book A History of Computing in the Twentieth Century for a copy of the original memo.
von Neumann wrote a draft report that was widely shared informally (en.m.wikipedia.org/wiki/First_Draft_of_a_Report_on_the_EDVAC), but to the best of my knowledge he never claimed the ideas were his. He was writing up the ENIAC team's plans for the EDVAC.
Y'all may also enjoy von Neumann's paper "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components": static.ias.edu/pitp/archive/2012files/Probabilistic_Logics.pdf
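For a sense of what that paper is getting at, here's a toy sketch (mine, not von Neumann's actual construction): make a reliable answer out of unreliable parts by running several copies and taking a majority vote.

```python
import random


def unreliable_not(bit, p_fail=0.1):
    """A NOT gate that gives the wrong answer with probability p_fail."""
    correct = 1 - bit
    return correct if random.random() > p_fail else bit


def majority_not(bit, copies=3):
    """Run several unreliable copies and take a majority vote."""
    votes = [unreliable_not(bit) for _ in range(copies)]
    return 1 if sum(votes) > copies / 2 else 0


random.seed(1)
trials = 100_000
single_errors = sum(unreliable_not(0) != 1 for _ in range(trials))
voted_errors = sum(majority_not(0) != 1 for _ in range(trials))
print(single_errors / trials)  # ~0.10  (one copy fails 10% of the time)
print(voted_errors / trials)   # ~0.028 (3p^2 - 2p^3 for three copies)
```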
I was glad to hear the new discussion generated by Programming as Theory Building - both the episode and the paper. It is my favourite episode and was very influential on me!
I appreciated the examples of non-distributed systems that benefit from being robust to *programmer error*. That type of error is harder to characterize than the random bit-flipping of cosmic rays because it's so human, but it's the type of error I most often think of robustness in terms of.
I didn’t have as good a word for it before. “Defensive programming” doesn’t really capture it.
Implementing invariants directly, like Jimmy mentioned. Sort the thing every time if it's supposed to be sorted, rather than trying to maintain that property indirectly. It's not just about doing the easiest thing first, or avoiding premature optimization. It's more like: when I mess up code elsewhere, how do I make sure this part won't make it worse?
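To make that concrete, here's a rough sketch of the kind of thing I mean (the leaderboard and its method names are just made up for illustration):

```python
# Rough sketch of "implement the invariant directly": instead of trusting
# every insertion path to keep the list sorted, re-establish the property
# at the point where it actually matters.

import bisect


class Leaderboard:
    def __init__(self):
        self._scores = []

    def add_fast(self, score):
        # Efficient path: the sorted invariant is maintained incrementally,
        # but only if every caller remembers to use this method.
        bisect.insort(self._scores, score)

    def add_sloppy(self, score):
        # A bug elsewhere (or a future teammate) might just append.
        self._scores.append(score)

    def top(self, n):
        # Robust path: enforce the invariant here, every time, so a mistake
        # upstream costs performance rather than correctness.
        return sorted(self._scores, reverse=True)[:n]


board = Leaderboard()
board.add_fast(10)
board.add_sloppy(99)  # violates the "always sorted" assumption
board.add_fast(42)
print(board.top(2))   # still [99, 42], because top() re-sorts defensively
```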
I still haven't read the paper, but one aspect of the episode I found interesting was the idea that having simpler software avoids bugs. It seems like this is being conflated with the idea of sacrificing efficiency for robustness, whereas sometimes the simpler code/algorithm is in fact less robust and the more robust implementation requires more code (and potentially more bugs).
I’d be interested in trying to disentangle the robustness from the simplicity dimensions when making tradeoffs. So finding new ways to structure software to be inherently more robust to bugs seems compelling yet difficult.
Overall, the contention between correctness, efficiency, and robustness seems to arise from the viewpoint that correctness is a binary proposition rather than a probabilistic measure of the values we want our software to achieve. If we have a myopic view of correctness, we're leaving all the tradeoffs off the table.
And yeah — I'm no friend to binary views of correctness! Glad to be reminded of that.
Ivan Reese, loved the musical interlude, and the mix on the quotation effect seemed perfectly dialed in.
Reminds me of how analog computers can be more robust because they aren't susceptible to things like accidental, cosmic-ray-style bit flips causing a major change in the value of the computation.
Right. Though they then need to be robust against, say, results being influenced by ambient temperature :)
Deutsch discusses digital versus analogue at length in The Beginning of Infinity; here's a bit from that chapter:
... during lengthy computations, the accumulation of errors due to things like imperfectly constructed components, thermal fluctuations, and random outside influences makes analogue computers wander off the intended computational path. This may sound like a minor or parochial consideration. But it is quite the opposite. Without error-correction all information processing, and hence all knowledge-creation, is necessarily bounded. ... So all universal computers are digital; and all use error-correction with the same basic logic that I have just described, though with many different implementations. Thus Babbage’s computers assigned only ten different meanings to the whole continuum of angles at which a cogwheel might be oriented. Making the representation digital in that way allowed the cogs to carry out error-correction automatically: after each step, any slight drift in the orientation of the wheel away from its ten ideal positions would immediately be corrected back to the nearest one as it clicked into place. Assigning meanings to the whole continuum of angles would nominally have allowed each wheel to carry (infinitely) more information; but, in reality, information that cannot be reliably retrieved is not really being stored.
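Here's a tiny illustration of that cogwheel "clicking into place" (my own sketch, not anything from the book; the noise level and step count are arbitrary):

```python
# A digit is stored as one of ten wheel positions. After every operation the
# corrected wheel snaps back to the nearest ideal position, so small mechanical
# drift is erased; the uncorrected wheel just accumulates it.

import random

POSITIONS = 10  # ten meaningful orientations per wheel, as in Babbage's design


def click_into_place(angle):
    """Snap a continuous position to the nearest of the ten ideal positions."""
    return round(angle) % POSITIONS


def advance(angle, correct):
    """Advance the wheel by one digit, with a little mechanical noise."""
    angle += 1 + random.uniform(-0.3, 0.3)  # imperfectly constructed components
    return click_into_place(angle) if correct else angle


random.seed(0)
analog = digital = 0.0
for _ in range(1000):
    analog = advance(analog, correct=False)   # drift accumulates
    digital = advance(digital, correct=True)  # drift corrected every step

# After 1000 increments both wheels should read 0 (mod 10). The corrected one
# always does; the uncorrected one has usually wandered several positions away.
print(round(analog) % POSITIONS, digital)
```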
An analog virtue / limitation is that you cannot have a huge tower of abstraction because noise accumulates: indirection has a direct cost!
Relevant reading folks might enjoy: The dry history of liquid computers
Thanks for the great episode. "The Fiverr Vaccine" was super funny. And I loved reading the paper.
I started out robustbrained. I was ready to salute the robustness flag. I started memorizing the robustness national anthem (which is twice as long as it needs to be).
But now it feels like that's missing the point...
I should be saluting the local-first flag! I should sing the permacomputing national anthem and get my hair done at the convivial computing salon! These are actual value systems that imagine a different world and say "this would be better". Robustness is a means to an end, just like efficiency.