Future of Coding • Episode 65
Laurence Diver • Interpreting the Rule(s) of Code: Performance, Performativity, and Production
futureofcoding.org/episodes/065
The execution of code, by its very nature, creates the conditions of a “strong legalism” in which you must unquestioningly obey laws produced without your say, invisibly, with no chance for appeal. This is a wild idea; today’s essay is packed with them. In drawing parallels between law and computing, it gives us a new skepticism about software and the effect it has on the world. It’s also full of challenges and benchmarks and ideas for ways that code can be reimagined. The conclusion of the essay is flush with inspiration, and the references are stellar. So while it might not look it at first, this is one of the most powerful works of FoC we’ve read.
w/ no other context, my fingers are crossed for Jimmy Miller's philosophy corner to be about Foucault.
Lol. I don't know enough about Foucault sadly. I'm more of an analytic philosophy person. But I definitely should read more Foucault.
As is true of all philosophers of that vintage, Foucault can be problematic. Be warned. I found his various writings on power pretty interesting, and, I think, formative.
That's WILD. Laurence and I have gotten into it on more than a couple of occasions online. Unsurprisingly, he has THOUGHTS about Rules as Code. But they have never struck me as "inspirational." Looking forward to this.
Jason Morris yeah I figured you might have some thoughts on this one :)
There are parallels between how code is interpreted and executed and how law is interpreted and executed. But what people are actually proposing to do has nothing to do with executing law programmatically. The system cannot be so easily replaced. Cryptomaximalists exist, but theirs is the only version of the problem he is describing that actually does exist. Nowhere else is the thing he is describing even possible, much less extant.
The idea that code is dangerous because it can be used to turn norms into laws is true, but only inside the context of non-legalist structures, which means the danger is mitigated. And it is not unique to code. Policies that are blindly followed do the same thing. Laws themselves are a hardening of policy intents in that way. Contracts are a hardening of agreements, too. To the extent that code is different from these other hardenings, it is a matter of degree: the risk of doing the wrong thing faster, or more automatically, for lack of context or a human in the loop.
"Speed of execution of code prevents the possibility of reevaluating it's terms." That is it's virtue. It is not without a concomitant risk that we are doing the wrong thing, but faster. But doing the wrong thing faster is an inherent risk that is mitigated by basically all of software development. You cannot take the quality of laws we have now, and assume that they will be automated as-is. Not only is that a terrible idea that no one would support, (not even the cryptomaximalists think we should just automate contracts as they currently are) it is quite literally impossible to do. Natural language are libraries that get imported implicitly into every law and contract, and when you try to encode them, you discover you can't without requiring it because of the absence of a natural language library, among many many other issues.
The idea "you have no appeal" of code is true only if you live inside the computer. In reality, the software is owned and used by someone who can be sued. When someone stole a bunch of Bitcoin from an online contract, they were sued.
We are also taking the ex post necessity of the legal system and treating it as a virtue. The fact that you have to sue someone and ask a judge to interpret a contract is not a feature. Requiring things to go bad first is an efficiency measure we arrived at because most things go fine, most things that don't get sorted out anyway, and only the remainder actually require attention from the system, which has limited resources.
Democracy and the rule of law do not arise by virtue of the availability of a referee in the absence of agreement. That is a pathetic, emaciated view of democracy and the rule of law. They arise, too, and far more so, from the 99% of cases that never require the attention of a lawyer or judge.
The important distinction between how laws and code are executed is not that one is mechanised and the other is not. It is that one is fault tolerant and self healing while the other is not. Which is why we require so much less of our laws than we do of our code. But increasingly, civil society IS being automated, particularly inside governments. And the disconnect between what the laws are and what the machines need them to be is a huge issue. We cannot pretend that laws don't need to be automated. They plainly do.
Jimmy absolutely nails it. There is no absence of ambiguity in code; it is just intentional. And the interesting parallel is that intentional ambiguity in law is considered valuable; accidental ambiguity is not.
Which gets to the idea that we are treating law as virtuous for requiring interpretation and having so much ambiguity, when that is not the case. Clarity and fairness are better than ambiguity and the right of appeal. Needing to appeal is a failure mode. If it were not possible, that would be awful. But that doesn't mean we should want more of it.
Code is different in degree from law, yes, but so is the way we use it. No one user-tests a law. No one debugs a law. The people who write laws are not tasked with their maintenance, or held responsible for errors. Code is different, but that is only the tool. The processes in which that tool is used are also different, and designed to address exactly the risks he is describing.
He seems obsessed with what we can do to programming in order to fix the problems with putting laws into code. He seems oblivious of or indifferent to all the ways that we can improve laws by using the code we already have.
The question is not how do we avoid the strong legalistic effect of our code. The question is under what circumstances is that risk appropriately mitigated and worth adopting?
And this is where it gets down to brass tacks, for me. The legal system sucks. Most people who need help can't get it. Governments are overwhelmed and underfunded. And raising this spectre of strong legalism in code, while it has the intent of protecting people from harm, is actually being used by, among other parties, a protectionist legal profession to argue directly against one of the most helpful things we could do right now, which is automated legal harm reduction. An automated system can literally only be better than nothing, and can be justified on that basis, because nothing is what so many people actually have.
These arguments about the rules of law and democracy are being used to stifle efforts to get real people real help with real problems because of effectively imaginary risks, or risks we already know how to mitigate.
And the horses are so far gone from the barn, they have forgotten it. Governments of sheer necessity are automating laws constantly. Those of us who call for rules as code are merely asking that they do it more consciously, and reusably, with tools designed for the task.
Does programming need to change to mitigate these risks effectively? A little. We need tools that are accessible to a much wider variety of people, that have a far smaller semantic gap between the natural language expression of the rule and the computer language expression of the rule, tools that are designed to facilitate human validation of those encodings, languages that are inherently explainable, with sophisticated reasoning, that cite their sources, that name the person whose legal interpretation was modeled, that are accessible, open source, and trustworthy. And those tools need to make it possible to test and validate anything else whose risks we might like to reduce. So we don't need to change all of programming, but we do need to add to it.
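As a rough illustration of what a couple of those properties might look like in practice (this is a speculative sketch, not any existing rules-as-code tool; the structure, statute, and field names are all invented):

```python
# A speculative sketch (invented structure, not an existing tool): an
# encoded rule that carries its own provenance, so the executable
# interpretation can be traced back to the text and the person who read it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EncodedRule:
    predicate: Callable[[dict], bool]  # the executable interpretation
    source: str                        # citation to the legal text
    interpreter: str                   # whose legal reading was modeled
    rationale: str                     # human-readable explanation

eligible_for_benefit = EncodedRule(
    predicate=lambda person: person["age"] >= 65 or person["disabled"],
    source="Hypothetical Benefits Act, s. 4(1)-(2)",
    interpreter="J. Example (encoded 2023-01-01)",
    rationale="s. 4(1) grants eligibility at age 65; s. 4(2) extends it "
              "to persons with a qualifying disability.",
)

applicant = {"age": 70, "disabled": False}
print(eligible_for_benefit.predicate(applicant))  # True
print(eligible_for_benefit.source)                # auditable provenance
```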
it isn’t unusual to say that a program was “executed.”
likewise, orders and laws can be executed.
throughout listening to y'all's discussion, and throughout reading the piece, and throughout reading the responses here, i've been trying to figure out how to articulate ~something~ about how power is enacted and preserved across laws and programming.
i don’t have a clear idea — but wanted to nudge the conversation in that direction by asking the question:
a state has the power to enforce laws, an individual doesn’t — not unless they’re backed by the state
does this relate to programming in some way?
I think there is more to contrast than to compare. Legal power is limited. There is only so much of it. Only one set of laws can be executing in one place at a time. So getting and keeping legal power is important. Code doesn't share that property. There is little to hold power over, because almost nothing is excludable, and almost nothing is mandatory, except by force of law. Sure, programming languages constrain their users. But you can just stop using them, so the constraint is voluntary. Sovereign citizens aside, you can't just opt out of law. Show me a situation where someone is trying to use software to collect and hold power over others, and I'll show you someone who is using a combination of software and law. A license, a contract, a patent, or something.
Lovely episode!
Regarding the death of Atom, I have to say... it's been really nice making my own VS Code theme. It took a long time but it has made it feel more 'my own' (highly recommend putting in the time).
Listening makes me think so much of Dave Ackley's stuff! He's probably coming from a very different angle, but I think the whole 'robust-first' idea relates a lot. This video would be a fun one to explore. There's loads of food-for-thought ones on the channel though (whether you agree or not).
Also, this episode made me think of the RNA vaccine code! The level of redundancy involved contrasts quite heavily with how I code in my everyday life (for obvious reasons). But it's still code! It was coded with a very different set of values + goals.
Related to Ivan's address issues, I constantly have trouble with inputting my data on online forms. My gender marker + name is different in different places, which is the same story for many trans people in this country. eg: My health details have to sometimes match hormone levels, sometimes birth sex. There's no allowance for deviation from the 'programmed' rigid boxes that are decreed to be correct. My legal name on my ID + health details is different to my passport. This is what I'm "supposed to do" to go through the required legal hoops in this country (it's outside my control - I'd rather keep it simple). If I'm registering for something with a human, I can explain this to them, so my paperwork gets done correctly. But when I'm using an online form, there's no capacity for this. The rigidness has been hard-coded in! And it consistently means that I cause errors and bugs in old+new code systems. Makes me think of people who try to change their legal name to "null" to cause trouble :) Just a little anecdote for you all.
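A minimal sketch of the hard-coded rigidness described in that anecdote (all field names hypothetical): the form encodes the assumption that one canonical name and gender marker exist per person, so a legitimate record that varies across documents reads as an error, and there is no field in which to explain why it varies.

```python
# The form assumes one canonical name and gender marker per person.
# A record that legitimately varies across documents is simply rejected.
def validate_registration(form: dict, passport: dict) -> None:
    if form["legal_name"] != passport["name"]:
        raise ValueError("Name mismatch: registration rejected")
    if form["gender_marker"] != passport["gender_marker"]:
        raise ValueError("Gender marker mismatch: registration rejected")

form = {"legal_name": "A. Example", "gender_marker": "X"}
passport = {"name": "A Example", "gender_marker": "F"}
try:
    validate_registration(form, passport)
except ValueError as err:
    print(err)  # a human clerk would hear the explanation; the form can't
```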
But you can just stop using them
I don't know, I feel pretty stuck with the languages I use sometimes - for various reasons :)
Fair. What I mean is that if you did stop using them, no one with any state-sponsored monopoly over violent persuasion would have anything to say about it. Which is admittedly a very low bar.
I haven't finished the podcast yet, nor followed this whole discussion, but I did share this article previously and it connects here pretty well. My take-away from the article is that software, so far, has not had the expected impact on overall productivity, and that the challenge is that it is hard and expensive to model the real world within the constraints of programming.
The article: web.archive.org/web/20221206161753/https://austinvernon.eth.link/blog/softwareisprocess.html
Seems not unlike the legalism discussed in the podcast, but I'll keep listening. 😄
(edit: fixed links)
Thank you for the detailed feedback! I think you've made some excellent points here. And maybe as an outsider I'm conflating some things you see as related. Personally I see your project of encoding laws in a computationally understandable way as a bit of a different concern than what we were talking about in this essay. I mean they are definitely related, but nothing we talked about was meant as an argument against that project. Having worked to encode complicated medical rules in a Rete-based rules engine, I definitely appreciate the difficulty involved, but also the benefits it can bring.
To me the most interesting part of this essay is focusing on exactly what you are talking about below:
Show me a situation where someone is trying to use software to collect and hold power over others, and I'll show you someone who is using a combination of software and law. A license, a contract, a patent, or something.
As software engineers, we often make systems that rely on the backdrop of the law to enforce things. But the way in which we enforce our side of those terms can be incredibly legalistic and harmful to users.
For example, recently a number of YouTubers, big and small, have had their ability to monetize completely removed because of "suspicious traffic". Basically, they have been accused of ad fraud. From what I've seen, even the most well-connected have had a difficult time resolving this issue.
Here I see a classic case of the kind of confusion we as software engineers make. What we are interested in is ad fraud (in this case). We want to stop users who are creating fake bot traffic from benefiting from it. But what we actually have access to in our software systems is not whether or not someone committed ad fraud. We have numbers and correlations. But we use these as if they get at the truth. We build systems for which there is no recourse.
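A sketch of that conflation, with invented thresholds: the system can only observe correlates of fraud, but downstream the flag it emits is treated as a finding of fraud, with no channel for the accused to respond.

```python
# The thresholds measure correlations, not whether fraud occurred.
def flag_ad_fraud(clicks: int, unique_ips: int, avg_session_secs: float) -> bool:
    return (
        clicks / max(unique_ips, 1) > 50   # many clicks per IP address
        or avg_session_secs < 1.0          # bounce-like, bot-like sessions
    )

def demonetize_channel() -> None:
    print("Monetization removed. No further information is available.")

# Downstream, the proxy is read as the truth, and the action is automatic.
if flag_ad_fraud(clicks=10_000, unique_ips=150, avg_session_secs=2.4):
    demonetize_channel()  # no appeal step is modeled anywhere
```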
I did want to pull out a few things you said below, but if I missed something you'd want me to comment on, happy to.
But what people are actually proposing to do has nothing to do with executing law programmatically.
Yeah, I don't think he is claiming that. But if we were unclear on that point, that's our bad.
The idea that code is dangerous because it can be used to turn norms into laws is true, but only inside the context of non-legalist structures, which means the danger is mitigated. And it is not unique to code.
My personal concern is that code's legalism leaks into the way we think about systems. Code's legalism is seen as a virtue to be emulated. Ambiguity (even intentional) and context-sensitivity are seen as bad. What we are trying to achieve and the measurement of that achievement get conflated. (OKRs are a terrible idea.)
"Speed of execution of code prevents the possibility of reevaluating it's terms." That is it's virtue. It is not without a concomitant risk that we are doing the wrong thing, but faster. But doing the wrong thing faster is an inherent risk that is mitigated by basically all of software development. You cannot take the quality of laws we have now, and assume that they will be automated as-is.
I mean, we do that in some ways. Look at how the DMCA process plays out for YouTube content. The copyright strikes are automated, the demonetization is automated, many times even the appeals are automated. Obviously no one thinks we are going to take all our laws and automate them. But it's hard to see how not being able to reevaluate the terms is a virtue. Getting the terms right is the hardest part of software, and I don't know any system that gets those terms right from the outset.
We are also taking the ex post necessity of the legal system and treating it as a virtue. The fact that you have to sue someone and ask a judge to interpret a contract is not a feature.
Yeah, I don't think anyone thinks the fact that you have to sue someone is a virtue. What is a virtue is that you have the freedom to do actions that you believe are or should be lawful, and if you are arrested/fined you have the ability to appeal that decision. Contrast this with "cursing" in Club Penguin, for example. You are immediately booted; you have no recourse. (I don't actually know if there was/is an appeal process in Club Penguin.)
We cannot pretend that laws don't need to be automated. They plainly do.
Yeah, I agree. I see that as the point of this paper: how can we automate things in a good way? What changes can we make to ensure our automations don't have legalistic problems?
And raising this spectre of strong legalism in code, while it has the intent of protecting people from harm, is actually being used by, among other parties, a protectionist legal profession to argue directly against one of the most helpful things we could do right now, which is automated legal harm reduction. An automated system can literally only be better than nothing, and can be justified on that basis, because nothing is what so many people actually have.
I can definitely see how this would happen. And I can see how it would be frustrating from the position you are in. For what it's worth, I don't see Diver doing this, but instead proposing ways in which we can build these systems well. That's one of the things I like about his work: it isn't an argument against using code, it is a discussion about how to do it well.
Sure, programming languages constrain their users. But you can just stop using them, so the constraint is voluntary.
Fair. What I mean is that if you did stop using them, no one with any state-sponsored monopoly over violent persuasion would have anything to say about it. Which is admittedly a very low bar.
Yeah, but software that you didn't explicitly decide to use can still ruin people's lives without a "state-sponsored monopoly over violent persuasion". Imagine the company I talked about that screens job applications using machine learning is used by all the fast-food restaurants in your area. Imagine these are the jobs you are qualified for, but the ML model has decided you will quit the job too early. Of course, you can go try to find a job elsewhere. Of course, a similar situation could happen due to human bias. But there is something very unsettling about this version of the future. The ML model can't be convinced; it can't provide reasons. It isn't a rational process whatsoever.
I think these are real harms we ought to pay attention to. Saying people can just not use software they don't like is like saying people can move if they don't like their local laws. Both statements are generally true, but not helpful for many people.
We need tools that are accessible to a much wider variety of people, that have a far smaller semantic gap between the natural language expression of the rule and the computer language expression of the rule, tools that are designed to facilitate human validation of those encodings, languages that are inherently explainable, with sophisticated reasoning, that cite their sources, that name the person whose legal interpretation was modeled, that are accessible, open source, and trustworthy. And those tools need to make it possible to test and validate anything else whose risks we might like to reduce. So we don't need to change all of programming, but we do need to add to it.
This sounds super interesting. If you have any papers that argue against what this paper argued for and give what you see as the alternative perspective, I'm super interested in that. No promises we will do it on the podcast, but definitely interested. Ideally a paper a bit less focused on the technical details of how to do these things with code, and more on arguing for their applications.
In general, I think what you've said here doesn't feel too much at odds with what I think we were trying to explore. I do think software has a role to play. I do think we need to automate things. I do totally get how these sorts of arguments might be used against projects like yours and that must suck. I don't think that's the aim of the argument here. You are definitely right that there is no discussion of how to use code to improve laws. I'd love to explore that further and am super happy there are people like you working on that. If you can help point us in that direction for some readings, I'd love to take a look :)
For clarity, you guys did great, it's the paper I'm giving feedback on. And I'm admittedly biased.
I would mind less if he were responding to the automation of law in software. But he claims to be responding to "Rules as Code." "Rules as Code" is not "software" is not "automation". He sees Rules as Code as "let's automate our laws more with software", when in fact it is "let's automate our laws better with better software", and it is aimed precisely at many of the evils he is warning against.
Code would make for terrible law. But "Rules as Code" doesn't call for that. What we should want is better laws and better code, and rules as code is a way to get both.
YouTube can automate unfairly. But is that a result of making code law? No. It is an automation of the Terms of Service. If it is unfair, but within the terms of the contract you agreed to with YouTube, then it is an automation of an unfair legal rule. Or, if it is an unfair automation of a fair contract, you can sue under the contract.
Code is dangerous when it impacts people negatively. That has nothing to do with law. The fact that you can contemplate code as a mini legalist dictatorship inside the machine is cute, I guess, but not helpful. If you come to believe encoding laws is inherently, unavoidably negative, you have been lied to. If you don't, no other prescriptions logically arise from the analogy, and all the real solutions to the problem arise without it.
I'm not aware of any papers arguing in this direction other than parts of my LLM thesis, and that is not a great paper. I have a small website where I post thoughts along these lines, in the hope that there might eventually be enough of them to form a collection worth reading. Happy to pass along those links if you are interested.
The reason that his advice is weak, is because he has precluded an actual solution to the problem by conflating it with the problem itself. Rules as Code should have been the prescription, not the problem.
Now that I've finished with the episode — I really enjoyed it!
I'm curious what examples — both specific implementations and of categories — of less-legalistic languages people are aware of?
In terms of syntax, Lisp, Smalltalk, and Forth seem to be on the minimal-syntax-so-you-build-your-own-language train, at least to some degree.
In terms of exposing the innards of the program (like Jimmy's Black example — do you have a link?), HyperCard, Smalltalk, and… I guess the web, at least the early web, did this. Perhaps not all the way down, but a lot more than most.
What other categories of less-legalism are there?
One personal belief I have is that we aren't going to get to less legalistic programming by applying legalism. Although, honestly, I'm not sure I want to use "legalism" here. I think that's actually a bit too specific. Lorraine Daston makes the distinction between Thick and Thin rules in her book "Rules: A Short History of What We Live By". (Podcast about it: newbooksnetwork.com/lorraine-daston)
Thick rules are those that assume exceptions. They are the rules of thumb, the rules seeking to guide behavior rather than define it. So @David Alan Hjelle, when you ask about less legalistic languages, my mind goes not just to languages, but to ecosystems. What ecosystems recognize exceptions as the norm? Which ecosystems tolerate fuzziness? One that comes to mind is Erlang/Elixir. The attitude that things will fail and we ought to make that first class feels like a step in the right direction.
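A tiny, hypothetical sketch of that thick/thin distinction in code (invented example, not Erlang): the thin rule defines the outcome exhaustively, while the thick rule guides the common case and routes the exceptions it expects to a human instead of rejecting.

```python
# Thin rule: exhaustive and final -- anything outside the boxes is rejected.
def thin_rule(application: dict) -> str:
    return "approve" if application["score"] >= 700 else "reject"

# Thick rule: guides the common case, and treats exceptions as the norm.
def thick_rule(application: dict) -> str:
    if application["score"] >= 700:
        return "approve"
    if application.get("notes"):  # context the score can't capture
        return "escalate to human review"
    return "reject"

app = {"score": 650, "notes": "income recently verified by employer"}
print(thin_rule(app))   # 'reject'
print(thick_rule(app))  # 'escalate to human review'
```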
What else? Local-first also feels to be on that spectrum. Allowing for the freedom of the user, for the reality that offline exists, that actions might not be in the order we assumed they were in.
I think this also extends to cultural elements. Ecosystems that believe an application of straightforward, thin rules results in all the qualities we would like will tend towards legalism. I'll let each person decide which groups they believe are doing that 🙂 But I think it is incredibly common in the programming world and something we need to address if we are to make progress, and if we want future of coding endeavors to take root. We must be willing to question best practices and make room for opinion and taste.