Ivan Reese 2023-07-03 16:42:44

Future of Coding • Episode 65

Laurence Diver • Interpreting the Rule(s) of Code: Performance, Performativity, and Production

𒂶 futureofcoding.org/episodes/065

The execution of code, by its very nature, creates the conditions of a “strong legalism” in which you must unquestioningly obey laws produced without your say, invisibly, with no chance for appeal. This is a wild idea; today’s essay is packed with them. In drawing parallels between law and computing, it gives us a new skepticism about software and the effect it has on the world. It’s also full of challenges and benchmarks and ideas for ways that code can be reimagined. The conclusion of the essay is flush with inspiration, and the references are stellar. So while it might not look it at first, this is one of the most powerful works of FoC we’ve read.

Eli Mellen 2023-07-03 16:50:19

w/no other context, my fingers are crossed for Jimmy Miller’s philosophy corner to be about Foucault.

Jimmy Miller 2023-07-03 16:51:07

Lol. I don't know enough about Foucault sadly. I'm more of an analytic philosophy person. But I definitely should read more Foucault.

Eli Mellen 2023-07-03 16:53:05

as is true with all philosophers of that vintage, Foucault can be problematic. Be warned. I found his various writings on power pretty interesting, and I think formative

Jason Morris 2023-07-03 21:45:25

That's WILD. Laurence and I have gotten into it on more than a couple of occasions online. Unsurprisingly, he has THOUGHTS about Rules as Code. But they have never struck me as "inspirational." Looking forward to this.

Jimmy Miller 2023-07-03 22:13:04

Jason Morris yeah I figured you might have some thoughts on this one :)

Jason Morris 2023-07-04 00:56:43

French has a standards body. 😉

Jason Morris 2023-07-04 01:59:38

The comparison of how code is interpreted and executed and how law is interpreted and executed has parallels. But what people are actually proposing to do has nothing to do with executing law programmatically. The system cannot be so easily replaced. Cryptomaximalists exist, but theirs is the only version of the problem he is describing that actually does exist. Nowhere else is the thing he is describing even possible, much less extant.

The idea that code is dangerous because it can be used to turn norms into laws is true, but only inside the context of non-legalist structures, which means the danger is mitigated. And it is not unique to code. Policies that are blindly followed do the same thing. Laws themselves are a hardening of policy intents in that way. Contracts are a hardening of agreements, too. To the extent that code is different from these other hardenings, it is a matter of degree: the risk of doing the wrong thing faster, or more automatically, because of a lack of context or of a human in the loop.

"Speed of execution of code prevents the possibility of reevaluating it's terms." That is it's virtue. It is not without a concomitant risk that we are doing the wrong thing, but faster. But doing the wrong thing faster is an inherent risk that is mitigated by basically all of software development. You cannot take the quality of laws we have now, and assume that they will be automated as-is. Not only is that a terrible idea that no one would support, (not even the cryptomaximalists think we should just automate contracts as they currently are) it is quite literally impossible to do. Natural language are libraries that get imported implicitly into every law and contract, and when you try to encode them, you discover you can't without requiring it because of the absence of a natural language library, among many many other issues.

The idea "you have no appeal" of code is true only if you live inside the computer. In reality, the software is owned and used by someone who can be sued. When someone stole a bunch of Bitcoin from an online contract, they were sued.

We are also taking the ex post necessity of the legal system and treating it as a virtue. The fact that you have to sue someone and ask a judge to interpret a contract is not a feature. Requiring things to go bad first is an efficiency method that we arrived at because most things go fine, most of the things that don't go fine get sorted out anyway, and it's only the remainder that actually requires attention from the system, which has limited resources.

Democracy and the rule of law do not arise by virtue of the availability of a referee in the absence of agreement. That is a pathetic, emaciated view of democracy and the rule of law. They arise from the 99% of cases that never require the attention of a lawyer or judge, too, and far more so.

The important distinction between how laws and code are executed is not that one is mechanised and the other is not. It is that one is fault tolerant and self healing while the other is not. Which is why we require so much less of our laws than we do of our code. But increasingly, civil society IS being automated, particularly inside governments. And the disconnect between what the laws are and what the machines need them to be is a huge issue. We cannot pretend that laws don't need to be automated. They plainly do.

Jimmy absolutely nails it. There is no absence of ambiguity with code; it is just intentional. And the interesting parallel is that intentional ambiguity in law is considered valuable, while accidental ambiguity is not.

Which gets to the idea that we are treating law as virtuous for requiring interpretation and having so much ambiguity, when that is not the case. Clarity and fairness are better than ambiguity and the right of appeal. Needing to appeal is a failure mode. If it were not possible, that would be awful. But that doesn't mean we should want more of it.

Code is different in degree from law, yes, but so is the way we use it. No one user-tests a law. No one debugs a law. The people who write laws are not tasked with their maintenance, or responsible for errors. Code is different, but that is only the tool. The processes in which that tool is used are also different, and designed to address exactly the risks he is describing.

He seems obsessed with what we can do to programming in order to fix the problems with putting laws into code. He seems oblivious of or indifferent to all the ways that we can improve laws by using the code we already have.

The question is not how do we avoid the strong legalistic effect of our code. The question is under what circumstances is that risk appropriately mitigated and worth adopting?

And this is where it gets down to brass tacks, for me. The legal system sucks. Most people who need help can't get it. Governments are overwhelmed and underfunded. And raising this spectre of strong legalism in code, while it has the intent of protecting people from harm, is actually being used by (among other parties) a protectionist legal profession to argue directly against one of the most helpful things we could do right now, which is automated legal harm reduction. An automated system can literally only be better than nothing, and can be justified on that basis, because nothing is what so many people actually have.

These arguments about the rule of law and democracy are being used to stifle efforts to get real people real help with real problems, because of effectively imaginary risks, or risks we already know how to mitigate.

And the horses are so far gone from the barn, they have forgotten it. Governments of sheer necessity are automating laws constantly. Those of us who call for rules as code are merely asking that they do it more consciously, and reusably, with tools designed for the task.

Does programming need to change to mitigate these risks effectively? A little. We need tools that are accessible to a much wider variety of people, that have a far smaller semantic gap between the natural language expression of the rule and the computer language expression of the rule, tools that are designed to facilitate human validation of those encodings, languages that are inherently explainable, with sophisticated reasoning, that cite their sources, that name the person whose legal interpretation was modeled, that are accessible, open source, and trustworthy. And those tools need to make it possible to test and validate anything else we might like to reduce the risk of. So we don't need to change all of programming, but we do need to add to it.
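
To give a flavour of what I mean, here is a toy Python sketch (entirely made up, not any real tool, and every name in it is invented for illustration) of a rule encoding that carries its own provenance and explains its own conclusions:

```
# A made-up sketch of a "rule with provenance": the encoding names its legal
# source and the person whose interpretation it models, and every conclusion
# it produces carries an explanation that cites both.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EncodedRule:
    source: str          # citation to the natural-language rule being modeled
    interpreter: str     # the person whose legal interpretation was encoded
    description: str     # plain-language statement of the encoded reading
    test: Callable[[dict], bool]

    def apply(self, facts: dict) -> dict:
        holds = self.test(facts)
        return {
            "conclusion": holds,
            "explanation": (
                f"{self.description} "
                f"(source: {self.source}; interpretation: {self.interpreter}; "
                f"facts considered: {facts})"
            ),
        }

# Hypothetical example rule, purely for illustration.
age_of_majority = EncodedRule(
    source="Example Act, s. 1(1)",
    interpreter="J. Morris (illustrative)",
    description="A person is an adult if they are 18 or older.",
    test=lambda facts: facts["age"] >= 18,
)

print(age_of_majority.apply({"age": 17})["explanation"])
```

The point isn't the Python; it's that the citation, the named interpretation, and the explanation travel with the encoding, so a human can validate it against the source.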

Eli Mellen 2023-07-04 02:25:33

it isn’t unusual to say that a program was “executed.”

likewise, orders and laws can be executed.

throughout listening to y’all’s discussion, and throughout reading the piece, and throughout reading the responses here, i’ve been trying to figure out how to articulate ~something~ about how power is enacted and preserved across laws and programming.

i don’t have a clear idea — but wanted to nudge the conversation in that direction by asking the question:

a state has the power to enforce laws, an individual doesn’t — not unless they’re backed by the state

does this relate to programming in some way?

Jason Morris 2023-07-04 08:08:37

I think there is more to contrast than to compare. Legal power is limited. There is only so much of it. Only one set of laws can be executing in one place at a time. So getting and keeping legal power is important. Code doesn't share that property. There is little to hold power over, because almost nothing is excludable, and almost nothing is mandatory, except by force of law. Sure, programming languages constrain their users. But you can just stop using them, so the constraint is voluntary. Sovereign citizens aside, you can't just opt out of law. Show me a situation where someone is trying to use software to collect and hold power over others, and I'll show you someone who is using a combination of software and law. A license, a contract, a patent, or something.

Lu Wilson 2023-07-04 16:16:54

Lovely episode!

Regarding the death of Atom, I have to say... it's been really nice making my own VS Code theme. It took a long time but it has made it feel more 'my own' (highly recommend putting in the time).

Listening makes me think so much of Dave Ackley's stuff! He's probably coming from a very different angle, but I think the whole 'robust-first' idea relates a lot. This video would be a fun one to explore. There's loads of food-for-thought ones on the channel though (whether you agree or not).

Also, this episode made me think of the RNA vaccine code! The level of redundancy involved contrasts quite heavily with how I code in my everyday life (for obvious reasons). But it's still code! It was coded with a very different set of values + goals.

Related to Ivan's address issues, I constantly have trouble with inputting my data on online forms. My gender marker + name is different in different places, which is the same story for many trans people in this country. eg: My health details have to sometimes match hormone levels, sometimes birth sex. There's no allowance for deviation from the 'programmed' rigid boxes that are decreed to be correct. My legal name on my ID + health details is different to my passport. This is what I'm "supposed to do" to go through the required legal hoops in this country (it's outside my control - I'd rather keep it simple). If I'm registering for something with a human, I can explain this to them, so my paperwork gets done correctly. But when I'm using an online form, there's no capacity for this. The rigidness has been hard-coded in! And it consistently means that I cause errors and bugs in old+new code systems. Makes me think of people who try to change their legal name to "null" to cause trouble :) Just a little anecdote for you all.

Lu Wilson 2023-07-04 16:17:42

But you can just stop using them

I don't know, I feel pretty stuck with the languages I use sometimes - for various reasons :)

Jason Morris 2023-07-04 18:28:46

Fair. What I mean is that if you did stop using them, no one with any state-sponsored monopoly over violent persuasion would have anything to say about it. Which is admittedly a very low bar.

David Alan Hjelle 2023-07-05 13:29:20

I haven't finished the podcast yet, nor followed this whole discussion, and I did share this article previously, but it connects here pretty well. My take-away from the article is that software, so far, has not had the expected impact on overall productivity, and that the challenge is that it is hard and expensive to model the real world within the constraints of programming.

The article: web.archive.org/web/20221206161753/https://austinvernon.eth.link/blog/softwareisprocess.html

Seems not unlike the legalism discussed in the podcast, but I'll keep listening. 😄

(edit: fixed links)

Jimmy Miller 2023-07-05 16:06:08

Jason Morris

Thank you for the detailed feedback! I think you've made some excellent points here. And maybe as an outsider I'm conflating some things you see as related. Personally I see your project of encoding laws in a computationally understandable way as a bit of a different concern than what we were talking about in this essay. I mean they are definitely related, but nothing we talked about was meant as an argument against that project. Having worked to encode complicated medical rules in a rete based rules engine, I definitely appreciate the difficulty involved, but also the benefits it can bring.

To me the most interesting part of this essay is focusing on exactly what you are talking about below:

Show me a situation where someone is trying to use software to collect and hold power over others, and I'll show you someone who is using a combination of software and law. A license, a contract, a patent, or something.

As software engineers, we often make systems that rely on the backdrop of the law to enforce things. But the way in which we enforce our side of those terms can be incredibly legalistic and harmful to users.

For example, recently a number of youtubers big and small have had their ability to monetize completely removed because of "suspicious traffic". Basically they have been accused of ad fraud. From what I've seen, even the most connected have had a difficult time solving this issue.

Here I see a classic case of the kinds of confusions we as software engineers make. What we are interested in is ad fraud (in this case). We want to stop users who are creating fake bot traffic from benefitting from it. But what we actually have access to in our software systems is not whether or not someone committed ad fraud. We have numbers and correlations. But we use these as if we are getting at truth. We build systems for which there is no recourse.
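
To caricature the mistake in code, here's a tiny made-up Python sketch (all the numbers, names, and thresholds are invented, and it's obviously nothing like YouTube's actual system): the proxy score becomes the verdict, and there is no appeal path anywhere in the logic.

```
# A caricature of the conflation: "ad fraud" is what we care about, but all the
# system can see is a correlation score. The code below treats the proxy as the
# verdict and attaches no recourse to the decision.
SUSPICION_THRESHOLD = 0.8  # arbitrary, for illustration

def traffic_suspicion_score(channel_stats: dict) -> float:
    # Stand-in for whatever correlations the real system computes.
    bots_per_view = channel_stats["suspected_bot_views"] / channel_stats["views"]
    return min(1.0, bots_per_view * 10)

def demonetize_if_suspicious(channel_stats: dict) -> str:
    score = traffic_suspicion_score(channel_stats)
    if score >= SUSPICION_THRESHOLD:
        # Note what's missing here: no human review, no reasons given to the
        # creator, no appeal hook. The score *is* the finding of fraud.
        return "demonetized"
    return "ok"

print(demonetize_if_suspicious({"views": 1000, "suspected_bot_views": 90}))
```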

I did want to pull out a few things you said below, but if I missed something you'd want me to comment on, happy to.

But what people are actually proposing to do has nothing to do with executing law programmatically.

Yeah, I don't think he is claiming that. But if we were unclear on that point, that's our bad.

The idea that code is dangerous because it can be used to turn norms into laws is true, but only inside the context of non-legalist structures, which means the danger is mitigated. And it is not unique to code.

My personal concern is that code's legalism leaks into the way we think about systems. Code's legalism is seen as a virtue to be emulated. Ambiguity (even intentional) and context-sensitivity are seen as bad. What we are trying to achieve and the measurement of that achievement get conflated. (OKRs are a terrible idea)

"Speed of execution of code prevents the possibility of reevaluating it's terms." That is it's virtue. It is not without a concomitant risk that we are doing the wrong thing, but faster. But doing the wrong thing faster is an inherent risk that is mitigated by basically all of software development. You cannot take the quality of laws we have now, and assume that they will be automated as-is.

I mean, we do that in some ways. Look at how the DMCA processes our YouTube content. The copyright strikes are automated, the demonetization is automated, many times even the appeals are automated. Obviously no one thinks we are going to take all our laws and automate them. But it's hard to see how not being able to reevaluate the terms is a virtue. Getting the terms right is the hardest part of software, and I don't know any system that gets those terms right from the outset.

We are also taking the ex post necessity of the legal system and treating it as a virtue. The fact that you have to sue someone and ask a judge to interpret a contract is not a feature.

Yeah, I don't think anyone thinks the fact that you have to sue someone is a virtue. What is a virtue is that you have the freedom to do actions that you believe are or should be lawful, and if you are arrested/fined you have the ability to appeal that decision. Contrast this with "cursing" in Club Penguin, for example. You are immediately booted, you have no recourse. (I don't actually know if there was/is an appeal process in Club Penguin.)

We cannot pretend that laws don't need to be automated. They plainly do.

Yeah, I agree. I see that as the point of this paper. How can we automate things in a good way? What changes can we make to make sure our automations don't have legalistic problems?

And raising this spectre of strong legalism in code, while it has the intent of protecting people from harm, is actually being used by (among other parties) a protectionist legal profession to argue directly against one of the most helpful things we could do right now, which is automated legal harm reduction. An automated system can literally only be better than nothing, and can be justified on that basis, because nothing is what so many people actually have.

I can definitely see how this would happen. And I can see how it would be frustrating from the position you are in. For what it's worth, I don't see Diver doing this, but instead proposing ways in which we can build these systems well. That's one of the things I like about his work: it isn't an argument against using code, it is a discussion about how to do it well.

Sure, programming languages constrain their users. But you can just stop using them, so the constraint is voluntary.

Fair. What I mean is that if you did stop using them, no one with any state-sponsored monopoly over violent persuasion would have anything to say about it. Which is admittedly a very low bar.

Yeah, but other people using software that you didn't explicitly decide to use can still ruin people's lives without a "state-sponsored monopoly over violent persuasion". Imagine that the company I talked about, which screens applications using machine learning, is used by all the fast-food restaurants in your area. Imagine these are the jobs you are qualified for, but the ML model has decided you will quit the job too early. Of course, you can go try to find a job elsewhere. Of course, a similar situation could happen due to human bias. But there is something very unsettling about this version of the future. The ML model can't be convinced, it can't provide reasons. It isn't a rational process whatsoever.

I think these are real harms we ought to pay attention to. I think saying people can simply not use software they don't like is just like saying people can move if they don't like their local laws. Both statements are generally true, but not helpful for many people.

We need tools that are accessible to a much wider variety of people, that have a far smaller semantic gap between the natural language expression of the rule and the computer language expression of the rule, tools that are designed to facilitate human validation of those encodings, languages that are inherently explainable, with sophisticated reasoning, that cite their sources, that name the person whose legal interpretation was modeled, that are accessible, open source, and trustworthy. And those tools need to make it possible to test and validate anything else we might like to reduce the risk of. So we don't need to change all of programming, but we do need to add to it.

This sounds super interesting. If you have any papers that argue against what this paper argued for and give what you see as the alternative perspective, I'm super interested in that. No promises we will do it on the podcast, but definitely interested. Ideally a paper a bit less about the technical details of how to do these things with code, and more about arguing for their application.

In general, I think what you've said here doesn't feel too much at odds with what I think we were trying to explore. I do think software has a role to play. I do think we need to automate things. I do totally get how these sorts of arguments might be used against projects like yours and that must suck. I don't think that's the aim of the argument here. You are definitely right that there is no discussion of how to use code to improve laws. I'd love to explore that further and am super happy there are people like you working on that. If you can help point us in that direction for some readings, I'd love to take a look :)

Jason Morris 2023-07-05 21:46:32

For clarity, you guys did great, it's the paper I'm giving feedback on. And I'm admittedly biased.

I would mind less if he were responding to the automation of law in software. But he claims to be responding to "Rules as Code." "Rules as Code" is not "software" is not "automation". He sees Rules as Code as "let's automate our laws more with software", when in fact it is "let's automate our laws better with better software", and it is aimed precisely at many of the evils he is warning against.

Code would make for terrible law. But "Rules as Code" doesn't call for that. What we should want is better laws and better code, and rules as code is a way to get both.

YouTube can automate unfairly. But is that a result of making code law? No. It is an automation of the Terms of Service. If it is unfair, but within the terms of the contract you agreed to with YouTube, then it is an automation of an unfair legal rule. Or, if it is an unfair automation of a fair contract, you can sue under the contract.

Code is dangerous when it impacts people negatively. That is not to do with law. The fact that you can contemplate code as a mini legalist dictatorship inside the machine is cute, I guess, but not helpful. If you come to believe encoding laws is inherently unavoidably negative, you have been lied to. If you don't, no other prescriptions logically arise from the analogy, and all the real solutions to the problem arise without it.

I'm not aware of any papers arguing in this direction other than parts of my LLM thesis, and that is not a great paper. I have a small website where I post thoughts along these lines, in the hope that there might eventually be enough of them to form a collection worth reading. Happy to pass along those links if you are interested.

Jason Morris 2023-07-05 21:49:57

The reason that his advice is weak, is because he has precluded an actual solution to the problem by conflating it with the problem itself. Rules as Code should have been the prescription, not the problem.

David Alan Hjelle 2023-07-05 23:01:30

Now that I've finished with the episode — I really enjoyed it!

I'm curious what examples — both specific implementations and of categories — of less-legalistic languages people are aware of?

In terms of syntax, LISP, Smalltalk, and Forth seem to be on the minimal-syntax-so-you-build-your-own-language train, at least to some degree.

In terms of exposing the innards of the program (like Jimmy's Black example — do you have a link?), HyperCard, Smalltalk, and…I guess the web, at least the early web, did this. Perhaps not all the way down, but a lot more than most.

What other categories of less-legalism are there?

Jimmy Miller 2023-07-07 17:25:02

One personal belief I have is that we aren't going to get to a less legalistic programming by applying legalism. Although, honestly, I'm not sure I want to use legalism here. I think that's actually a bit too specific. Lorraine Daston makes the distinction between Thick and Thin rules in her book "Rules: A Short History of What We Live By". (Podcast about it newbooksnetwork.com/lorraine-daston)

Thick rules are those that assume exceptions. They are the rules of thumb. The rules seeking to guide behavior rather than define it. So @David Alan Hjelle when you ask about less legalistic languages, my mind goes not just to languages, but to ecosystems. What ecosystems recognize exceptions as the norm? Which ecosystems tolerate fuzziness? One that comes to mind is Erlang/Elixir. The attitude that things will fail and we ought to make this first-class feels to be in the right direction.
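
To make the thick/thin distinction concrete, here's a toy Python sketch (made-up names and numbers, not from Daston or from any real system):

```
# Toy contrast between a "thin" rule and a "thick" one. The thin rule just
# applies its condition. The thick rule treats exceptions as expected: it can
# defer to a human and records that deferral instead of forcing a verdict.
def thin_rule(request: dict) -> str:
    return "approved" if request["score"] >= 700 else "denied"

def thick_rule(request: dict) -> str:
    if request["score"] >= 700:
        return "approved"
    if request.get("extenuating_circumstances"):
        # The rule of thumb doesn't fit; route to a person rather than decide.
        return "referred to a human reviewer"
    return "denied (with reasons, and a path to appeal)"

print(thin_rule({"score": 650}))
print(thick_rule({"score": 650, "extenuating_circumstances": "recent job loss"}))
```

The thin rule forces a verdict from its condition alone; the thick rule expects exceptions and builds the hand-off to a human into the rule itself.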

What else? Local-first also feels to be on that spectrum. Allowing for the freedom of the user, for the reality that offline exists, that actions might not be in the order we assumed they were in.

I think this also extends to cultural elements. Ecosystems that believe an application of straightforward, thin rules results in all the qualities we would like will tend towards legalism. I'll let each person decide which groups they believe are doing that 🙂 But I think it is incredibly common in the programming world and something we need to address if we are to make progress, and if we want future of coding endeavors to take root. We must be willing to question best practices and make room for opinion and taste.

Duncan Cragg 2023-07-04 14:39:03

Hiya - I've got a new article in my Object Network "Lab Notes" Substack:

duncancragg.substack.com/p/a-3d-operating-system-without-apps

It would be great if you found it a stimulating read and dropped in some comments on the page!

Duncan Cragg 2023-07-04 14:40:18

... or subscribed of course!

Konrad Hinsen 2023-07-05 07:45:00

You say that there are no apps in the real world. Not sure I agree: I see apps as virtual tools, but tools that have grown too large and constrain their users rather than empowering them. There's a story by Cory Doctorow, about toasters that will accept only "authorized bread" (defectivebydesign.org/blog/doctorows_novella_unauthorized_bread_explains_why_we_have_fight_drm_today_avoid_grim_future). That's what apps would be like in the real world.

Irvin Hwang 2023-07-06 16:11:46

In your example do you see things like the calendar and the to-do list as being governed by separate programs/pieces of code and using physics simulation as a sort of implicit API through which things interact with each other?

It kind of makes me think of this quote from The Matrix:

But… look- see those birds? At some point a program was written to govern them. A program was written to watch over the trees, and the wind, the sunrise, and sunset. There are programs running all over the place. The ones doing their job, doing what they were meant to do, are invisible. You’d never even know they were here.

Duncan Cragg 2023-07-06 19:48:18

Konrad Hinsen: That's a really good point, and I completely agree about "grown too large and constrain their users rather than empowering them", but it's way worse than that excellent toaster example: what I'm describing is a virtual world where, if you tried to remove the toast to eat it, it would turn to cardboard! You could only eat it by placing your face up to the toaster's Consumption Panel. Taking it further, and back to the Canonical Tool - the hammer - and the Canonical Stuff - the nail: now, not only does the nail only allow itself to be hammered in by its very specific hammer, but the hammer has to stay in the room, or all the nails turn to cardboard, and your furniture collapses.

In this app-trap-tool world, only the specific tool gives you access to the "stuff" and animates the properties and behaviour of that stuff, in all contexts. Some stuff can sometimes actually be animated by multiple app-tools, but you still have the same dependency, the same trap replicated, and random stuff can't mix with random other stuff. Outside of their tool, stuff turns to useless "cardboard" - also known as inert "files" or "export dumps" in the app-based world of computers.

In our physical reality we know that tools are made by tools, and are themselves "stuff" while that happens; that non-tool stuff can be repurposed as tools, and so on. There's no significant difference in the real world, except sometimes in our minds, between "tool" and "stuff". We can hammer in a nail with a rock, then use a wheelbarrow to carry a load of rocks that the wheelbarrow-maker never even imagined. We can pull out the nail at any time in the future and use it as a tool to dig out some dirt from a fence panel. Everything is remashable in any way that works.

Duncan Cragg 2023-07-06 20:01:44

@Irvin Hwang Thanks for the Matrix quote - yes, that's a good description of what I mean by "internally animated". You can have animation code written in C or Python if you like, or something like spreadsheet formulae, or Realtalk. Objects don't always or only interact through physics-based simulation, although that's a very likely medium; they interact simply through the "API" of mutual current-state awareness, which is basically how I believe reality works. It doesn't need to be 3D or physics state, but can be 2D current state too.

So if you attach a to-do to a calendar in 3D, there's a physical proximity and possibly a physics interaction, which is kinda lower level, and involves structural links between them, in the scenegraph. But you can also have what I call "semantic links", which go up the abstraction levels. So the to-do may know what it means when attached to a calendar event - e.g., it now knows when its job is due to be finished, etc.
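
A very rough sketch of what I mean by a semantic link, written in ordinary Python purely for illustration (the real thing wouldn't look like this, and all the names are invented):

```
# Rough illustration of "mutual current-state awareness": the to-do doesn't call
# an API on the calendar event; it just reads the event's current state and
# derives its own meaning (here, a due date) from the attachment.
class CalendarEvent:
    def __init__(self, title, date):
        self.state = {"is": "event", "title": title, "date": date}

class ToDo:
    def __init__(self, task):
        self.state = {"is": "to-do", "task": task, "attached_to": None}

    def attach(self, other):
        self.state["attached_to"] = other

    def due(self):
        target = self.state["attached_to"]
        # Semantic link: if the thing I'm attached to looks like an event,
        # its date becomes my deadline. No shared class hierarchy required.
        if target and target.state.get("is") == "event":
            return target.state["date"]
        return None

standup = CalendarEvent("Team standup", "2023-07-10")
todo = ToDo("Prepare demo")
todo.attach(standup)
print(todo.due())  # -> "2023-07-10"
```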

Kartik Agaram 2023-07-07 06:36:50

A 20-minute video on my lived experience with Freewheeling Apps

youtu.be/aD6vmbmzdBo

Eli Mellen 2023-07-07 14:56:26

you cool with folks sharing this?

Kartik Agaram 2023-07-07 15:03:34

Yes, thanks for asking.

Gregg Irwin 2023-07-07 17:38:23

100% agree on making things simpler. The spatial aspect, being new, might benefit from an affordance for transitioning. e.g. a list or search of names which highlights or jumps to the box where it lives. It also makes me wonder what a diff would look like, for comparisons. Could be very cool.

Thanks for sharing!

Kartik Agaram 2023-07-07 17:41:08

Thanks! There is in fact a way to look up names that filters the list as you type. I showed it at one point, but it goes by quickly. I should have dwelt on it more, rather than groping around to find specific nodes.

Diff is something I hadn't at all considered so far!

Kartik Agaram 2023-07-07 18:10:24

One question I have is about the audio quality. Does it seem ok to people? I'm famously terrible at discerning noise, etc.

Eli Mellen 2023-07-07 18:11:22

yeah! I think the audio quality is really good — during a lot of this sort of demo I find that folks speak a bit faster than is easy to follow along with the on-screen portion. I thought you did a really rock-solid job of keeping things well timed

Gregg Irwin 2023-07-07 18:12:06

No trouble with the audio quality here.

greg kavanagh 2023-07-07 17:38:46

I’ve been working on a DSL for creating mixed media pages. Markdown meets YAML and we’re about to hire some real designers to make it look normal… however if anyone would like to play with it while it’s weird and wonky you can check it out at author.quickpoint.me

Gregg Irwin 2023-07-07 17:41:58

Hypercard lives? :^)

greg kavanagh 2023-07-07 17:50:19

You might say that. :)

greg kavanagh 2023-07-07 18:10:23

Any feedback is appreciated

Gregg Irwin 2023-07-07 18:11:35

I'll try to make time to play this weekend.

Tyler Adams 2023-07-09 17:53:55

I'm restarting my programming productivity blog, here's the latest on CLI tool design codefaster.substack.com/p/how-to-design-a-cli

📝 How to design a CLI

Vercel’s got a great system for deploying serverless JS web apps. It Just Works(TM). Vercel’s CLI, however…is a learning example. We’ll go over some of their interesting and noteworthy design choices, and then describe what other options we could take.