Polly 2024-07-05 18:10:29

I've been kinda obsessed with this paper, "Weird Machines, Exploitability, and Provable Unexploitability" since October last year.

ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8226852

youtube.com/watch?v=1ynkWcfiwOk

(I think starting with the talk is a good idea)

After hearing about robust-first computing I've been pondering the connection for a bit. I'm not really sure what to make of it, but there has to be some connection. In this talk by Dave Ackley he specifically mentions that "flipping one bit" (paraphrasing) can be enough to destroy a lot of software, which is exactly the attack model that Dullien uses. It would be nice if software could maintain security invariants despite memory corruption (which Dullien proves for a small piece of software that he writes), so in that sense the two concepts align. But I find that "continuing despite an error" or "returning a best-effort answer" is the source of many security vulnerabilities (a mild example, sketched below: a database lookup fails, the code forgot to handle that case, it continues anyway and returns admin data that shouldn't be shown). I wonder if any of y'all have read this or have thoughts about it. :))
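To make that concrete, here's a tiny made-up Python sketch (all names hypothetical) of the fail-open pattern I mean:

```python
# Hypothetical sketch: an error path that "continues anyway" becomes a leak.
def can_view_admin_data(db, user_id):
    try:
        role = db.get_role(user_id)   # lookup can fail (timeout, missing row, ...)
    except LookupError:
        role = None                   # best effort: swallow the error and carry on
    # Forgot that role can be None here, so a failed lookup passes the check
    # just like a privileged user would.
    return role != "guest"            # None != "guest" -> True: admin data shown

# A fail-closed version refuses to guess:
def can_view_admin_data_safe(db, user_id):
    role = db.get_role(user_id)       # let the failure propagate to the caller...
    return role == "admin"            # ...and only allow on a positive answer
```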

Kartik Agaram 2024-07-05 23:26:15

I'm still thinking about this one, but want to share something I wrote a few years ago that starts from a similar framing: akkartik.name/post/modularity

OP is much cleaner. What I refer to fuzzily as "territories of programs", OP makes precise as the distinction between the intended finite state machine and its emulation on a (fixed-memory, so you end up padding a bunch of don't-care bits in the address space) real computer.
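A toy version of that distinction, my own example rather than anything from the paper:

```python
# Three intended states stored in a whole byte: the programmer's FSM has 3
# states, the machine representation has 256, and the other 253 patterns are
# the "don't-care" padding a single bit flip can land in.
LOCKED, UNLOCKING, UNLOCKED = 0, 1, 2

def step(state: int, event: str) -> int:
    if state == LOCKED and event == "pin_ok":
        return UNLOCKING
    if state == UNLOCKING and event == "confirm":
        return UNLOCKED
    return state                      # anything else: stay where we are

corrupted = LOCKED ^ 0b100            # one flipped bit -> state 4, never defined
print(step(corrupted, "confirm"))     # 4: the code carries the weird state along
```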

Polly 2024-07-06 08:33:58

Ah, I see the connection! And I see how tests are a good aid for this. My take on tests is that everyone who cares about their software being correct should write them, because I don't believe anyone can make and maintain correct software, so tests are useful for showing how and where you fail. But they don't guarantee correctness under typical use, and definitely not security (read: correctness) under adversarial conditions. And they don't give robustness either, though I'm not quite sure what robustness really is; I just think my software should have it.

OP is much cleaner

More formal, at least; your writing is very clear.

I feel like there's a tension between formal correctness and robustness. The whole "correctness and efficiency only" goal of current software methods means we don't think about what happens after faults or memory corruption. I think I'd like correctness despite faults, which seems different from what robust-first computing does? I'm not sure.

Noel 2024-07-07 14:02:14

All I know about robust computing comes from the Future of Coding episode on it, so this response is probably steeped in ignorance. One thing not mentioned in the episode is probabilistic algorithms. The usual setup is to assume a deterministic computer, and then introduce imperfect information (often via a hash function) giving a small probability of error but achieving a large reduction in time or space requirements. Perhaps the most used algorithm is HyperLogLog, but I think MinHash is particularly easy to understand. This isn't the same as assuming any part of the algorithm can fail, but it might provide a pathway to that model.
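For instance, a minimal MinHash sketch in Python (my own illustration; the hash choice and signature length are arbitrary):

```python
import hashlib

def minhash_signature(items, num_hashes=128):
    """For each seeded hash function, keep the minimum hash value over the set."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha256(f"{seed}:{x}".encode()).digest()[:8], "big")
            for x in items))
    return sig

def estimate_jaccard(sig_a, sig_b):
    """Fraction of hash functions whose minima agree: an estimate of the
    Jaccard similarity of the underlying sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = {"the", "quick", "brown", "fox", "jumps"}
b = {"the", "quick", "brown", "cat", "sleeps"}
print(estimate_jaccard(minhash_signature(a), minhash_signature(b)))
# true Jaccard is 3/7 ≈ 0.43; the estimate will be close, with some noise
```

The error here is bounded and by design (shrink the signature and the noise grows), which is a different thing from the machine itself failing, but it shows algorithms that stay useful while giving up exactness.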