Declan 2025-04-29 14:06:43

Pure computation is pure all the way down. An advantage of this is the ability to verify a result by just putting the inputs in again. This is useful in the real world when you share a result with someone: for example, when a bank tells a customer how much their mortgage repayment costs.

It works not just for one result of interest, but for every other result that one depends on: in other words, for the complete workings. So by sharing the pure computation code and its inputs along with a number, the number is verifiable and you've also shared the complete workings (for free).
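A minimal sketch of this verify-by-re-running idea, using a standard annuity formula (the function name and figures here are illustrative, not taken from calculang):

```python
def monthly_repayment(principal, annual_rate, years):
    """Pure function: the same inputs always give the same output."""
    r = annual_rate / 12                 # monthly interest rate
    n = years * 12                       # number of payments
    return principal * r / (1 - (1 + r) ** -n)

# The bank shares (code, inputs, result); the customer re-runs it.
inputs = (250_000, 0.045, 30)
result = monthly_repayment(*inputs)

# Verification is just re-evaluation: no trust in the number required.
assert monthly_repayment(*inputs) == result
```

Because the function is pure, the shared code plus inputs *are* the complete workings: every intermediate quantity (monthly rate, payment count) can be recomputed on demand.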

I exploit this as much as I can in calculang, including while developing models, with reactive visualizations showing me current model behavior for some inputs (with controls; all experimental at this stage).

It surprises me that, when it comes to developer tools, functional programming is not in a league of its own, out in front. There are some application-state developer tools that FP techniques enable, and some are influential. But I don't know of any interesting introspection- or validation-type tools that especially exploit purity. Anyone know if I'm missing something in particular, or have any good references to read?

I might consider a POC exploration on just this idea for some other language (maybe Haskell or PureScript but open to thoughts)

Kartik Agaram 2025-04-30 00:53:36

I finally got around to Dave Ackley's latest video report on robust-first computing which Andreas S shared 💬 #linking-together@2025-04-28, and it prompted me to zoom out and think about the territory of computation and what we know of it so far in the year 2025. Right now I imagine it as a 3D terrain. Along one axis, analogous to a plan view, I see the following coarse technical approaches to structuring computation, each equally valid:

  • Computation as the orchestration of precise instructions.
  • Computation as the learning of matrix weights.
  • Computation as the orchestration of fuzzy, imprecise cellular automata. Ackley's approach.

I think that's it? Are there others?

Along an orthogonal axis, analogous to an elevation view, I see social approaches to organizing the means of computation. So far we only have open questions here:

  • Does computational infrastructure necessarily require authoritarian dictators or at best feudal lords and vassals? Or is it possible to have something analogous to a democratic approach?
  • Can we reduce inequality between the haves and have-nots of tech knowledge and computer whispering?
  • Can we design incentives to keep computation working over time, in a secure and trustworthy way? (Can computation ever be biased less towards offense, can defense be viable?)
  • Can we design incentives to make the means of computation sustainable in their impact on the environment?

And along a second orthogonal axis, analogous to a side view, I imagine ways to connect up computation with other fields of human endeavor. Here there has been much progress, though I am running out of steam:

  • Learning from the arts to improve visual and auditory design, e.g. typography.
  • Learning from math to better model the world, e.g. numerical methods.
  • Learning from the social sciences to nudge groups of people in productive and unproductive directions. Coevolving populations with these lessons that will inevitably grow robust to such nudging.
  • ...?

Feel free to point out gaps, additional axes, add examples..

Marek Rogalski 2025-04-30 06:19:41

In the end, computers tend to end up executing precise instructions anyway. The first approach that you listed is the bedrock abstraction that the second and third approaches are built on top of. In principle it's possible to build machines that would implement other approaches natively, but doing so is rarely practical (1-, 2- and 3-d cellular automata are some of the exceptions).

I think the question that you asked ("are there others") is the main reason many of us are here 🙂 From the back of my memory, there is:

  • quantum computing (another case where building a native computer is essential)
  • spiking neural networks
  • expert systems, also fuzzy ones
  • fpga-style logic networks
  • a few cryptographic approaches (ethereum / unison / white-box cryptography / zero-knowledge proofs)
Tom Larkworthy 2025-04-30 06:28:07

There is a lot of passive mechanical computation in existence.

For an underwater robot we added floats to the bottom of our rotating wifi antenna so the antenna went vertical when on the water surface (maximizing range) but folded flat when underwater (reduce drag for locomotion).

This is a genuine computational device relying on a non-linearity and "programmed" by positioning hinges and orientated w.r.t. gravity and the environment to get a designed behaviour.

A toilet flush is quite a complicated but common one. I consider these closed-loop control systems, but the feedback computation is done without silicon, which is often much more robust, albeit limited in expressivity. They don't require electricity to run; in these two examples they are powered by kinetic energy (and they lose energy via heat losses, like everything powered).

Konrad Hinsen 2025-04-30 14:57:41

For the first axis, there are probably many candidates, depending on how "mainstream" we require them to be. Quantum computing is an obvious candidate, as @Marek Rogalski pointed out. There are also more exotic ideas, such as the Chemical Abstract Machine. On the other hand, I'd remove "learning of matrix weights" from the list as long as it is implemented strictly in terms of the preceding technique in the list. Just to punish laziness.

Konrad Hinsen 2025-04-30 15:00:47

We could also admit human crowds as computational agents, and then decide that markets are a form of computation. Perhaps the fundamental question is what defines computation within a wider framing, e.g. "information processing".

Kartik Agaram 2025-04-30 15:18:45

I don't care too much that one metaphor is built in terms of another, or which kind is most fundamental. That kind of reductionism seems unimportant here.

Konrad Hinsen 2025-05-01 06:43:52

I agree. But if you want to map out the space of computation, you have to say somewhere what you mean by computation, at least roughly.

I have witnessed two heated debates, between academics of different disciplines, on the question "is the brain a computer?" In both cases, it turned out after some rounds of shouting at each other that each participant had a different view of what "computer" means. At one extreme, it was "a physical system capable of processing information"; at the other extreme, it was a deterministic system with a clear separation of hardware and software layers.

Kartik Agaram 2025-05-01 06:58:28

Yeah. Perhaps it would help to clarify that I'm thinking in this thread about the experience of getting some desired behavior. What does the inner experience of "programming" feel like? So the technical part of my OP was: you can program by engaging with syntax and semantics, or by running tons of data through an algorithm to tweak weights, or by whatever mechanism Dave Ackley is using that I don't understand but that definitely seems much more fault tolerant than syntax and semantics.

Expert systems and relevance work do feel like first-class different experiences. Maybe related to neural networks? In all 3 the feedback loop seems definitely dilated. There's no equivalent of adding a semi-colon in response to an error message..

I don't know enough about quantum computers to know what the activity of programming them feels like. So open question. Is it just libraries that run on a coprocessor with weird time complexity properties, and some barriers to adjusting function boundaries (to avoid observing at the wrong time)? Or is there more to the experience?

FPGAs don't immediately feel different in the same way. My rough sense is you compile the same conventional languages to commands that configure logic gates. It's akin to compiling down to Verilog. Though there's some weirdness in terms of seemingly simple things in software that explode to take up tons of gates. That might make programming them feel different.

Similarly, DAOs on blockchains seem like regular programming, just less flexible? Once it goes on the ledger it's hard/impossible to change.. They're probably slower, but we also have lots of experience with slow languages 🙂 ZK proofs do feel fundamentally different.

I have some sympathy for the idea that analog computation and reversible computation feel like their own experience. But I don't know enough to be sure.

I also have some sympathy for the idea that programming a game feels very different than other kinds of programming even if you're using similar languages and tools. There's a level of mechanism design and tuning for playability that's gestalt rather than detail-oriented in a way that I've never quite been able to grok. And that barrier is always an indication it's a whole different activity.

Tom Larkworthy 2025-05-01 07:41:15

OK more on target then: programming an LLM system is quite a different beast (build benchmarks, iterate on prompt, think about context, lots of second guessing the LLMs thought process)

Kartik Agaram 2025-05-01 08:02:54

Programming with an LLM seems a little bit like being a manager. Which gets to Konrad Hinsen's idea of other people as a substrate to delegate work to. Arguably the oldest programming of all.

Tom Larkworthy 2025-05-01 10:34:39

Using an LLM (1) is different to building a system around LLMs (2). 1. is managerial. 2. is herding cats and involves a lot of probabilistic reasoning.

Kartik Agaram 2025-05-01 14:55:57

Is 2 what you meant in your previous comment? I don't quite understand this kind of system building. Say more?

Tom Larkworthy 2025-05-01 15:35:08

Yeah, building a system that uses an LLM internally ends up more like applied science. In normal programming you get deterministic results. When you embed an LLM you get this thing that kinda does what you want some of the time, so you have to deal with the rate of failure in a methodical way, and you tweak things to reduce the rate of failure. So it's quite a different way of programming.

This is different from using an LLM, because in that situation you have a goal and you are both coordinating to achieve that goal, so some of the work gets done by you (e.g. code assistance). If you are giving an LLM-powered product to lots of people, you have more unknowns and you can't apply test-driven development or anything like that. So it's different. It's still development, but it's quite different from distributed systems, where the rate of failure is relatively low and there are clear reasons why the system behaves the way it did. With LLMs you are programming where every function call is a roll of the dice modified by free text. The methodical way of taming that is evals at scale, which doesn't fit in a CI process, costs money, etc.
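A toy sketch of what "evals at scale" can look like. Everything here is a hypothetical stand-in: `call_llm` simulates a model's non-determinism with a random failure, and the benchmark is a single repeated toy case.

```python
import random

def call_llm(prompt):
    # Placeholder: a real system would call a model API here. We
    # simulate the non-determinism with a 10% random failure.
    return "4" if random.random() < 0.9 else "5"

benchmark = [("What is 2 + 2?", "4")] * 100   # toy eval set

def failure_rate(cases):
    """Run every case and report the fraction that failed."""
    failures = sum(1 for prompt, expected in cases
                   if call_llm(prompt) != expected)
    return failures / len(cases)

rate = failure_rate(benchmark)
# You don't assert rate == 0 as in a unit test; you track the rate
# and tweak prompts/context to push it down over time.
print(f"failure rate: {rate:.0%}")
```

The design point is the one Tom makes: the assertion of traditional testing is replaced by a measured rate you manage, which is why this doesn't fit a pass/fail CI process.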

Konrad Hinsen 2025-05-02 06:03:01

Kartik Agaram Focusing on the experience makes this a much clearer exercise!

For quantum computing, the main experience is the paperwork of getting access to one. Once you do, it feels much like batch processing in the past: you submit your program via some front-end (a standard computer), and wait for the result. The program has syntax much like a standard program, but its semantics are wildly different.

Analog computing is more interesting. It feels much more like doing an experiment in a lab than like running software.

Josh Bleecher Snyder 2025-04-30 16:29:03

Naive question (apologies in advance): What does convention vs configuration mean in a visual programming language? Are there interesting examples of this playing out in practice?

Arvind Thyagarajan 2025-04-30 17:06:33

If I understand the question correctly...

I see it as a tradeoff between expressive power in the visual medium and reducing cognitive load to accelerate the production of "known patterns" of run-of-the-mill software products.

But I don't think that's the only reading. At the granular level I think some conventions in some visual programming paradigms flow intuitively from humans working with materials and do result in the medium feeling ready-to-hand (say, dragging cards around on a 2D canvas, or inputs grouped and separated from outputs)?

Josh Bleecher Snyder 2025-04-30 17:22:18

I guess if one were to lean hard into the distinction as used in textual programming, conventions (as opposed to configurations) are stronger than that--it's not that inputs and outputs are usually grouped for clarity, but that what makes them inputs and outputs are the fact that they are grouped in a certain way.

Spencer Fleming 2025-04-30 23:47:59

This reminds me of an old USENIX talk that I found very inspiring, on Eidetic Systems, aka recording every input into the OS, write only, forever

usenix.org/conference/osdi14/technical-sessions/presentation/devecsery

[March 13th, 2025 8:04 PM] guyren: I think economic incentives in the development of technology have strongly favoured big business. We want to write large apps with lots of users that can run efficiently on AWS.

But computing is staggeringly cheap. If we are willing to entertain “inefficiency”, we can make small business and individual user software in very different ways to what we do now.

This and other aspects of the economics of all this lead me to believe that our default when storing data is that it is a write-only store. The “current” version of a row is the one with the latest timestamp.

It is easy enough to roll event sourcing into this. We already "store" incoming requests — in the stupid text log file, if nowhere else. If instead we store full, structured inputs to each request coming into a system in a database table (because we're all about relations), then we arrive at: the results of an input to a system are the results of triggers on those inputs.

Step back, and consider the larger picture: every state the system was ever in can be reviewed. Every input to the system is recorded, and every state transition.

Now, we circle back to small business software, and to my other bugbear: we don’t make software for non-developers to solve their own problems.

But if you put a FileMaker-like interface in front of that write-only store, and you think about augmenting that UI with tools to explore its history, I think you really have something.

Future of programming? Give me this system, with a Datalog query interface, and I can replace most of what I do in a traditional programming language with queries. Traditional programming is relegated to side-effecting or efficiency-concerned stuff.
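The store described in the quoted message could be sketched like this, assuming SQLite for concreteness (table and column names are illustrative): rows are never updated in place, and the "current" version is simply the one with the latest timestamp.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT, ts INTEGER)")

# Every change is a new row (an event), never an UPDATE.
db.executemany("INSERT INTO customers VALUES (?, ?, ?)", [
    (1, "Ada",    100),
    (1, "Ada L.", 200),   # an "edit" = append with a later timestamp
    (2, "Grace",  150),
])

# Current state: for each id, the row with the latest timestamp.
current = db.execute("""
    SELECT id, name FROM customers c
    WHERE ts = (SELECT MAX(ts) FROM customers WHERE id = c.id)
    ORDER BY id""").fetchall()

print(current)   # [(1, 'Ada L.'), (2, 'Grace')]
```

Because nothing is ever overwritten, every past state is still a query away (filter on `ts <= some_time` instead of `MAX(ts)`), which is what makes the history-exploring UI imaginable.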

Spencer Fleming 2025-04-30 23:48:26

With that, you can roll back any local computation, branch off of it, etc.

Spencer Fleming 2025-04-30 23:49:25

Still trying to work out how much ought to be the OS's job and how much ought to be the App's job

Konrad Hinsen 2025-05-01 06:39:45

And to what degree the owner of the computer or data should be able to override eternal storage, to prevent this from becoming a privacy nightmare.

Spencer Fleming 2025-05-01 14:21:30

Right. You can always replace "old snapshot -> inputs" with "new snapshot" and prevent any history from moving past the new snapshot. It's also possible to make this choice separately on a per-component basis.
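A toy sketch of that compaction move, with a hypothetical reducer: fold the accumulated inputs into a new snapshot, then drop the events, so history before the snapshot can no longer be reconstructed.

```python
def apply(state, event):
    # Toy reducer: events set key/value pairs in a dict-shaped state.
    key, value = event
    return {**state, key: value}

snapshot = {}                   # old snapshot
events = [("a", 1), ("b", 2)]   # inputs recorded since that snapshot

# Compaction: replace "old snapshot + inputs" with "new snapshot".
for e in events:
    snapshot = apply(snapshot, e)
events = []                     # earlier history is now unrecoverable

print(snapshot)   # {'a': 1, 'b': 2}
```

Running the choice per component just means each component keeps its own `(snapshot, events)` pair and compacts on its own schedule.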

Spencer Fleming 2025-05-01 14:24:07

I also think if someone other than Microsoft is doing it, then it's less of a nightmare. I don't want their sketchy AI adware looking at screenshots of anything, but being able to scrub backwards on my own 'version controlled' OS sounds lovely. Git, Ink&Switch CRDTs, Browser History etc. all record a pretty detailed history as a feature

Konrad Hinsen 2025-05-01 16:35:08

Microsoft inspecting your history is probably the worst-case scenario. But if history is stored at all, it presents a privacy risk. Imagine your computer being confiscated by the police of an authoritarian state with the goal of finding something that can justify putting you in jail. You'd probably prefer to be able to erase such information completely (and verifiably).

Spencer Fleming 2025-05-01 17:50:46

Oh totally! That's true of all useful data though. I agree it would need controls to be able to manage what's on your computer and what risks you're willing to have.

Spencer Fleming 2025-05-01 17:51:16

Same way it's very nice to have version controlled document history but you can also rewrite or delete it when you please

Alex McLean 2025-05-02 08:27:09

Trying to stay on topic, when have you most feared clicking on something? How could you have been reassured via humane programming language experience design?

Konrad Hinsen 2025-05-02 14:10:56

I just had one of those fear-of-clicking experiences. A frequent one for me, caused by Zotero. Every now and then, when I visit the site of some scientific journal, Zotero Connector (a Firefox extension) displays a banner asking if I want it to install a proxy for accessing this site. I have no idea what consequences this would have. I have no idea either if Zotero is aware of the complexities of my journal access setup (which involves an OAuth site and a certificate stored in my browser). So I see a lot of risk of breaking something, in exchange for an advantage that isn't quite clear to me.

The worst: I cannot tell Zotero to leave me alone. About once per week, it asks me again. And I fear that one day, I will click on "yes" by mistake.

Alex McLean 2025-05-02 16:46:18

Ah I'm quite fearful of zotero as well! In theory it's great for sharing references and PDFs with a group of people, except it's very easy to do destructive operations that can't be undone, and then are difficult to spot that they've happened. I've had to rebuild community bibliographies from scratch, so I'm particularly fearful of accepting someone's application to a group, in case they accidentally delete stuff.

Alex McLean 2025-05-02 16:47:47

I guess fear can be quite a useful emotion as well. E.g. fear of pressing 'submit' on a funding application you've been working on for months helps you with double checking everything is OK.

Alex McLean 2025-05-02 16:48:48

It's quite hard relating this to programming though, as things you do are generally low stakes at that moment, unless you're live coding.

Lu Wilson 2025-05-03 07:41:15

clicking on an email or message that I'm nervous about reading

Lu Wilson 2025-05-03 07:42:01

i find notifications scary. big red warning signs with growing numbers inside them. it would probably be less scary if they were more subtle, less attention grabbing

Alex McLean 2025-05-03 08:35:42

Typing rather than clicking, but the sudo command used to really scare people with its "This incident will be reported." error message, as well as giving a slightly odd message the first time you successfully used it

Alex McLean 2025-05-03 08:40:14

Every time I switch my car on I have to accept terms of use that allow the manufacturer to track my movements. Similarly the terrible Google, meta etc terms we accept every day as a mild annoyance to access some community should really strike fear into us but doesn't..

Lu Wilson 2025-05-03 08:45:20

hitting "send" on a message you're not sure about

Alex McLean 2025-05-03 10:25:47

Just reading a PhD thesis "What live coders fear most is a bored audience." So mistakes, unintended behaviour, crashes are all positive, but smooth running, fulfilled expectations etc are potentially to be feared

Lu Wilson 2025-05-03 10:55:07

right, standing there frozen, panicking and doing nothing!

Andreas S. 2025-05-02 11:52:43

Have you seen this? I liked the Cultural perspective of it: youtube.com/watch?v=08PPuE-Y1sE

Lu Wilson 2025-05-03 07:39:55

reflecting on jam oriented programming...

i can't believe i spent so many years being dictator of my projects. what a waste.

i now do the jamming approach... it means i MUST accept all changes, even if i disagree with them. if i care enough, i can change them or revert them, but that takes effort, so they usually stay. and for ease, i make everyone admin of my own project. if you submit a pull request or an issue, i just instantly merge and make you admin. then i don't need to be a blocker in future: you can commit straight to main

it means the project becomes ten times richer because it's a team effort with everyone pulling it in different directions

nothing has to be perfect, and it gets done FAST.

it's more open than open source. it's jam source!

each day it becomes more hilarious/tragic to me how most HCI and "future of coding" developers keep things so closed off and secret, now that I've experienced this better way

Christopher Shank 2025-05-03 07:45:27

This reminds me of an article on how the default of the web is implicit feudalism. Feels similar to how we make software.

colorado.edu/lab/medlab/2021/01/08/implicit-feudalism-why-online-communities-still-havent-caught-my-mothers-garden-club

📝 Implicit Feudalism: Why Online Communities Still Haven’t Caught Up with My Mother’s Garden Club

Alongside whatever else mothers and sons talk about, I have begun receiving regular updates on the governance of my mother's neighborhood garden club.

Lu Wilson 2025-05-03 07:53:23

right! thanks for the link. I've been thinking how github repos default to dictatorship, and how unhealthy that is for the ecosystem

Lu Wilson 2025-05-03 07:55:35

there's some more info about jam oriented programming here: pastagang.cc/paper

it's a work in progress paper intended to be submitted in about a month. the paper is getting jam written by tens of people

Christopher Shank 2025-05-03 08:29:49

Curious what things you see github doing?

Christopher Shank 2025-05-03 08:31:20

Does the main branch have protection on by default? Although it certainly nudges you towards that by the notification to set branch protections

Christopher Shank 2025-05-03 08:33:06

📝 The OSI First to Endorse United Nations Open Source Principles | Office of Information and Communications Technology

The United Nations Open Source United community and the Open Source Initiative (OSI) today announced that the OSI has become the first organization to officially endorse the UN Open Source Principles. The UN Open Source Principles, recently adopted by the UN Chief Executive Board's Digital Technology Network (DTN), provide guidelines to drive collaboration and Open Source


Lu Wilson 2025-05-03 08:33:43

it's not my paper

Lu Wilson 2025-05-03 08:34:23

no it's nothing specific to GitHub, rather that the whole open source approach of benevolent dictator is shit

Lu Wilson 2025-05-03 08:35:12

when you make a repo, only you can edit it. pull requests need to be reviewed by the glorious leader

Lu Wilson 2025-05-03 08:35:22

forks are second class

Christopher Shank 2025-05-03 08:35:38

Pastagang’s paper* 😄

Lu Wilson 2025-05-03 08:36:46

to be clear i personally wrote like less than 5% of the content of that paper

Lu Wilson 2025-05-03 08:37:03

it's not just a word game

Christopher Shank 2025-05-03 08:43:53

Ya my bad

Marek Rogalski 2025-05-03 10:21:47

Rebellion is fairly cheap in the world of open-source though. A dictator can usually be toppled with a single fork.

Lu Wilson 2025-05-03 10:22:54

not cheap enough!!!!

Konrad Hinsen 2025-05-03 12:31:26

One problem I see with forks is that they are asymmetric. There's always "the original" and "the fork". GitHub makes it very obvious which is which.

Something I'd like to try is a pool of repos that are equals. Everyone can see everyone else's changes, adopt some and reject others, but without any notion of hierarchy or convergence to a consensus version. Git allows this, but today's forges don't. I am not aware either of any tool support for working in a pool of equals.

Lu Wilson 2025-05-03 14:31:57

having one big jam repo is great honestly. being forced to accept every change (even the ones you want to reject) is important

Kartik Agaram 2025-05-03 16:01:53

Story time: back in 2009, some people on Hacker News got together to create a community fork of Paul Graham's Arc Lisp that anyone could modify. We called it the Arc Wiki and I suggested the name anarki. It is still around.

Back then Github had a setting to let you make a repo public so anyone could push to it. It disappeared around 10 years ago, in a move to be more enterprise friendly that nobody remembers anymore, and that seems minuscule compared to everything they've done since.

When that checkbox disappeared we adapted to it by saying anyone can get push permissions. Just come ask.

I was obsessed with the Arc Forum to an extent that I still find my fingers typing out the url when I'm out walking. My fingers dream that it is 2014 and they're at a keyboard. The forum never had more than a half dozen people but it was surprisingly active for its size. The forum was the jam session, anarki was just a tool. On one level my story since has been about trying to find ways to make the jamming more acute, all while finding the jamming get less acute.

I sense that there are many ways to create jamming scenes. I've seen projects like Tiddlywiki and oh-my-zsh start out extremely permissive in letting people add changes to them. Over time they slow down as it gets harder and harder to make changes while maintaining any sort of sense of stability or continuity.

So it depends on what different people want. These days I think it's all an experience. Random walks and sprints to a destination all have their place, and they mix together in the world anyway if you think of it all as one giant repo under the sky.

Today I maintain 50 or so forks containing various apps, but none of the codeforges can tell that they're forks of each other or determine which one is the root, because I clone and reupload each fork from scratch. That information is in the git logs. They're true git forks arranged in a network rather than github forks arranged in a tree. The jamming still doesn't happen, though. The future seldom cooperates. Still, I feel I'm going somewhere interesting, and so is everyone else.

Prosperity and computers have radically expanded the space of things humans can do. As we expand into this space, the default density of people drops. Each of us houses divergent urges to explore alongside convergent urges to explore together.

Lu Wilson 2025-05-03 17:14:12

Over time they slow down as it gets harder and harder to make changes while maintaining any sort of sense of stability or continuity.

i guess this is a big part of it. letting go of stability and continuity (and control) is an important part of jam oriented programming

Kartik Agaram 2025-05-03 17:21:33

Yeah.

It's not just stability and continuity for the author/dictator. Every community has a tendency to prioritize the needs of people who show up early on. The minority in the present captures the infinite future.

Lu Wilson 2025-05-03 17:22:59

there's certainly that risk for sure. much less risk than not taking the jamming approach i think

Maximilian Ernestus 2025-05-03 17:59:33

Reminds me of the do-ocracy we practiced in our shared flat in my 20s: anybody is allowed to change anything (in the communal rooms). Everything is shared (except toothbrushes) BUT no hard feelings when someone just undoes anything. To make this work we had semi-regular meetings where we talked real, little playfights in the hallway to resolve tensions, and regular shared meals. Not sure how that can be transferred to the net. I think this physical presence/contact was crucial.

Lu Wilson 2025-05-03 18:01:38

this kind of thing has been happening at pastagang.cc for quite a while now

and very recently pondiverse.com

📝 pastagang

jam code

Lu Wilson 2025-05-03 18:01:49

in-person and remote

Maximilian Ernestus 2025-05-03 18:13:31

Yes. I discovered the pastagang before. Watched you jam in awe but did not dare to touch the contraption. Too much stuck in old patterns I guess.

Konrad Hinsen 2025-05-03 19:02:12

It's interesting to see that all of you mostly focus on the permissions aspect. For me that's rather secondary. With git, everyone has a local copy, so nobody can do serious damage to anyone else. So let everyone have whatever permissions on a shared repo, that's an administrative detail.

What I miss for peer-to-peer collaboration is discovery and exploration tools. I'd like to be able to check easily what everybody has been working on over the last week. Or find the branches across all repos that have a specific version of some piece of code. And I am not thinking only of jamming, but also of long-term, slow, asynchronous. Code that a hundred people use, but which only sees two or three changes per year.

Lu Wilson 2025-05-03 19:27:25

perhaps it seems only secondary to you, but it isn't at all! the whole point is that you want to subject yourself to the danger of other people's changes

Jamie Brandon 2025-05-04 01:26:47

I guess this is the wikipedia approach. They have to have moderation tools at that scale, but the model is still allow-first and worry-later.

Jamie Brandon 2025-05-04 01:28:05

For live-coding, I wonder if different languages would allow this to scale better. Eg if you have 1000s of people editing an imperative program, at any given point it probably doesn't run at all. But in a dataflow program, if part of the graph is broken then that part of the graph just doesn't put out values any more, but everyone else is still getting feedback on what they are doing.
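A toy sketch of that property, with a hypothetical evaluator: a node whose function raises simply produces no value, while its sibling nodes keep emitting.

```python
def evaluate(graph, sources):
    """Evaluate each node from its inputs; broken nodes emit nothing."""
    values = dict(sources)
    for name, (fn, deps) in graph.items():
        try:
            if all(d in values for d in deps):
                values[name] = fn(*[values[d] for d in deps])
        except Exception:
            pass   # broken node: no output, but the rest keeps flowing
    return values

graph = {
    "double": (lambda x: x * 2, ["x"]),
    "broken": (lambda x: x / 0, ["x"]),   # someone's bad edit
    "triple": (lambda x: x * 3, ["x"]),
}
print(evaluate(graph, {"x": 5}))
# {'x': 5, 'double': 10, 'triple': 15} -- 'broken' just drops out
```

In an imperative program the equivalent of `broken` would usually crash the whole run, which is Jamie's point about why dataflow might let mass jamming scale.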

Lu Wilson 2025-05-04 07:00:46

i'm hesitant to suggest that different or new languages are needed because i want people to know that this approach is possible to do already with our existing tools - especially on the live coding front. see youtube.com/watch?v=HCcSHMu0gzg

Lu Wilson 2025-05-04 07:02:57

and yes, part of the "agreement" of a jam is that: yes you can make any edit. this also means that anyone can reverse your edit.

there are other strategies you can take to scale and influence people's behaviour too, other than central control

Konrad Hinsen 2025-05-04 08:07:18

Lu Wilson So you don't keep a local repo at all? The shared one on a forge is the only one you have? Or is it simply a social convention that the current state of the official repo is the only one from which anyone may move on?

Marek Rogalski 2025-05-03 11:54:51

One of the early promises of the computer revolution was universal access to knowledge and culture. Out of curiosity I've just checked how many hours of video would fit on an average HDD. Assuming an average HDD size of 11.6 TB (as reported by Seagate) and an aggressive but watchable compression (1 GB = 3 h of video), we would get a total of 34,800 hours of video. Apparently this is around (maybe even slightly above) the total runtime of Netflix's whole library.

My conclusion is that it should now be possible to buy Netflix on a drive.
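The back-of-envelope arithmetic, checked in code (figures as given in the message):

```python
hdd_gb = 11_600       # average HDD size: 11.6 TB ≈ 11,600 GB
hours_per_gb = 3      # "aggressive but watchable" compression
hours = hdd_gb * hours_per_gb
print(hours)          # 34800 hours of video per drive
```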