Should we co-locate a workshop at Programming 2025?
Here are some questions and some ideas for answers:
I want it to be wider; if it has the same name as this community, it will sound less welcoming to others
yes I think the plural is great! Disassociates it from fascist takes on the future (might sound melodramatic, but is something I often feel around tech 'disruption').
Personally, though, I think a focus on futures is still problematic when the 'future of programming' really needs to embrace the past. As I said before, I think the 'future of programming' should be read for its irony, as in Bret Victor's talk youtube.com/watch?v=8pTEmbeENF4
But I guess nuances are for blurbs rather than titles...
I'm stewing on the idea of a "literate codec" -- the "quite OK" ecosystem feels like a good place to start. Can anyone recommend modern alternatives to CWEB?
Two comments on opposite extremes:
Are you familiar with the QOI eco-system? This is the first I'm hearing about it, and I'm immediately suspicious of the airy "20% better compression" claim in the repo. Have other people validated this claim, do you know? It would feel more believable if they claimed "20x faster encode/decode, 20x shorter implementation, 20% worse compression."
I think the project stems from the desire to make as simple a codec/format as possible while still being somewhat performant. I also have a hard time believing the "20x blah blah" claims; I don't think they were evaluated with much rigor, or at least they cherry-pick favorable stats.
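To make the "as simple as possible" point concrete, here's a rough sketch (in JavaScript, not the reference C implementation) of two of the format's core tricks as I read the public one-page spec: a 64-entry table of recently seen pixels indexed by a tiny hash, and run-length encoding of repeated pixels. It leaves out the header, the DIFF/LUMA ops, and the end marker, so treat it as an illustration of the op encoding rather than a complete encoder.

    // Sketch of two QOI ops, per the spec at qoiformat.org; pixels are [r, g, b, a] arrays.
    function qoiHash([r, g, b, a]) {
      // Index position from the spec: (r*3 + g*5 + b*7 + a*11) % 64
      return (r * 3 + g * 5 + b * 7 + a * 11) % 64;
    }

    function encodeOps(pixels) {
      const bytes = [];
      const index = new Array(64).fill([0, 0, 0, 0]);
      const same = (p, q) => p.every((v, i) => v === q[i]);
      let prev = [0, 0, 0, 255];             // spec: the "previous pixel" starts as opaque black
      let run = 0;

      const flushRun = () => {
        if (run > 0) { bytes.push(0b11000000 | (run - 1)); run = 0; }  // QOI_OP_RUN
      };

      for (const px of pixels) {
        if (same(px, prev)) {
          run++;
          if (run === 62) flushRun();        // max run length per op is 62
        } else {
          flushRun();
          const h = qoiHash(px);
          if (same(index[h], px)) {
            bytes.push(0b00000000 | h);      // QOI_OP_INDEX: one byte for a recently seen color
          } else {
            index[h] = px;
            bytes.push(0xff, ...px);         // QOI_OP_RGBA: fall back to a literal pixel
          }
          prev = px;
        }
      }
      flushRun();
      return bytes;
    }

    // A solid red row of 10 pixels comes out to 6 bytes: one RGBA op, then one RUN op.
    console.log(encodeOps(Array(10).fill([255, 0, 0, 255])));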
As far as the LP part of the question goes, I'm actually less concerned with it being easy to run and more concerned with it being easy to read. This all stemmed from a number of conversations I had at a conference this week about "how do we teach people how codecs work", etc.
My favorite example of this is: amazon.com/Introduction-Video-Compression-Fore-June/dp/1451522274?ref_=ast_author_dp
I'm actually less concerned with it being easy to run and more concerned with it being easy to read.
Yeah, my claim is that this is a false dichotomy. Reading and running are both contributors to helping build a mental model of a program in someone's head. Reading without running runs into all the Bret Victor criticisms we know and love here.
There are a few different LP systems out there. I've built one myself and know of several more built just by people in my circle. This page is one list. But they haven't caught on much, and I think it's because conflating code with books pulls in considerations from the publishing industry that don't actually help build mental models in people's heads. Literate programs look like blog posts, and reading them doesn't actually get people to engage actively with them.
If you separate them from the publishing angle with its irrelevant constraints, other form factors seem more promising:
These kinds of circumstances are why, in the past year, I've started to care more about running first, before I even start reading. If you can run it, the reading experience can be more fault-tolerant, and it can be more economical to provide.
I get the sense I might be talking past you, so definitely let me know if I'm misunderstanding your question.
I've been trying not to plug my own stuff, but a couple of links might help triangulate where I'm coming from:
I very much agree that it takes both reading and running to engage with code, and that's indeed a major issue with traditional LP. It's also an issue with Open Source, btw, which suggests that having full access to the code is enough to understand and modify it, even if it's a huge mess and impossible to build.
That said, integrating code with a narrative becomes very relevant when you also add data (via visualizations). It's not code you engage with then, but data, computational models, etc. This is the reason why notebooks were so much more successful than traditional code-centric LP.
One major weakness of notebooks is the single narrative. What I would like to have is a graph of narratives, code, and data, everything being interactive. I am aware of two real-life systems that enable this: Glamorous Toolkit and Webstrates.
Glamorous Toolkit is the Moldable Development environment.
Webstrates: A Platform for Modern Computational Media. Webstrates is a platform to explore software as computational media. It is a webserver where the pages it serves are collaboratively editable. This means that modifications to the Document Object Model (DOM), for instance using the developer tools of a web browser, are synchronized with the server and with all other clients currently visiting the same webpage.
+1 on read and run, preferably in a system that allows per-form evaluation to aid in codebase exploration.
Computer programs are complex systems so it is impossible to understand them just by reading them.
For example, (x ^ y) % 9 == 0 is easy to understand as code, but when you run it you get something in a completely different domain, with effects and relationships that you couldn't have predicted.
Source: Martin Kleppe (@aemkei) on X:
<canvas id="c" width="1024" height="1024"> <script> const context = c.getContext('2d'); for (let x = 0; x < 256; x++) { for (let y = 0; y < 256; y++) { if ((x ^ y) % 9) { context.fillRect(x4, y4, 4, 4); } } } </script>
that's where liveness comes in - connecting code, domain and programmer
Self-referentially, Alex's comment just took us from abstract generalizations about code directly to the domain of code.
But now this thread connects up for me with a recent discussion on Mastodon about what 'understanding' means, and where 'understanding' lies.
Computer programs are complex systems so it is impossible to understand them just by reading them.
It is arguably also impossible to understand most programs today just by running them.
Which now connects up for me with the podcast episode on Programming as Theory-building: a lot of "understanding" a program comes from figuring out which inputs to pass to it. And I don't mean just some static list of inputs captured in a unit test. You learn the broad categories of phenomena you can expect from a domain, akin to the kinds of orbits people have discovered so far for the 3-body problem. I've learned knacks like this from working with others in the past.
(Apologies if all this seems too much of a tangent to the original thread. I can start a fresh one if so.)
I recall Xiph doing some well-visualized and well-explained posts on their research into an experimental video codec: xiph.org/daala