Thinking about coroutines and whether we can reduce bloat by rolling-our-own... youtu.be/PSYj2OQwi8Y
Coroutine Secrets
I used Apple Keynote and Descript. I'm on a Mac. I took screenshots of various emacs windows, then just dragged them into Keynote slides (Keynote is Apple's Powerpoint). I created the slides, then made a video (.mp4) of me reading them and loaded it into descript.com. Descript transcribes the spoken part of the video and then lets you edit the video by editing the text, like with a text editor, instead of the common, but painful, timeline editing workflow. I got Descript to re-speak my reading with an AI voice (my own delivery is halting and less enthusiastic, which is an editing nightmare as opposed to the constant flow of the AI voice). I type very easily and quickly and tend to bloviate - using Keynote gives me instant feedback, via font changes, when I'm not getting to the point. I used to use Linux for dev, but, now that I write a lot (50:50 dev:writing), I find the Mac to be better than Linux (which was better than Windows :-). I tried various markdown thingies, but I like the WYSIWYG-ness of Apple Pages and Apple Keynote (better than Obsidian on Mac and anything on Linux).
Thanks for sharing! Very interesting about descript.com that it lets you edit through text, that's quite interesting!
Trying to link this to stuff I know. What's the difference between a bare-metal coroutine and cooperative multitasking? Does Akka qualify? Are nginx state machines the same thing? The "bare metal" qualification confuses me - does it need to be native instructions or something? I think the nginx concurrency model might be a similar thing even then?
blog.nginx.org/blog/inside-nginx-how-we-designed-for-performance-scale
It still very much supports your thesis that this is a great way to achieve high performance on modest resources.
Bare metal: I was reaching for words. By "bare metal" I meant "no operating system, no using someone else's idea of how to generalize for what problem you're trying to solve". There should be nothing "new" here. Just a "hang on, let's get back to basic principles and see if that brings any (re)fresh insights". I'll look through the ref you gave and comment further...
At the end you mentioned something called 0D. Could you elaborate on that?
50,000 foot view: 0D is a minimum viable VPL that avoids implicit synchrony and sequentiality. It looks like just a run-of-the-mill node-and-arrow diagramming thing, but it goes down the path of everything-is-concurrent-by-default, layers-instead-of-infinite-canvases, multiple-inputs-and-multiple-outputs, and data-flow-as-events. There are no basic restrictions on mixing diagrams and text to express programs.
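A toy sketch of the flavor, in Python (the names `Part`, `wire`, etc. are mine, not the actual 0D implementation): each part owns its input and output queues, reacts to one event at a time, and never calls another part directly - a wiring loop moves messages along the arrows of the diagram.

```python
from collections import deque

class Part:
    """A hypothetical 0D-style part: named input/output queues,
    reacts to one event at a time, never calls other parts directly."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler          # fn(part, port, message)
        self.inputs = deque()           # (port, message) pairs
        self.outputs = deque()

    def send(self, port, message):
        self.outputs.append((port, message))

    def step(self):
        if self.inputs:
            port, message = self.inputs.popleft()
            self.handler(self, port, message)
            return True
        return False

def wire(routes, parts):
    """Deliver outputs to inputs along (src, port) -> (dst, port)
    routes until no part has pending work (run to quiescence)."""
    busy = True
    while busy:
        busy = any(p.step() for p in parts)
        for p in parts:
            while p.outputs:
                port, msg = p.outputs.popleft()
                dst, dport = routes[(p.name, port)]
                dst.inputs.append((dport, msg))
                busy = True

# Two parts, wired like a diagram: doubler.out -> printer.in
doubler = Part("doubler", lambda p, port, m: p.send("out", m * 2))
results = []
printer = Part("printer", lambda p, port, m: results.append(m))
doubler.inputs.append(("in", 21))
wire({("doubler", "out"): (printer, "in")}, [doubler, printer])
print(results)   # [42]
```

Note that the routing table lives outside the parts - the parts don't know who they are connected to, which is the "no hidden coupling" point.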
I think that the combination of these ideas simplifies programming by more than 10x. The ideas are in production at Kagi.com.
This week, I'm calling it PBP (Parts Based Programming).
The connection to this video is that I was pondering about maybe going back to a simpler hardware model by not using virtual memory and not using preemptive context-switching. Parts would be a natural fit for what I was writing about. One should be able to have 1000s of Parts sitting inside a simple machine - and - one should be able to program the whole thing using the current manifestation of computers. I'm building things like code transpilers with it, and that happens to be useful in its own right. Among other things, I've written a VHLL using draw.io and "compile" the drawings to running Python, Javascript and, Common Lisp, (I used to successfully target Odin as well) and, the basics of a visual shell.
If you continue to be interested, I am willing to supply endless detail, including pointers to code repos.
Oh, I dislike virtual memory as well. After program memory became "private by default" we got fewer bugs - true, but it also made it a cultural taboo to access the memory of other programs... The non-preemptive context-switching bit sounds interesting - how are rogue processes handled then? For instance, a buggy component that entered an infinite loop and the user wants to stop it.
Non-context-switching is essentially green-threading or coroutining within an app.
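A minimal sketch of that idea, using Python generators as green threads (the scheduler and names are just illustrative): each task runs until it explicitly yields, so there is no preemption and no OS context switch, just a queue of resumable closures.

```python
from collections import deque

def round_robin(tasks):
    """Cooperative scheduler: each task is a generator that yields
    to hand control back; no OS threads, no preemption."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # run until the task yields
            ready.append(task)         # re-queue it for another turn
        except StopIteration:
            pass                       # task finished, drop it
    return trace

def worker(name, n):
    for i in range(n):
        yield f"{name}:{i}"            # explicit yield point

print(round_robin([worker("a", 2), worker("b", 2)]))
# ['a:0', 'b:0', 'a:1', 'b:1']
```

A task that never yields (a busy loop) hangs everything - which is exactly the rogue-process question above.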
I have the view that there are 2 kinds of computers (1) for non-programmers (2) for developers. Non-programmers should never have rogue programs delivered to them (that's not what happens today, except in cartridge-y gaming systems). Developers should be the only ones who would need preemption. Let 'em have preemption, just don't deliver code to non-programmers that needs preemption.
OTOH, just _thinking_ this way leads to (IMO) simpler ways to construct programs. Developers use the current manifestation of dev operating systems like Linux. Non-programmers can continue to use operating systems like MacOS and Windows, until they no longer need to do so (years, decades?). The win of making all memory private is amplified by making all programs be composed of Parts. Not just memory, but control-flow, too, must be private. Aside: FP does not isolate control flow except through the use of preemptive operating systems and lots of baggage. Today, we know how to build closures, how to build queues, and how to copy data and later GC it. That's enough to re-think all of the complication that has led up to this point. This isn't anything new - we see the seeds of it in UNIX pipelines - but UNIX pipelines come with too much FP-ish baggage (in fact, UNIX processes got the idea mostly right; it's /bin/sh that over-constrained the idea). And then there is the problem that UNIX pipelines have exactly 1 in and exactly 1 out. This was painful to unconstrain using C, but can easily be unconstrained today.
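To make the 1-in/1-out point concrete, here's a toy sketch (hypothetical names, not a real shell replacement) of a pipeline stage with one input and *two* outputs - something a /bin/sh pipeline can't express directly, but which falls out for free once stages talk through queues:

```python
from queue import Queue

def split_evens_odds(inp, evens, odds):
    """A pipeline stage with 1 input port and 2 output ports.
    None is used as an end-of-stream marker on every port."""
    while True:
        n = inp.get()
        if n is None:
            evens.put(None)
            odds.put(None)
            return
        (evens if n % 2 == 0 else odds).put(n)

inp, evens, odds = Queue(), Queue(), Queue()
for n in [1, 2, 3, 4, None]:
    inp.put(n)
split_evens_odds(inp, evens, odds)   # all input pre-queued, so this runs to completion

def drain(q):
    out = []
    while (x := q.get()) is not None:
        out.append(x)
    return out

print(drain(evens), drain(odds))   # [2, 4] [1, 3]
```

Run single-threaded here for simplicity; the same stage works unchanged with each stage in its own green thread.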
the Javascript event loop is a cooperative multitasker, and it is indeed easy to work with because you never need to worry about concurrency inside function blocks, and everything was designed with cooperative multitasking in mind, so you sorta get pretty high IO performance without really thinking about it. Lots of other things are bad for performance in the language (e.g. no integer type), but the event loop is actually pretty great to work with. And indeed you can hang the whole app by doing a busy loop, but that only seems to happen during development of a recursive function
you never need to worry about concurrency inside function blocks
Agreed... but I think that there is a high cost to doing this. Current hardware hides this cost. Can we gain back some chip real-estate if we eliminate all of these extra doo-dads that are scaffolding for no-worry function blocks?
And, we really should be thinking about concurrency. Especially when faced with asynchronous, distributed things like internet, robotics, IoT. Concurrency has a bad name in function-based programming circles. I suggest that concurrency is easy, except when you try to express it in terms of functions. By not-thinking about concurrency, we end up with gotchas like callback hell, await, etc. Harel showed a syntax, in 1986, "StateCharts" for dealing with concurrency that eliminated the really-bad aspects of the approach, i.e. "state explosion". Before even that, EEs were successfully using concurrency on a grand scale. (I count at least 100 concurrent, parallel components in the Atari Pong 1972 circuit. The circuit did not contain a CPU nor any sequential code).
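A toy illustration of why Statechart-style orthogonal (concurrent) regions avoid the state explosion (hypothetical machines, not Harel's actual notation): kept separate, two machines need 2 + 3 = 5 states to describe; flattened into one machine they need 2 * 3 = 6 product states, and the gap grows multiplicatively with every added component.

```python
# Two orthogonal state machines, dispatched as independent regions.
player = {"state": "paused",
          "on": {("paused", "play"): "playing",
                 ("playing", "pause"): "paused"}}
volume = {"state": "mid",
          "on": {("mid", "up"): "high", ("mid", "down"): "low",
                 ("low", "up"): "mid", ("high", "down"): "mid"}}

def dispatch(event, machines):
    """Broadcast the event; each region reacts (or not) independently."""
    for m in machines:
        key = (m["state"], event)
        if key in m["on"]:
            m["state"] = m["on"][key]

for ev in ["play", "up", "up", "pause"]:
    dispatch(ev, [player, volume])
print(player["state"], volume["state"])   # paused high
```

The second "up" is simply ignored (no transition from "high"), which is the usual Statechart convention for unhandled events.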
In 1986, our hardware could not - easily - handle Statechart notation. Today's hardware can.
I believe that PBP is an advancement over Statecharts. I haven't needed to use a Statechart for ages. And, I'm able to do things that I previously thought were very hard - like building "compilers". The quotes are there because I use some cheats that aren't directly related to PBP. Thinking in terms of PBP, though, helped me think up these cheats and not be afraid of implementing them.
An observation about the main-loop thing. The big win is to isolate control-flow, as well as data. Functions look good on paper, but when implemented on a computer, you get hidden, low-level coupling which leads to hidden uses of context-switching and virtual memory (e.g. the callstack, which is a data structure constructed dynamically at runtime and causes unpredictable, unstructured blocking and calcifies routing decisions in a non-networky way). The main loop is a (potentially) big lump of code. What if you chopped it up into smaller lumps of code, each with their own "main loops"?
Oh yeah, I have used statecharts for realtime microcontrollers (Quantum Leaps). It was that knowledge that made me think nginx had converged on the same solution. I get you. Yeah, no need for stacks for the scheduler in that world - extremely efficient. Also, statecharts are somewhat close to being formally verifiable. I wrote a thing on that once, expressing a 2-phase commit as a statechart verified with Computational Tree Logic (CTL). It was that blog that got me a job at Firebase. Concurrency like a 2-phase commit is extremely hard to get right, and I totally agree functions are a useless abstraction in this context - the real meat is in the cross-product of the state spaces of two interacting processes. You really have to enumerate everything, otherwise the weirdest bugs can creep in that defy imagination.
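A toy sketch of that enumerate-the-cross-product idea (a drastically simplified hand-rolled protocol, not real 2PC and not CTL): model the joint state as a tuple, give transitions over joint states, and BFS the reachable set. The bugs live in the joint states you didn't think to enumerate.

```python
from collections import deque

# Coordinator C and participant P talk through a one-slot channel.
# Joint state: (c_state, p_state, channel_contents).
def transitions(state):
    c, p, chan = state
    nxt = []
    if c == "start" and chan is None:
        nxt.append(("waiting", p, "prepare"))   # C sends prepare
    if p == "idle" and chan == "prepare":
        nxt.append((c, "voted", "yes"))         # P votes yes
    if c == "waiting" and chan == "yes":
        nxt.append(("done", p, "commit"))       # C decides commit
    if p == "voted" and chan == "commit":
        nxt.append((c, "done", None))           # P commits
    return nxt

init = ("start", "idle", None)
seen, frontier = {init}, deque([init])
while frontier:                    # breadth-first reachability
    for s in transitions(frontier.popleft()):
        if s not in seen:
            seen.add(s)
            frontier.append(s)
print(len(seen))   # 5 reachable joint states, out of 3*3*4 = 36 possible
```

Real model checkers (and CTL) then ask questions over this reachable set, e.g. "is there a reachable state where C is done but P never can be?".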
My feeling is that we no longer need to restrict ourselves to using only the function-based paradigm. Today's hardware enables us to break free of that mindset. From that perspective, all of our current programming languages are "the same": a single paradigm (function-based) with little syntactic baubles to make that single paradigm less painful when used to solve problems that fall outside of that paradigm. Interestingly, I'm finding it much easier to think about once-hard problems, like building compilers, when I am not restricted to thinking about them in a single paradigm, i.e. only functional. The term "general purpose programming language" is just good marketing - our languages are not "general purpose" and "C" is not "close to the hardware".