Konrad Hinsen
... There has to be inertia in technology if you want it to be useful.
Inertia only gets you so far. I claim that it is impossible to express asynchronous concurrency using a function-based paradigm. You can chip away at it incrementally, but you only approach an asymptote: you get ever closer, but never arrive.
I claim that we're witnessing this phenomenon right now.
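To make the contrast concrete, here is a minimal Python sketch (my illustration, nothing from Hinsen's post). In call/return code, sequencing is welded to the call tree: the caller is frozen until the callee returns. Queue-connected components, by contrast, each advance whenever input happens to arrive:

```python
import queue
import threading

# Function-based: the caller cannot proceed until the callee returns.
# Time-ordering is welded to call-ordering by the paradigm itself.
def double(x):
    return 2 * x

def sync_pipeline(x):
    return double(double(x))      # strictly sequential by construction

# The same two stages as asynchronous components: each owns an input
# queue and runs whenever a message happens to arrive.  The text of
# the program no longer dictates a global ordering between components.
def stage(inbox, outbox):
    while True:
        x = inbox.get()
        if x is None:             # conventional shutdown token
            outbox.put(None)
            return
        outbox.put(2 * x)

if __name__ == "__main__":
    a, b, c = queue.Queue(), queue.Queue(), queue.Queue()
    threading.Thread(target=stage, args=(a, b)).start()
    threading.Thread(target=stage, args=(b, c)).start()
    for x in (1, 2, 3):
        a.put(x)                  # send and keep going -- no waiting
    a.put(None)
    while (y := c.get()) is not None:
        print(y)                  # 4, 8, 12
```

Note that the second half isn't built from calls and returns at all: neither stage ever "returns a value" to the other.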
The real questions are: (1) is it impossible to express asynchronous concurrency using a function-based paradigm, and (2) is it really true that the crop of problems we now face is rife with asynchronous concurrency?
If (1) and (2) are true, then continuing to push on the function-based paradigm is futile and a waste of time. Take the wins from thinking that way and move on. No need to wipe out what we've already achieved. Just stop beating our heads against the wall of what cannot be achieved in the future.
Alpha-beta pruning. If the going gets tough, find a way to make it less tough. If you end up in a corner, don't keep pushing deeper into the corner.
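For readers who haven't met it: alpha-beta pruning abandons a branch of the game tree the moment it is provably no better than an option already in hand. A minimal Python sketch (purely illustrative; the toy tree and names are mine):

```python
import math

def alphabeta(node, alpha, beta, maximizing, children, value):
    """Minimax with alpha-beta cutoffs over an explicit tree."""
    kids = children(node)
    if not kids:                              # leaf: just score it
        return value(node)
    if maximizing:
        best = -math.inf
        for child in kids:
            best = max(best, alphabeta(child, alpha, beta, False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                         # prune: opponent avoids this line
        return best
    else:
        best = math.inf
        for child in kids:
            best = min(best, alphabeta(child, alpha, beta, True, children, value))
            beta = min(beta, best)
            if alpha >= beta:
                break                         # prune: we'd never play into this
        return best

# Toy tree: after seeing r1 = 2 (worse than the 3 already guaranteed
# on the left), the whole right branch is cut off -- r2 is never visited.
tree = {"root": ["L", "R"], "L": ["l1", "l2"], "R": ["r1", "r2"]}
vals = {"l1": 3, "l2": 5, "r1": 2, "r2": 9}
print(alphabeta("root", -math.inf, math.inf, True,
                lambda n: tree.get(n, []), lambda n: vals[n]))  # 3
```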
For stuff that lots of people depend on, you can't just pull the plug and start from scratch every couple of years.
Did I actually say that I recommend pulling the plug? I hope not. In fact, all of my experiments with this stuff are based on existing technologies, like Odin, Python, and draw.io.
Concurrency is easy (even the asynchronous kind). 5-year-olds, without PhDs, understand hard realtime and a written notation for expressing hard realtime (piano lessons and music notation). If concurrency using a specific paradigm looks not-easy, that's a tell. If you need to keep inventing workarounds to keep pushing a belief system, that's a tell. If you espouse an edict that contradicts reality, that's a tell (e.g. "no mutation" violates the basic premise of CPUs bolted to RAM; e.g. "control flow is data" is patently untrue (the data that control flow interprets is data, but the interpreter is not data (CPUs are interpreters, albeit very fast ones))). When the tells pile up, maybe it's time to reconsider whether forging ahead on only one path continues to be reasonable.
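To take the piano-lesson point seriously: music notation is a hard-realtime specification that children execute. A sketch of the same idea in Python (the score and tempo are made up; the point is that the notation carries the deadlines):

```python
import time

# A child's realtime spec: (pitch, duration-in-beats) pairs.
score = [("C", 1), ("D", 1), ("E", 2), ("C", 4)]
BEAT = 0.5                       # seconds per beat, i.e. the tempo

start = time.monotonic()
elapsed_beats = 0
for pitch, beats in score:
    print(pitch)                 # stand-in for actually sounding the note
    elapsed_beats += beats
    deadline = start + elapsed_beats * BEAT
    # sleep to an absolute deadline, not a relative delay, so that
    # per-note jitter doesn't accumulate across the whole piece
    time.sleep(max(0.0, deadline - time.monotonic()))
```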
For that reason, the only possibility I see for your ideas to become reality is as a generalization ...
There's something here that I disagree with. The answers will not come from further generalization but from further specialization. Our current software development workflows are based on the idea of generalization. If you insist on incrementally developing something, how about incrementally developing a way to allow specializations to co-exist and to be composed into functioning code? (Hint: I'm playing with, and blogging about, how to use existing languages as assemblers for new languages.)
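As a toy version of that hint (my own throwaway sketch, not the actual tooling from my posts): treat a tiny pipeline notation as a "new language" and emit Python as its assembler output:

```python
def compile_pipeline(spec, env):
    """Compile 'inc | double' into Python source, then assemble it
    by handing the generated source to Python's own evaluator."""
    stages = [s.strip() for s in spec.split("|")]
    expr = "x"
    for name in stages:
        expr = f"{name}({expr})"        # inc | double -> double(inc(x))
    return eval(f"lambda x: {expr}", env)

# The "runtime library" the generated code links against.
env = {"inc": lambda x: x + 1, "double": lambda x: x * 2}

f = compile_pipeline("inc | double", env)
print(f(3))                             # 8
```

The host language supplies the drudgery (expression evaluation, code generation, execution) while the notation stays specialized to one problem.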
[I view, correctly or incorrectly, "Moldable Development" as another idea for how to insert specialization into our workflows.]
Future systems must allow today's software to remain usable,
Agreed.
and then open up new ways that can be explored incrementally.
You also need to be able to tell when you're at a dead end and when to stop wasting your time.
Remembering and understanding how hardware works might offer up new ways to create new things. OTOH, pure research is, indeed, worthwhile. IMO, pushing on the functional paradigm has transmogrified into pure research into only one way to use ReprEMs (reprogrammable electronic machines, formerly known as "computers"). I wish to point out that there are many other vectors for pure research into the use of ReprEMs that remain low-hanging fruit and might lead to more fruitful and cost-effective approaches to using ReprEMs.