
Paul Tarvydas 2025-09-05 03:44:19
Konrad Hinsen 2025-09-05 08:48:32

I mostly agree with this analysis. The main challenge I see in coming up with better architectures at various levels is the inertia of the existing software that we all depend on. Better architecture needs to come with a transition path from current technology.

You mention containers as one example. They make it possible to turn a blob of current technology into a relatively isolated component ("relatively" because in practice, most containers I see need access to the host's local file system). But communication between containers, and thus building systems from containers, is still at a prototype level at best. And the techniques we use to make containers from software components are baroque and brittle as well.

Daniel Buckmaster 2025-09-06 01:19:49

I'd have preferred some more specifics here. For example, while I haven't touched much electrical engineering since university, I remember spending a lot of time looking at pinouts and timing diagrams to work out how to conform to the API a particular chip needed. How is that different to type checking?

Paul Tarvydas 2025-09-06 02:16:02

Konrad Hinsen I agree, but I don't see the challenge as being very daunting. The answers lie in plain sight. I think we saw the early glimmers of solutions in the 1970s/1980s, e.g. with things like UNIX processes.

The original point of programming languages was to cause hardware to act in certain ways. The idea of "a programming language" was a caveman approach to programming hardware, necessitated by the limitations of early hardware, storage devices, etc. There was a pre-Cambrian explosion of various ways to build small programming paradigms, e.g. Forth, Lisp, Icon, Prolog, Smalltalk, etc., etc. Even FP was invented early on, but was sneered at for being too bloatful (today, it's still bloatful, but monetarily cheaper). One of the best ideas for programming hardware - UNIX and other O/Ss - was kinda side-stepped as a fundamental programming paradigm in the rush to push FP forward.

The word "programming" has been repurposed to mean using only one of the possible options - FP, PVRsubroutine-based ("functions") - to program hardware. In fact, FP's sweet spot is that of converting hardware into calculators, but our "we've always done it this way" mentality has pushed non-FP concepts into FP and into CPU chips. CompSci has become the study of how to further push FP forward as THE single chosen notation for programming.

It appears that the basics of UNIX-y processes have been overlooked. CompSci relies on the existence of processes as scaffolding for FP, but overlooks the option of using those very principles as the substrate for program development - pure dataflow disentangled from accompanying control flow, total isolation, multi-language (actually multi-paradigm) composition - and instead concentrates on pushing forward the outdated concept of "programming languages".
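(To make this concrete: the sketch below is an editorial illustration, not Paul's code. It composes three isolated OS processes into a pure dataflow pipeline; each stage sees only bytes on stdin/stdout, shares no memory or control flow with the others, and could be replaced by a program written in any language. The specific commands are arbitrary placeholders.)

```python
# Three isolated OS processes composed as a dataflow pipeline.
# Each stage only sees bytes on stdin/stdout; none shares memory
# or control flow with the others.
import subprocess

producer = subprocess.Popen(
    ["seq", "1", "100"],            # stage 1: emit numbers
    stdout=subprocess.PIPE,
)
transformer = subprocess.Popen(
    ["grep", "7"],                  # stage 2: filter (any language would do)
    stdin=producer.stdout,
    stdout=subprocess.PIPE,
)
producer.stdout.close()             # let EOF/SIGPIPE propagate upstream
consumer = subprocess.Popen(
    ["sort", "-rn"],                # stage 3: sort numerically, descending
    stdin=transformer.stdout,
    stdout=subprocess.PIPE,
)
transformer.stdout.close()

output, _ = consumer.communicate()
print(output.decode())
```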

The 0D / PBP stuff I've been tinkering with shows that processes can be liberated from heavy-weight concepts using more modern, but very common, techniques (closures, queues, GC). OhmJS shows that we can rapidly build less-pathetic syntaxes than what we had in early paradigm-specific languages. T2T shows that we don't have to build compilers any more (we just need to build transpilers that map new languages onto already-existing compilers). The 0D / PBP stuff also shows that we can treat off-the-shelf drawing editors like program editors, without being enslaved by 600-year-old Gutenberg type-setting ideas. The current 0D / PBP stuff shows that we can use multiple languages in creating a single program (e.g. in one case, I built an SCN that used Prolog + Javascript + bash).
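(For readers unfamiliar with the technique: here is a minimal sketch of a process-like component built from nothing but closures and queues. The names and wiring are illustrative assumptions, not the actual 0D / PBP API.)

```python
# Minimal sketch of process-like components built from closures and
# queues (illustrative only; not the actual 0D/PBP API). Each component
# is a closure that drains an input queue into an output queue;
# components share no state and know nothing about each other.
from collections import deque

def component(transform):
    """Wrap a pure transform as a queue-to-queue component."""
    def step(inbox, outbox):
        while inbox:
            outbox.append(transform(inbox.popleft()))
    return step

# Two independent components...
double = component(lambda x: x * 2)
stringify = component(lambda x: f"value={x}")

# ...wired together externally; the wiring, not the components,
# owns the control flow.
q1, q2, q3 = deque([1, 2, 3]), deque(), deque()
double(q1, q2)
stringify(q2, q3)
print(list(q3))   # ['value=2', 'value=4', 'value=6']
```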

Konrad Hinsen 2025-09-06 07:59:08

Paul Tarvydas The two challenges I see as daunting are (1) making this work efficiently on today's hardware (and its similar descendants that will dominate for years to come), and (2) convincing enough people to join the movement. I can't possibly write all the code I need myself, at least not for my professional needs.

Technically, the biggest obstacle I see is implementing a dataflow-like architecture for big datasets, meaning data the size of a typical machine's memory. I do a lot of that.

Paul Tarvydas 2025-09-06 13:19:25

Konrad Hinsen

  1. Issue #1 is a Present of Programming issue rather than a Future of Programming issue. Certainly, my focus is on programming workflow, and I am probably missing nuances pertinent to your problem domain.
  2. Interesting point. I see this as a "simple" FLI issue and/or something that AI could repeatedly solve and cookie-cut for me. Maybe that's too facile. You point out that the issue isn't so much a technical one as a marketing one. Hmmm. More thought required.
  3. Optimization and speeding things up brings complexity baggage. The trade-off is between "obvious" understandability of architecture and "fast enough for your purposes". When we worked on Y2K, we didn't mind weekend-long runs to process a bank's subsystem. That was "fast enough for our purposes". That was like 30 years ago (mid-1990s). I keep being astounded at how fast the machine on my office desk is. I can run a REPL by spawning multiple UNIX processes (language in one process, GUI in another, joined by websockets - see the sketch below), instead of needing the REPL to be built into the language itself. Theoretically, I could afford to buy/use many such machines networked together. Hmmm.
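(A minimal editorial sketch of that process-split REPL: the evaluator lives in one OS process and the front end in another. Paul mentions websockets; plain TCP sockets are used here to keep the sketch free of third-party dependencies, and all names and the port number are made up.)

```python
# Evaluator in one OS process, front end in another, joined by a
# socket. Toy illustration: no error handling or sandboxing.
import socket
import subprocess
import sys
import time

HOST, PORT = "127.0.0.1", 5111          # illustrative address

if len(sys.argv) > 1 and sys.argv[1] == "evaluator":
    # Evaluator process: receive expressions, send back results.
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            for line in conn.makefile():
                result = repr(eval(line))   # toy evaluator only
                conn.sendall((result + "\n").encode())
else:
    # Front-end process: spawn the evaluator, then talk to it.
    subprocess.Popen([sys.executable, __file__, "evaluator"])
    time.sleep(0.5)                         # crude startup wait
    with socket.create_connection((HOST, PORT)) as conn:
        replies = conn.makefile()
        for expr in ["1 + 2", "sum(range(10))"]:
            conn.sendall((expr + "\n").encode())
            print(expr, "=>", replies.readline().strip())
```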
Konrad Hinsen 2025-09-07 08:00:06

Paul Tarvydas

  • It's a present-and-near-future issue because hardware architectures don't change overnight. Software and hardware architectures have to evolve together because any mismatch between the two leads to either performance loss or software complexity.
  • Yes, it's a social issue. Call it marketing if you want, but for me, marketing has acquired such a negative connotation that I avoid using the term altogether. And again, habits won't change overnight, so this really is a planning-for-a-transition issue that addresses all aspects: social, hardware, and software.
  • "Fast enough for your purposes" works when there is a clear meaning of "fast enough". Often it's human time spans, such as tolerable delays in interactions. But for other applications, such as much of scientific computing, there is no "fast enough". People aim for the maximum speed that the hardware can deliver, even at the cost of messy software. Worse, they even accept faulty software as long as the mistakes aren't glaringly obvious. In theory, everyone agrees that correct matters more than fast. But fast is easy to measure, whereas correct is a very abstract notion for many applications, and easily gets interpreted as "not obviously wrong".