A large codebase is a realm ruled by the iron fist of legalism. In addition, it is a patchwork of many different jurisdictions, each with their own overlapping but subtly different laws. And it's incredibly easy to spawn a new jurisdiction!
It is easy because the terrain is not just vast but multidimensional. And instead of tending to existing areas, people love to conquer new terrain.
To a programmer, sure, a small, intimately familiar codebase offers much escape from legalism, by making it realistically achievable to change the rules.
But it's also an interesting question what we can do for "end users", or even for other programmers who are reluctant to invest in learning our piece of the terrain.
Do you think for a given user task, building it with less legalism for users correlates with smaller codebases? 🤔
[The question is unfair, because achieving the same "functionality" with less legalism is actually very valuable and should count as "more functionality"... Very hard to quantify.]
The "conquer new terrain" metaphor is apt. People don't go off merely because we're too selfish to "tend to existing areas"; to some degree we enjoy it because it lets us make our own jurisdiction! Whereas putting up with other people's rules (and being careful not to break their use cases) is genuinely hard.
=> Conjecture: any techniques for building less legalistic software might benefit programming itself by reducing NIH tendencies and encouraging positive-sum code reuse??
Proprietary programs and websites tend towards power for the authors, laws for users. Open source programs tend towards power for insiders (who can modify them), laws for outsiders. I want my small, open source Freewheeling programs to provide power for people, laws for the computer.
... (yet again, sorry) my deep belief is: insiders and authors (and researchers and assembler programmers) think that "programming" MUST consist of step-wise sequencing of electronic machines, whereas people don't want to know about programming and think in terms of free will (aka true asynchronicity, not step-wise simultaneity). Any UX that presents tools that need to be sequenced in a step-wise (synchronous) manner will not be understood by people, and will remain in the domain of the ivory towerists. The trains will run on time, but the tools will not be appreciated by the majority ...
Isn't lots of asynchronous orchestration also too complex for people? Orchestration is just inherently difficult to model in one's head.
Conversely, it's the most natural thing in the world for my kids to say first do this, then do that. Lots of people imagine moments in time advancing synchronously everywhere at once.
So there's a kernel of something here, but I don't think it's quite fully baked yet. Sync vs async is too blunt to be all of the answer.
Isn't lots of asynchronous orchestration also too complex for people?
Lots yes. Some no.
Let's take away the big words and observe what remains.
People - Western, English speaking people - draw diagrams on whiteboards and flip charts. The diagrams read from left to right, top to bottom. Usually the diagrams consist of boxes with some words on them, and arrows with some words on them.
Orchestration is just inherently difficult to model in one's head.
For programmers, yes; for people, not so much. As long as the result is not "too busy", i.e. diagrams cannot contain too much detail. If they want to express more detail, they flip to the next blank page and draw another not-so-busy diagram. And so on. Drawing everything on a single diagram is anathema. Good powerpoint slide decks are like that. One point per slide, advance to next slide if more detail is required.
Conversely, it's the most natural thing in the world for my kids to say first do this, then do that. Lots of people imagine moments in time advancing synchronously everywhere at once.
Yes. And your kids have no problem saying "while the potatoes cook, cut up the carrots". They don't invoke monads or futures or awaits to say this, they just say it. When they draw it on a whiteboard, it's pretty clear - one box branches out to two boxes. Left to right. Then, the two branches join back together into a single box. No rules. The person drawing the branches gets to say when the branches join. The toolmaker doesn't get to dictate. The toolmaker provides a recursive canvas (a flip-chart, or an erasable whiteboard) and provides the dry-erase markers. The person doing the drawing uses the tools to say what they mean.
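The potatoes-and-carrots drawing maps directly onto fork-join concurrency, and the person drawing still decides where the join goes. A minimal Python sketch of that picture (the function names and timings are illustrative, not from any existing tool):

```python
# Fork-join: one box branches out to two boxes, then the branches
# join back together. The person writing this code says when the
# join happens; the toolkit just supplies the canvas.
from concurrent.futures import ThreadPoolExecutor
import time

def cook_potatoes():
    time.sleep(0.2)           # the long-running branch
    return "potatoes done"

def cut_carrots():
    time.sleep(0.05)          # the quick branch, running meanwhile
    return "carrots cut"

with ThreadPoolExecutor() as pool:
    potatoes = pool.submit(cook_potatoes)   # branch out...
    carrots = pool.submit(cut_carrots)      # ...to two boxes
    # ...and this line is where the drawer chose to join them:
    meal = [potatoes.result(), carrots.result()]

print(meal)  # ['potatoes done', 'carrots cut']
```

No monads, futures-as-theory, or awaits in the mental model: two branches, one join, left to right.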
Not everyone expresses what they mean in a good way. The people who do this well are promoted to "Architect" status, the rest don't get promoted.
So there's a kernel of something here, but I don't think it's quite fully baked yet. Sync vs async is too blunt to be all of the answer.
In a way, yes, in a way, no. My feeling is that this is quite simple and doesn't need to be complicated further. Encapsulate units, make the units be totally independent. Function-based programming languages do only half of the job. It is trivial to do the rest of the job. Preserve time-ordering and left-to-rightness and top-to-bottomness. Allow composition of such units (aka LEGO-ification). Current programming languages lean solely on the LIFO meme (the stack). Simply adding a FIFO meme (a queue) can break out of the step-wise (synchronous) paradigm. This is nothing new - networking protocols already do this kind of thing. I think that we can push network protocol-ization down to the programming level. Easily.
Don't use one at the exclusion of the other. Use both. LIFO and FIFO. CALL/RETURN and SEND.
LIFO is good for expressing the innards of components, FIFO is good for expressing inter-component communication.
Input and output queues are good for preserving time-ordering of data arrival and data generation.
"Programming" is the whole enchilada. Innards and inter-component communication.
The question of "why not use both?" makes sense to me. But I don't think things will be any better if we start using both. I think architects will still continue to make the same sorts of messes they do today. I think you're comparing apples and oranges, the current messy state of the status quo vs the idealized pristine state of your idea. But people will start making a mess with it one microsecond after they adopt it.