Paul Tarvydas 2024-07-05 14:36:09 pondering aloud:
I wonder if the problem with VPLs is the word "language".
It appears to me that the word "programming" has been generally accepted to mean "sequential language" or writing sequential code (aka "coding"). I find this view too restrictive. Programming is more than just commanding a machine with sequentialistic instructions. Programming a CPU, though, is by definition sequentialistic. But programming a machine (or machines) need not be sequentialistic, especially in the age of nothing-is-central. In fact, LLMs are an example of non-sequentialism. The machines that run LLMs were programmed, arduously, in sequential notation, but the inner success of LLMs is not sequential; it is something else (massively parallel plinko?).
VPLs and DPLs are, to me, not sequentialistic things. Maybe they should be called "notations" instead of "languages"? VNP and DNP: Visual Notation for Programming, Diagrammatic Notation for Programming? [In which case, "programming languages" as we know them are TNPs: Textual Notations for Programming.]
In fact, programming is not the difficult part. Re-programming is the novel aspect of design that computers bring to the world. We have been programming machines to do single things for centuries (using metal lathes, etc.). This time through, though, we have built machines capable of doing many things.
Guyren Howe 2024-07-15 21:24:40 That folks synonymise programming with programming languages is hugely distorting.
The most successful non-programmer programming systems are Excel and end-user databases like FileMaker and Access.
Excel has declarative formulas, but much of the programming is in the structure of the spreadsheets.
A relational database with a strong user-modifiable UI is a great way of solving problems that would otherwise need a programmer. Our whole industry has almost ignored the Relational Model because it has been synonymised with the execrable SQL.
I am working toward a system that resembles FileMaker but that brings in any and all data that I can into the same UI. Imagine FileMaker but there are relations representing your calendar, your filesystem, your email and social media feeds, a vast array of online data sources and services, all in one UI and freely joinable etc.
This is the way to enable non-programmers to solve their computing problems.
I would throw in regular programming also, but a relational query interface over everything just solves so many problems.
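A minimal sketch of this idea using Python's built-in sqlite3: expose heterogeneous personal data (here, a filesystem listing and a calendar) as relations in one store and join them freely. All table names, columns, and rows are invented for illustration; a real system would of course ingest live data sources.

```python
import sqlite3

# Two "relations" standing in for real data sources (names invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (path TEXT, modified TEXT)")
conn.execute("CREATE TABLE events (title TEXT, day TEXT, attachment TEXT)")
conn.executemany("INSERT INTO files VALUES (?, ?)", [
    ("/docs/budget.xlsx", "2024-07-01"),
    ("/docs/notes.md", "2024-07-02"),
])
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("Review budget", "2024-07-03", "/docs/budget.xlsx"),
])

# Freely joinable: which calendar events reference a file on disk?
rows = conn.execute("""
    SELECT e.title, f.path
    FROM events e JOIN files f ON e.attachment = f.path
""").fetchall()
```

The point is not the SQL itself but that once everything is a relation, cross-domain questions become one-line joins instead of bespoke glue code.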
Ivan Reese 2024-07-18 05:11:27 This masto thread should resonate with folks here. Teaser:
fediverse is the kind of place where I can ask a question like "let's say we're designing an operating system from scratch. Clean slate. Let's throw away all our old habits and legacy decisions. What's the minimal set of applications we need to make a new operating system useful"
and the top replies are a VT100 emulator, a virtual machine to run other operating systems, and a C compiler to port vim
like y'all are missing the point of the question!
Dany 2024-07-18 08:47:24 Reminds me of a similar question put to some of the top economists: "if we could redo our tax system from scratch, how would that look?"
They all basically said it's good the way it is and doesn't need change.
I think there are several forces at play:
"You don't know enough, so you feel like all this complexity is unnecessary"
"You know all the details and lost the big picture",
"You have stakes in the game"
"You resist change"
"Something out of the ordinary does not get funded"
"The idea, actually just adds to the complexity"
"There is no transition from old to new"
"The new idea is just bad"
Stefan Lesser 2024-07-18 09:10:32 If you look at a complex system that has evolved out of thousands (millions?) of little decisions over a long period of time, and then ask to redesign such a system from scratch, it's just overwhelming.
There are no good answers to the question, because all the answers we have found in the old system came out of this long grinding process. To answer the question, we'd have to go on another long journey of making thousands of little decisions one by one all over again.
I bet if we tried, we'd end up somewhere completely different this time. But it's hard to convince people to do it all over again. Seems so inefficient.
But if you see it (oversimplified) as a binary decision tree of 1000 decisions made over time, it would be rather spectacular if we nailed each of those 1000 decisions the first time such that we would take exactly the same path again.
Shalabh 2024-07-18 19:34:21 The path depends on the ongoing context as well. What specific hardware tech is available and feasible. What ideas click and get funding, or get picked up by a business that happens to take off due to market factors.
Konrad Hinsen 2024-07-19 08:39:37 If you look at complex social systems such as tax rules, big changes happen only after some major breakdown: after wars, revolutions, etc. Inversely, accumulated unsolved problems are the cause of such major breakdowns. The interesting fundamental question thus is whether complex systems can undergo major change in an evolutionary rather than disruptive way. I don't know the answer.
In a well-delimited technology context, starting from scratch is a realistic approach in a research setting, but not in real-life applications. Research projects of this kind can then influence the evolution of mainstream tech. That looks like a reasonable way to evolve rather than disrupt working systems.
An underappreciated concept in this space is the narrow waist (oilshell.org/cross-ref.html?tag=narrow-waist#narrow-waist): a system layer that permits independent evolution both above and below it.
Dany 2024-07-19 09:31:59 Isn't the "narrow waist" part of the problem and not the solution? One could argue that the CPU <-> GPU divide is a narrow waist. Both are developed independently, but at this point we're really putting a smaller supercomputer inside a bigger computer.
The communication between the two is a big source of complexity in today's computing.
Stefan Lesser 2024-07-19 10:00:37 I have also been struggling with the "narrow waist" concept. I'm not entirely sure how to properly distinguish it from, or how it interacts with, layer architectures (which seem to be the primary example) and the separation between interface and implementation.
There is something important all of these point at, which has to do with stabilizing one part while allowing another to change. But the distinction isn't as clear as they all make it look.
The description I like most so far is the one Herbert Simon gives in ~The Architecture of Complexity~, where he talks about "nearly decomposable systems". That seems to best capture the not-quite-clear boundary between what can change and what is stable.
And I have a suspicion that there is no clear boundary that can be drawn, and that the subtle interactions between different scales, which make Simon call it ~nearly~ decomposable, are actually not a bug but a feature.
Paul Tarvydas 2024-07-19 11:37:10
> stabilizing one part while allowing another to change
Question: is the /bin/sh pipe operator | a (restricted) form of narrow waist?
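One way to read the question: the pipe fixes a single interchange format (an untyped byte stream, usually lines of text), so any producer composes with any consumer; M producers and N consumers need only M + N programs, not M × N pairwise integrations. A minimal sketch, assuming a POSIX /bin/sh is available:

```python
import subprocess

# The pipe's "waist" is a byte stream: any producer works with any
# consumer because both agree only on that one narrow interface.
producer = "printf 'cherry\\napple\\nbanana\\n'"
consumer = "sort"

out = subprocess.run(
    f"{producer} | {consumer}",      # sh composes the two via |
    shell=True, capture_output=True, text=True,
).stdout
```

Here `printf` knows nothing about `sort` and vice versa; the byte-stream waist is what makes them interoperable.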
Shalabh 2024-07-19 17:28:48
> In a well-delimited technology context, starting from scratch is a realistic approach in a research setting, but not in real-life applications. Research projects of this kind can then influence the evolution of mainstream tech. That looks like a reasonable way to evolve rather than disrupt working systems.
Indeed. Any FoC-style work or ideation we do should completely ignore the status quo for this reason. I feel that if you think about product-market fit you've already lost. Depends on your goals of course; I'm talking about research.
Konrad Hinsen 2024-07-19 19:13:21 @Dany I wouldn't call CPU/GPU a narrow waist. CPU and GPU are not different technology layers. CPUs are not implemented in terms of GPUs, nor the other way round.
Konrad Hinsen 2024-07-19 19:15:30 Stefan Lesser I agree that Simon's perspective is still a very useful one, though rarely adopted in practice. But I see this as distinct from the narrow waist context, which is about layers, not interacting modules.
Shalabh 2024-07-19 20:06:59 A good example of a narrow waist is in the internet architecture. TCP/IP is the waist with heterogeneity and evolution both above and below it.
I believe the concept of a file is also a narrow waist. Not only do you have different filesystems underneath, you can also map it to different mediums (SSD, HDD, optical, remote). Of course there is the entire world of applications above it.
Kartik Agaram 2024-07-19 20:51:33 Ok, I'm gonna do one more comment on the subject of how come I don't remember all this from ~The Architecture of Complexity~. Hopefully it's not too off-topic.
In two pages, from page 8 to page 10, the paper makes the statement that I think Stefan was pointing at above:
- When subsystems are out of equilibrium, you can ignore macroscopic interactions between subsystems. They're in the noise compared to the churn going on within each subsystem.
- When subsystems are in equilibrium, cross-subsystem interactions dominate. You can even summarize each subsystem with a few gross aggregate metrics.
This duality is really interesting! It connects up with Christopher Alexander's A City is not a Tree (patternlanguage.com/archive/cityisnotatree.html)
However, in the rest of the (17-page) paper Simon focuses exclusively on the second bullet. The result is to belabor something we moderns at least get told all the time: to manage complexity, divide and conquer. This is why I totally missed the gold.
I think I'm saying I appreciate Stefan Lesser for highlighting this point almost more than I appreciate Herb Simon
Perhaps an alternative explanation is that Herb Simon had several thoughts on the subject and put them into a single paper. If so, a unitary introduction and conclusion feels counter-productive. This is a series of short stories, not a novella.
Stefan Lesser 2024-07-19 21:12:15 Shalabh Chaturvedi That's the example I had in mind. So, ok, if that's a narrow waist, then is, say, LLVM also a narrow waist (between compiler front- and backends), or is that something else?
Shalabh 2024-07-19 21:18:49 Layering is essential in narrow waists, but not vice versa. You can have layering without large ecosystems above and below a specific narrow layer. I create systems with layers, but there are no ecosystems above and below them; it's just a cylinder shape, not an hourglass.
Shalabh 2024-07-19 21:21:34 From the original link above:
> The narrow waist (of an hourglass) is a software concept that solves an interoperability problem, avoiding an O(M × N) explosion.
So LLVM certainly fits that definition. Without LLVM you'd have M × N, specifically (C, C++, Rust, Haskell, ...) × (x86, ARM, MIPS, ...)
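The hourglass arithmetic can be made concrete with a toy model (all names and the "IR" below are invented for illustration, not how LLVM actually works): M front ends each lower to one shared intermediate representation, N back ends each consume it, so M + N translators cover all M × N pairings.

```python
# Toy hourglass: two front ends lower to a tiny shared "IR"
# (a list of stack-machine ops); two back ends consume the IR.

def frontend_infix(src):           # parses e.g. "1 + 2"
    a, op, b = src.split()
    return [("push", int(a)), ("push", int(b)), (op,)]

def frontend_prefix(src):          # parses e.g. "+ 1 2"
    op, a, b = src.split()
    return [("push", int(a)), ("push", int(b)), (op,)]

def backend_eval(ir):              # "executes" the IR on a stack
    stack = []
    for op, *args in ir:
        if op == "push":
            stack.append(args[0])
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[0]

def backend_text(ir):              # "emits" the IR as assembly-ish text
    return "; ".join(" ".join(map(str, instr)) for instr in ir)

# Any front end pairs with any back end through the shared IR.
result = backend_eval(frontend_infix("1 + 2"))
```

Adding a third front end or back end means writing one new translator, not reworking every existing pair; that is the O(M × N) to O(M + N) collapse the oilshell post describes.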
Stefan Lesser 2024-07-19 21:28:27 Shalabh Chaturvedi Thanks, that makes sense. Now I'm wondering: what would be a good example of a layer architecture that doesn't have a narrow waist?
Shalabh 2024-07-19 21:39:13 Layers are essential to a narrow waist, but not vice versa. I create systems all the time with layers but no special narrow layer that has diversity above and below. I might have a web app handling requests above and a db access layer below, with an actual db underneath; it's like a cylinder. There are no ecosystems involved. You could say libraries like the SQLAlchemy ORM try to be a narrow waist, but they have not been as successful. The leaky abstraction principle also applies: some waists work better in their context than others.
Shalabh 2024-07-19 21:42:47 The downside of well-established narrow waists is that you can't move the waist, because it is so deeply entrenched in systems and minds. So with TCP/IP, you have all the different physical protocols designed and optimized to cater to its specific features. Same with LLVM backends. Similar to "and then our tools shape us": language designers comfortable with LLVM may end up designing languages that can be easily mapped to it.
Kartik Agaram 2024-07-19 21:42:52 Sounds like layers can have two purposes:
- 'vertically' to separate concerns
- 'horizontally' as a narrow waist
Stefan Lesser 2024-07-19 21:50:38 Kartik Agaram Thanks. I found the room temperature example useful to illustrate this.
Simon also points out in that paper that it's not quite clear whether we see hierarchies everywhere because they are everywhere, or just because we are adapted to see hierarchies. That's also just a sentence, or a paragraph at most, but in my view one of the most thoughtful observations in there.
I've mentioned Alicia Juarrero somewhere here before, who wrote two books that take this idea of interactions across subsystems on different hierarchy levels much further, with a comprehensive theory about constraints. In case you were looking for another rabbit hole... :)
Shalabh 2024-07-19 21:55:20 Kartik - I didn't get the vertical vs horizontal - can you elaborate / give an example of vertical?
Kartik Agaram 2024-07-19 22:01:50 Shalabh Chaturvedi I was imagining the layers stacked one on another. So as you move vertically you cross layer boundaries.
Imagine you have a system. As you separate concerns it's often natural to have one concern treat another as a black box. Caller vs callee. You're basically creating a layer boundary here. So this is what I think of as 'vertical'.
But another reason to create a layer doesn't start from a system at all. Instead you have a bunch of systems of two kinds that want to talk across a requester/provider boundary. I imagine these alternatives lined up horizontally in two lines, one above the other (but here 'vertical' doesn't mean caller-callee, hehe; it's symmetric, either side can initiate a connection). In this case you form a new system, "fully formed from the brow of Zeus" as it were, to intermediate the two sides. This is the thin waist. Before it existed there was cacophony. After it exists you suddenly find yourself in a layered architecture.
Shalabh 2024-07-19 22:21:31 yeah makes sense. "interoperability standards" often seem synonymous with narrow waists. it's not about a single system but an ecosystem.
Stefan Lesser 2024-07-19 22:25:07 Konrad Hinsen About Simon's approach: what do you mean by "rarely adopted"? Simon was trying to describe and explain complex systems. I'm not sure there was anything to adopt?
I think his observations do apply to layer architectures. First, in the form of leaky abstractions. But even if the interfaces are well specified and achieve good separation, which is incredibly difficult to achieve, there are still subtle effects that can be ignored most of the time but then sometimes do shine through: like, I don't know, packet size constraints on the IP layer causing performance issues on the HTTP layer, or something like that.
Konrad Hinsen 2024-07-20 16:00:25 Stefan Lesser I meant rarely adopted when designing software systems. Simon's discussion is about both natural (evolved) systems and about human-made artifacts (and he says it's for the same reason of economy in construction). His artifact example, a watch, ends up made from nearly decomposable subsystems not through insight into complex systems, but because watchmakers are clever people and end up designing watches in a way that is easier for them to build. In software, I don't see this happening. On the contrary, it is very difficult to achieve such a design, because our toolboxes are set up for strong coupling of submodules via shared dependencies.
Kartik Agaram 2024-07-18 15:46:25 We need visual programming. No, not like that.
Let's observe what developers ~do~, not what they ~say~.
Developers do spend the time to visualize aspects of their code, but rarely the logic itself. They visualize other aspects of their software that are
> important, implicit, and hard to understand. Here are some visualizations that I encounter often in serious contexts of use:
- Various ways to visualize the codebase overall.
- Diagrams that show how computers are connected in a network.
- Diagrams that show how data is laid out in memory.
- Transition diagrams for state machines.
- Swimlane diagrams for request/response protocols.
~This~ is the visual programming developers are asking for.
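Taking the state-machine item as an example: a transition table kept in ordinary code can mechanically emit Graphviz DOT text for visualization, which is one low-effort way such diagrams stay in sync with the code. A sketch (the state names and `to_dot` helper are invented for illustration):

```python
# A transition table kept in ordinary code; names are illustrative.
TRANSITIONS = {
    ("idle", "connect"): "connecting",
    ("connecting", "ack"): "established",
    ("established", "close"): "idle",
}

def to_dot(transitions, name="fsm"):
    """Render a transition table as Graphviz DOT source text."""
    lines = [f"digraph {name} {{"]
    for (src, event), dst in transitions.items():
        lines.append(f'  {src} -> {dst} [label="{event}"];')
    lines.append("}")
    return "\n".join(lines)

dot = to_dot(TRANSITIONS)
```

Feeding `dot` to Graphviz (or any DOT renderer) yields the transition diagram; because it is generated, it cannot drift from the table the program actually runs.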
Kartik Agaram 2024-07-18 15:47:09 Oh wait, this is of a piece with the previous post. I suppose it's also ok at the top level.
Kartik Agaram 2024-07-18 15:53:02 Now I'm wondering what a programming language looks like that makes it easy to create such visualizations and keep them updated over time.
My biases make me go first to Lisp, but in practice it's actually no easier to parse Lisp on a semantic level (e.g. detecting new variable scopes) than any other language.
Maybe Glamorous Toolkit? Konrad Hinsen Tudor Girba
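Kartik's point about Lisp can be illustrated concretely: even with Lisp's uniform syntax, an s-expression parser alone does not find scopes; you still need language-specific knowledge (here, hard-coded knowledge that `let` introduces bindings). A toy sketch:

```python
import re

def parse(src):
    """Parse one s-expression into nested Python lists."""
    tokens = re.findall(r"\(|\)|[^\s()]+", src)
    def walk(i):
        if tokens[i] == "(":
            node, i = [], i + 1
            while tokens[i] != ")":
                child, i = walk(i)
                node.append(child)
            return node, i + 1
        return tokens[i], i + 1
    return walk(0)[0]

def scopes(node, found=None):
    """Collect the variables bound by each (let ((v e) ...) ...) form."""
    if found is None:
        found = []
    if isinstance(node, list) and node:
        if node[0] == "let":
            found.append([binding[0] for binding in node[1]])
        for child in node:
            scopes(child, found)
    return found

tree = parse("(let ((x 1) (y 2)) (let ((z x)) (+ x y z)))")
```

The parser is trivial, but `scopes` must know which forms bind variables; extend it to `lambda`, `letrec`, macros that expand to binders, and so on, and the semantic work quickly dwarfs the parsing, which is the point.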
Tudor Girba 2024-07-18 15:59:25 The visualizations you are talking about are first and foremost for reading, not for writing. Reading is indeed the most costly and painful problem in software engineering today. As such, this is not a programming language issue but an environment issue. Moldable Development is a systematic method for doing exactly that. It turns out that reading needs are orthogonal to writing needs. Glamorous Toolkit is the most advanced and extensive environment that shows how far doing this systematically can get you. I do expect people will copy GT, and they very much should. My only worry is that they will not copy everything.
Kartik Agaram 2024-07-18 16:01:00 Thanks! Does GT currently support visualizations like call graphs (I'm sure it does), and automatically extracting visualizations like state machines, heat maps, and time sequence diagrams from code? For any programming language?
Tudor Girba 2024-07-18 16:35:36 The idea is to not restrict the specific visualizations, but rather to enable one to construct custom visualizations of arbitrary input data. It is many times more valuable and cheaper to build custom visualizations than generic ones. The pattern language and the components are reusable, but the specific visual representations are less interesting for reuse.
The question about programming languages is interesting. We can think of it as "are there parsers and semantic importers for language X". But we can also think of it as "how do we build a parser and an importer for language X". The former leads to a library of parsers (like the one built around ANTLR). The latter leads to a dedicated environment for building parsers and importers faster. In GT we show that it is possible to have both, but really the interesting one is how to adjust to a language that you might not know.
Konrad Hinsen 2024-07-19 08:28:37 That's indeed the main lesson learned (for me) from a few years in "moldable development" land. Support for situated development tools is much more valuable than generic development tools. The tools are better adapted to the specific task, and perhaps even more importantly, the user understands the situated tool perfectly well.
I suspect that all this remains true if you cross out "development" - it should apply to any software tool. My near-term goal is "moldable data science".
Ivan Reese 2024-07-18 15:50:07 As a follow-up to the above, there's this nice blog post from @Nikita Prokopov on diagrams as code: Where Should Visual Programming Go?
> Level 3: Diagrams are code
> This is what the endgame should be IMO. Some things are better represented as text. Some are best understood visually. We should mix and match what works best on a case-by-case basis. Don't try to visualize simple code. Don't try to write code where a diagram is better.
Hear, hear!
Kartik Agaram 2024-07-18 19:40:18
> Think of it as a game engine like Godot or Unity. In them, you can write normal text code, but you can also create and edit scenes. These scenes are stored in their own files, have specialized editors that know how to edit them, and have no code representation. Because why? The visual way ~in this particular case~ is better.
Joshua Horowitz 2024-07-19 05:47:53 "Level 3", when it's not code & diagram as universal parallel representations but domain-specific visual syntaxes (as @Nikita Prokopov says he wants), has been explored a number of times. Here is the relevant section from arxiv.org/pdf/2303.06777, with some hopefully helpful citations.
Tom Larkworthy 2024-07-19 10:23:06 Yeah, this is basically notebook programming. There are two levels to what you want to diagram, too: 1. the program specification (e.g. the source or configuration), and 2. the runtime state. Text source code is just a narrow view of 1. I think both 1 and 2 can be enhanced with diagrams. observablehq.com/plot is incredibly flexible for quite a huge range of visualizations (grammar of graphics). I use it for schematic-like diagrams, but obviously it also does more recognisable mathy charts too.
Joshua Horowitz 2024-07-19 15:41:07 Tom Larkworthy This isn't "basically notebook programming" if that's taken in the sense of notebook programming today, right? Notebooks today can't use graphics to define part of a program, only to visualize data. Here's another screenshot from the paper I linked above.
Joshua Horowitz 2024-07-19 15:42:11 I begin to wonder whether my paper is relevant to this discussion and could perhaps provide a clarifying framework.
Tom Larkworthy 2024-07-19 16:18:07 Notebooks are definitely capable of reading external data and making dynamic, executable decisions based on it. They also have a ton of affordances for representing data visually, and for implementing UIs inline. So together all those things seem like level 3 to me, but some assembly is required. The browser itself has local storage and the env has internet access, so I think persistence is a red herring if you consider the system as a whole.
Ivan Reese 2024-07-19 16:31:06 What @Nikita Prokopov is arguing for in level 3, and what Joshua Horowitz seems to be referring to, is the desire for such tools that don't require "some assembly". The amount of assembly needed to go from notebook to programming-with-diagrams is about the same as the amount needed to go from Smalltalk to programming-with-diagrams, or from the web platform to programming-with-diagrams. So while notebooks certainly are nice environments to work in, and while it would be nice if notebooks offered p-w-d out of the box, I don't think it's true that "this is basically notebook programming", unless it's also true that this is basically Smalltalk or basically JavaScript.
Joshua Horowitz 2024-07-19 20:15:35 Tom Larkworthy: Notebooks are great at displaying runtime state. You can also use them to render UIs, with which you can modify transient state. But the persistent code that makes up a notebook can only be edited through the notebook's code editor. If I want to edit a state chart graphically and have that be part of the code my notebook runs, I can't do that with Observable or Jupyter or whatever without very awkward workflows (like copy-pasting text from an Observable output cell into a code cell). Interactions with graphical things rendered in a notebook don't survive a browser refresh. So they do not pull off the @Nikita Prokopov level-3 thing.
Joshua Horowitz 2024-07-19 20:16:14 (There are some Jupyter extensions like mage that partially fix this.)
Joshua Horowitz 2024-07-19 20:18:46 I roughly agree with Ivan Reese's response, except that notebooks do have a slight advantage: if I'm ok editing a "diagrammatic program" through textual data, then I can do that in Observable, and also have Observable visualize the "diagrammatic program", live, in the place where I'm editing it. Doing that in a trad dev environment would make it harder to see that visualization.
My issue is that I don't want to edit a diagrammatic program through textual data; I wanna edit it through a diagram!
Joshua Horowitz 2024-07-19 20:25:25 Kartik Agaram That website is an interesting case, because there we're not talking about a program-authoring environment being live and/or rich; we're talking about a (one-off) program-reading environment being live and/or rich. I agree that it's kinda live! (Wish it had more fine-grained liveness tho; like, don't you want to see the relativeLuminance values, not just the final output?) As a test rig of the code, it's kinda rich, in that you're editing the test inputs with a direct-manipulation editor... but it feels weird to call that rich, cuz we're not editing the program in a rich way, just some test inputs. But I see where you're coming from.
Tom Larkworthy 2024-07-19 20:56:11 Observable views are a UI + a data channel, and they can be two-way bound to other UI components, composed hierarchically, or even bound to persistence [local view, shared view]. Diagrammatic editing is a little hard, and this conversation has made me try to see if Plot can be made editable (they do expose their grammar-of-graphics scales and an interactive pointer mark, which seems relevant).
Joshua Horowitz 2024-07-19 21:21:57 Tom Larkworthy The only part of your comment that is relevant to what I'm talking about is the part about local/shared persistence. I appreciate the effort you've put into getting around Observable's lack of support for this stuff! But those approaches are limited and awkward:
- If we're talking about using UI to edit your notebook's code, local persistence is a non-starter... the point of notebooks is to share code.
- The shared view has no authentication. So if you used it to author part of the notebook's code, anyone with access to the notebook would be able to modify its code. If you added authentication, you would then have two separate, parallel auth systems to keep synchronized: one for Observable and one for your sharing system.
- In both local and shared situations, you have to add IDs by hand whenever you want to persist something. If you clone a cell, it keeps the same ID, so it edits the same underlying (local or cloud) state. You gotta remember to change them or you'll clobber stuff. Same for cloning / forking notebooks.
In short, I think these approaches all get very awkward, arguably no less awkward than copy-pasting code into cells. I'd be curious to hear whether you've heard of them being used successfully in practice, in the face of these problems.
Personally, I'd like it if Observable could add persistent UI-editable state, so we wouldn't have to make up work-arounds. But I don't see that happening, given that the company is no longer focusing on the notebook product. Perhaps some other notebook product could get in the game, maybe something open-source?
Joshua Horowitz 2024-07-19 21:24:09 (As for interactivity in Observable Plot, which is beside any point I'm making but a good question: I know that lack of good interactivity is one of the main reasons my advisor hasn't switched his data vis course to use Observable Plot.)
Tom Larkworthy 2024-07-19 22:36:45 I have built several apps on Observable using these techniques.
The things you say are true, in that there are numerous annoying frictions, CORS being the worst one IMHO. But still, compared to setting up Jest on a TypeScript project, or installing NumPy, these are not deal breakers, and the reactive environment makes up for all that, with surplus IMHO. There is a bit of a complexity/performance ceiling, but I would say that is true of Jupyter and Excel too; it doesn't stop them being incredibly useful for internal micro-apps.
There are several open-source Observable-likes in existence already.
Building a good-enough IDE / visual env / ecosystem is basically too large a project to pull off. Even Observable hasn't really pulled it off! I don't think saving the state of the notebook is the problem.
Paul Tarvydas 2024-07-20 11:19:22 Level 3 is straightforward (and "easy") with currently-available technologies:
- use diagram editors that save diagrams in some kind of XML or JSON format (I use draw.io and Kinopio, and I've used yEd in the past), then use XML parsing libraries or OhmJS to inhale the info
- isolate software components from one another; make each unit of software completely stand-alone, in terms of both data and control flow [hint: closures, queues; avoid function-calling for inter-component communication (level-3 innovation is discouraged by function-calling-think; in fact, at one point, I used mostly Prolog for thinking, along with JavaScript, Bash, and Lisp for clean-up)]
- think of current GPLs - Haskell, Python, JavaScript, Rust, Lisp - as just assembly languages for level 3.
(aside: meaning of "easy": Zac Nowicki of Kagi created a draw.io+Odin-based Level 3 DPL for me in less than a week)
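The first step above can be sketched with the standard library alone: inhale a diagram saved as XML and recover the components (vertices) and connections (edges). The snippet below mimics, in simplified and invented form, the mxCell structure draw.io files use; a real file has more attributes and geometry.

```python
import xml.etree.ElementTree as ET

# A hand-written stand-in for a diagram-editor export (structure
# simplified from draw.io's mxGraphModel for illustration).
DIAGRAM = """
<mxGraphModel><root>
  <mxCell id="a" value="Reader" vertex="1"/>
  <mxCell id="b" value="Writer" vertex="1"/>
  <mxCell id="e1" edge="1" source="a" target="b"/>
</root></mxGraphModel>
"""

root = ET.fromstring(DIAGRAM)

# Components: cells marked as vertices, keyed by id.
parts = {c.get("id"): c.get("value")
         for c in root.iter("mxCell") if c.get("vertex") == "1"}

# Connections: cells marked as edges, as (source, target) pairs.
wires = [(c.get("source"), c.get("target"))
         for c in root.iter("mxCell") if c.get("edge") == "1"]
```

From `parts` and `wires` one can generate the component scaffolding and queue wiring Paul describes; the diagram editor becomes the front end and the general-purpose language the "assembly" target.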
Tom Larkworthy 2024-07-20 14:24:44 ok, interesting. Just a prototype, but if you click on the plot it adds a point there. The nice thing is that you can work in the coordinate space of the diagram, not pixels or the viewport. Maybe Plot can be an input after all.
observablehq.com/d/e627aaaaa9857257
Dennis Hansen 2024-07-21 20:29:58 FWIW I tend to think the ideal environment allows for all of them: code, dumb diagrams, code-backed diagrams, diagrams that are 'code', and everything outside and between. If these coexist in a shared environment, we have the best chance at mutating them and ideally converging on the most useful representations for differing situations. (It is also my personal opinion that diagrams-as-code will flourish in this kind of environment.)
Christopher Galtenberg 2024-07-20 19:50:21 Luci for dyeing (@zens@merveilles.town)
a huge formative experience happened when I was 16. I was brought into my mother's office and hired to compel a guy to use his computer, a guy who was refusing to use his computer and exclusively used his IBM Selectric.
First up, the guy was an unlikable jerk. However, the first thing he does when I get there is refuse to even talk to me about the situation until AFTER I read The Invisible Computer by Donald Norman.
It's a good book I cannot summarise in the 50 characters I have left in this post