Decode — a new tldraw-based augmented coding canvas tool from @Francois Laberge
I just scrolled down to Reactions and these twitter embeds are such a cool way of curating comments. Signed up for the waitlist, ofc
a paper from an architectural journal that combines an ethnography with a sculptural grammar ('situated computations').
this reminds me of how dynamicland held workshops and was a site for the explorable explanations community; also, explorations of weaving practices shared by @Alex McLean
I was happy to host a talk by her about wire bending as algorithmic art youtube.com/watch?v=yWC4p3XwMIo
...
Konrad Hinsen Thanks for the shout out! I am super happy with that paper, despite its flaws, because I finally managed to clearly and succinctly describe how the magic of Objective-S works, and how that leads to a flexible meta-programmable language that can directly express architecture(s).
This has been eluding me for years if not decades, because there’s just a lot of stuff that is also relevant, but not really core.
I hear dynamicland.org will update sometime today..
[moved from #thinking-together, original post by Cole Lawrence]
These archive pages are really cool dynamicland.org/archive/2015/Laser_cut_shelving
Re: project documentation — it seems clever that their approach to project archives was to just email each other, and then build a nice web interface for these emails. Makes me 1000% frustrated that (A) our community is still on Slack, and (B) that so many other great projects run on Slack/Discord. (Silver lining: I'm working on solving A.)
A few reactions.
The crux of the argument seems to be that it's hard to foster literacy in dynamic media (and therefore systems thinking at population scale?) with small individual-scale screens that tend to keep people learning separately. It'll be easier to accomplish if we can learn together, experiencing computational media together in the space around us.
It's not the thing, it's just the thing that will get us to the thing. I sure hope we get to the thing.
One cost of physical things: They take up space. This is something I'm acutely conscious of as I sit in my tiny home in San Francisco watching videos of huge rooms containing dozens of posters -- with no affordance for scrolling.
To the extent that the goal is to help cope with say climate change (I can't link to 6:10 in dynamicland.org/2024/Intro), this seems like a bet that low-density places will be more competitive in the next 100 years. Even as the habitable area on Earth shrinks.
To be fair, I do have tons of unused wall space high up. There's also likely lots of room for such collective spaces to interplay with more private screens.
It's unfortunate that I can't search through almost all of the site. Again, this may be ok. Their primary goal is for the site to be physical. The web version is second class, and that's a reasonable opinionated position to take.
One cost of physical things: They take up space.
I've been thinking about this a bit today. I've seen a few people bring this up.
I think a lot of the way that DL currently "uses space" is an incidental complexity — just like the cameras and projectors are an incidental complexity. It seems like they want dynamic objects, not necessarily big posters and binders and tables full of stuff. Like, I think it'd be perfectly reasonable to assume they'd like you to (eg) be able to buy any old pocket notebook, and for every page in that notebook to be augmented by itself, not needing any extra hardware.
They talk a bit about a change that took place when they set up the bio lab — they got a higher-res projector and upgraded their recognizers, and this let them make their tracking dots smaller, or do away with the dots entirely, so they could use new kinds of objects as dynamic objects. So I think it's safe to assume that they want that trend to continue. So in the long run, there's no difference between the space you already live in, and the objects in it… and the dynamic version of that space and objects.
I'm a physical minimalist but a digital packrat. The reason for the dichotomy: it seems much easier to not make messes in digital spaces. Physical spaces will be messy, particularly collective physical spaces. Doubly particularly, collective physical spaces with kids. So to me all this looks nice in the way a curated shot looks, but I wonder how much it'll hinder learning when you can't find all the space to lay out the next video.
(This is a different point from things looking handmade, which I don't have a problem with at all.)
it seems much easier to not make messes in digital spaces
I dunno - my computer desktop is always a mess 😅 Also, without intentional organising, second brains can be a mess (and often abandoned). I find the infiniteness of digital a burden sometimes, whereas physical things/spaces can be more human-scale and manageable. Marie Kondo taught me that being organised is more a mindset 😛
@Alex McLean it's part of the beauty of live coding. i think you told me that recently :)
Realtalk doesn't work well for one person in a small room in many ways, but community spaces do exist, and hold potential for radical change
You do have to get good at organising bits of paper with it, but there do still exist whole shops full of well developed tech for helping you do that
Wow the projects page only goes to 2016 - I guess it might be a while before the newest stuff is added
Pages after that exist, they're just stubs. Eg, this page about weaving, that I'm super duper interested in (and should probably ask @Alex McLean about) — dynamicland.org/archive/2024/Weaving
heh that's us playing with the fibonacci sequence, which Geraldine uses in her cycloid weaving forcesintranslation.org/cycloid-weaving-spirals-and-a-few-knots-a-link-to-an-online-introduction
Forces in Translation seems like a total jam. I'm going to enjoy sharing this with my partner (who helps run a weaving guild)
I'm enjoying this technical walkthrough of how the website works: dynamicland.org/archive/2023/Front_shelf
This particular activity really speaks to me: dynamicland.org/archive/2024/Archive_editing
Before covid, I had built a few "communal computing" experiences (interactive math and music) using a multi-touch monitor.
But projecting dynamic information on real-world objects is super fascinating!
This one is really pretty: dynamicland.org/archive/2018/RGB_Color_Book
The comment about open-sourcing points to a problem that I see happening in computational science: a growing divide between two factions with opposing views, one saying "all scientists need to learn software development and its tools", and the other one saying "let us do our science and give us tools that are actually appropriate for that".
Needless to say, it's the first tribe that develops all the software, while the second tribe either stays away from computing, or runs code without understanding what it does.
Konrad Hinsen 💯 And this is why LLMs are so promising. They can help make machines more human-like instead of forcing humans to speak the machine's languages.
@Nilesh Trivedi Time will tell... Science requires high-precision communication, between humans and with machines. I wouldn't trust today's LLMs to be reliable enough. But it's still early days.
The best part of dynamicland.org/2024/Is_Realtalk_open_source is pointing out programs may not have source code, something I still don't understand.
The worst part is insisting collaborators be physically in the same place. I hope it's building up to a point where I can interact intensely with dynamic objects in your office and vice versa.
I particularly like the offhand suggestion that "community of practice" = "teachers teaching teachers".
Tak Tran going back to our side thread on messes:
Which always leads me to wonder why the Desktop is the only folder that remembers locations.
This got lost in the transition from OS 9 to OS X. In the classic era, Finder would remember the window position and display settings for every folder, in particular the arrangement of icons (for whenever you switched to icon view). If you put something somewhere, that's where it'd be henceforth.
OS X kinda does this sometimes. It's just a tendency, not a promise.
Never forget what they took from us!
I think what I'm saying is the Marie Kondo approach is a kind of hairshirt you don't need in digital spaces. Just become a hoarder, the water is fine! No, don't drink that water near the door..
Where should I put this idea about Hest?
Minimize your dependencies, Luke. Not just dependencies for your software. Life dependencies.
Here's the topology of my major[1] backup automations. Everything funnels down to one of three places for the most part. Mostly I just search in one place in my laptop.
But I hear what you're saying. Everybody isn't wired to do this. Even digital spaces will struggle when they host collectives. I give up something by being intensely alone with my data, by prioritizing the ability to export over the ability to work with others.
[1] Doesn't show minor details like the periodic manually triggered FoC backups that funnel into X13.
Ivan Reese BTW I'm still thinking about your thread about "what it means for computation to be a property of an object". It's been mysterious to me as well. Bret Victor is often pointing at whiteboards or posterboards or now 3-ring binders that are supposedly the physical embodiment of Realtalk. But I don't get it. How much work is it to scan the code from the binders to wherever it runs on? What makes the server less of a locus of the computation than the binder?
dynamicland.org/archive/2022/Realtalk_binders shows 3-ring binders where each icon is showing just the first few lines of code. I gather you go back to the documents tied with clips on the whiteboard for some of it. So it's clearly not all in the binder. How is it in the object?
One bit of good news:
I hope it's building up to a point where I can interact intensely with dynamic objects in your office and vice versa.
dynamicland.org/archive/2022/Realtalk_binders does talk about "multiple sites", so there's some signs of this at least at the lowest levels of abstraction. Now if I could understand the flow of data/code in this project.. so far it seems like "this lives in the object" is a distracting side-debate.
Here's my understanding of what's going on with the Realtalk source (based on scattered readings over the past day).
At one point (2019-ish), Realtalk was this giant "poster" made up of sheets of paper with fiducials (the dots), each page representing an object (in memory) that could perform some computation. So a few of these sheets of paper, working together, might enable the "identify fiducials" behaviour, by communicating with each other, each of them doing a tiny bit of computation. Some of them might represent some piece of supporting tech, like a sensor (eg camera) or an actuator (eg projector), and act like an API to that thing. One of the things you could do in this setup was point a keyboard object (with its own fiducials) at one of these sheets of paper, and edit the wishes and claims that constitute the computation that that sheet of paper possesses. That means if there's actual """code""" printed on that piece of paper, the code is now out of sync with the real "object" (in memory), but that's fine — you could just wish for a new version of that object to be printed as code on paper, take down the old one, and put up the new one, and now they're back in sync.
Then, in 2020, after the pandemic, they needed a way to take Realtalk home with them. So instead of doing all this on a poster, they condensed it down to a binder. Crucially, the pieces of paper don't always need to be visible at all times — the paper (or any object) is just a physical representation of its own behaviour. So you can pull out the paper and let a camera see it if you want to work on that object's behaviour, but when you're not working on it, you can tuck it into a binder. The object still exists. (I think there's some nuance here, since some objects do need to be visible in order for their computation to run, but I think that's, like, a choice of trigger, to put it in eToys speak).
Now, with Realtalk 2024, they're trying to get further away from the "textual code on pieces of paper with fiducials" bits, by making it so that you don't need (A) the fiducials to track objects, (B) the pieces of paper — bits of behaviour can be manipulated readily by other kinds of representation, eg drawings — and (C) the textual code, or at least not the sort of textual code they've been using thus far, which is basically some existing language (python, JS, C, etc) with a DSL for wish/claim.
Right now, the "server" is a major locus. But I believe they want that to disappear from concern. For instance, electricity generation is a major locus, but I don't think about the design details of my regional power plant(s) and electrical distribution network every time I plug something in to an outlet.
I think Ivan has it mostly correct, except the binder is actually a RealTalk object that contains the other objects within it (that implement the pages). Bret doesn't show it in the making-of video for the binder, but I believe there is some process of "moving" an object from a page into a "binder collection object"
If you remove the binder (from the system's visibility), then nothing runs
(well except for the core reactive loop that implements the wishes/claims system)
I think the core model harkens back to Smalltalk - objects all the way down (ie everything you see in DL is modeled by a RealTalk object). What they managed to do with Realtalk-2020 and 2024 was to reduce the core kernel down as minimally as possible (now even the hardware interface code in C - ie for cameras and projectors/opengl - is written as RealTalk objects)
In the 2022 update Bret mentioned that the interface to objects is still Realtalk statements (claims, wishes and when's) but the goop in the middle can be Lua, python or C (to interface with hardware/the underlying Linux system)
The folk system from Omar and Andre is pretty similar in idea, so what I wrote above is a combo of how I know they do it in Folk plus a reading of the DL website with that understanding
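To make that concrete, here's a toy sketch of that loop in Python. To be clear, this is not actual Realtalk (or Folk) code; every name in it (claim, wish, when, run_frame, the example pages) is made up for illustration. It's just the shape I have in my head: pages re-state their claims/wishes every frame, and rules fire off whatever is currently visible.
```python
# Toy model of claim / wish / when, NOT real Realtalk or Folk code.
# All names here are invented for illustration.

claims, wishes, rules = set(), set(), []

def claim(fact):
    claims.add(fact)

def wish(fact):
    wishes.add(fact)

def when(condition, effect):
    rules.append((condition, effect))

def run_frame(visible_pages):
    """One pass of the reactive loop: clear the statements, let every
    visible page re-make its claims/wishes/whens, then fire rules until
    nothing new is said. (The real kernel is incremental; this brute-force
    fixpoint is only meant to show the shape.)"""
    claims.clear(); wishes.clear(); rules.clear()
    for page in visible_pages:
        page()
    changed = True
    while changed:
        before = len(claims) + len(wishes)
        for condition, effect in rules:
            if condition():
                effect()
        changed = (len(claims) + len(wishes)) > before

# A "page" standing in for a physical object the camera can see.
def red_page():
    claim(("red page", "is", "visible"))
    wish(("red page", "is highlighted", "red"))

# A page standing in for the projector plumbing: when anything wishes to
# be highlighted, claim that the projector should draw it.
def projector_page():
    def condition():
        return any(f[1] == "is highlighted" for f in wishes)
    def effect():
        for obj, _, color in [f for f in wishes if f[1] == "is highlighted"]:
            claim(("projector", "draws", obj, color))
    when(condition, effect)

run_frame([red_page, projector_page])
print(claims)
# includes ('projector', 'draws', 'red page', 'red')
```
Drop red_page from visible_pages and on the next frame its statements are simply gone, which is the "remove the binder and nothing runs" behaviour from above.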
If you want to see a model of what programming could look like without 'writing' code, take a look at this experiment Omar describes they did with Clause Cards - omar.website/clause-cards
Regarding Kartik's comment - 💬 #linking-together@2024-09-04, I don't think the objection is to small screens, but to the way we do programming today (ie in dead languages with an edit/compile cycle). The analogy I like to keep in mind is the difficulty of doing arithmetic with Roman numerals
From this I get that representations matter when it comes to legibility/literacy, and as computers are a meta-medium that can represent/model anything - why not use them to model languages/approaches that have a lot of bang for your buck
Also I think people confuse what they have now with what they expect to see in the future - ie the current setup with cameras/projectors is just simulating a future where every bit of matter can be imbued with computation
re: clause cards (which was a group project, fyi)
dynamicland.org/archive/2019/Clause_cards
dynamicland.org/archive/2019/Clause_card_music_sequencer
re: working remotely
dynamicland.org/archive/2021/Supporter_maps
dynamicland.org/archive/2022/Distant_Page
re: “it seems much easier to not make messes in digital spaces”
A basic hypothesis of Dynamicland is that we’ve collectively made a big, awful mess in our shared digital space. It’s so easy to sweep complexity under the rug and pretend it doesn’t exist when it’s all neatly packed into the PulseAudio source tree or whatever. Using physical space enforces a discipline of simplicity and understandability. (So it is, in a sense, a “hairshirt”, but we wouldn’t agree that “you don’t need [it] in digital spaces”. Also, it’s a very comfy and pleasant hairshirt. :D)
Thanks Joshua Horowitz, I didn't mean to imply it was only Omar's work
@Naveen Michaud-Agrawal
I don't think the objection is to small screens, but to the way we do programming today (ie in dead languages with an edit/compile cycle).
But there's a difference between say Glamorous Toolkit and Dynamicland.
I imagine a small screen chafes similarly to using a keyboard?
(BTW: I’m just offering the Dynamicland catechism for this. Lots of interesting places to poke & prod into that dogma.)
Similar to how “static types” are not so impactful when you can directly “REPL” your way through the implementation in the living system, is there any summary of why causes (when) + effects (claim / wish) work so well for learning and tweaking existing programs?
Cole Lawrence I think it's because you see the results immediately after updating any code
Also the editor is implemented in the system itself and can be augmented with whatever additional information you need
It's a bit like people who live in Emacs (or as a closer feel - the older system Lisps like InterLisp)
Does folk computer also follow a pattern of cause (like query matching?) and effect? Is that specific language affordance what you are referring to as working better because it is a live system?
Yes, and also using similar nomenclature in the language (wishes and claims). It's built on top of TCL instead of Lua
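And on the "you see the results immediately" point, here's a tiny made-up miniature (same caveat: not real Folk or Realtalk code) of why that falls out of the model: the whens are just data that gets re-matched every frame, so editing one changes behaviour on the very next pass, with no compile/restart step.
```python
# Tiny illustration (invented names, not real Realtalk/Folk code): rules are
# plain data re-matched every frame, so editing one takes effect immediately.

facts = {("page 12", "is", "visible")}

# A rule is (pattern, conclusion); None in the pattern acts as a wildcard.
rules = [(("page 12", "is", "visible"), ("projector", "outlines", "page 12"))]

def matches(pattern, fact):
    return len(pattern) == len(fact) and all(
        p is None or p == f for p, f in zip(pattern, fact))

def frame():
    return {conclusion for pattern, conclusion in rules
            if any(matches(pattern, f) for f in facts)}

print(frame())  # {('projector', 'outlines', 'page 12')}

# "Point the keyboard at the rule" and change its effect;
# the very next frame reflects the edit.
rules[0] = (("page 12", "is", "visible"),
            ("projector", "highlights", "page 12"))
print(frame())  # {('projector', 'highlights', 'page 12')}
```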
Somewhat buried in the new DL webpage — you can now directly donate to support their work. This is absolutely the sort of thing I'd back on Patreon, so I'm glad to see they offer those sorts of $n/month options.
No coding here, but this is fantastic.
And yet even now his father hovered in the background both as a rhyme and a presence. The careers of both men had been redirected by a simple question posed in a college class. Both spent their lives measuring the stress in stone. Both used scientific methods to answer questions that had seemed to everyone else beyond the reach of science. Both sought to understand what prevented roofs from collapsing. The father’s work had received a lot of public attention and the son’s had not. But that was just an accident of what people cared about. A lot of people cared about Gothic cathedrals; fewer were concerned with whatever was happening to workers deep underground.
The story about the roof bolts is hugely well told, as is the framing overall. It conveys serious moral outrage that anyone else should take credit for these discoveries, and for the technology transfer therein, given that Chris Mark of this piece is so humble. That's an incredible piece of writing.
Looking at the SPLASH '24 program, I knew the LIVE workshop is always very relevant (this year the majority of talks involve folks from here), but I also found PAINT, which sounds great too 👀
In the workshop on Programming Abstractions and Interactive Notations, Tools, and Environments (PAINT), we want to discuss programming environments that support users in working with and creating notations and abstractions that matter to them. We are interested in the relationship between people-centric notations and general-purpose programming languages and environments. How do we reflect the various experiences, needs, and priorities of the many people involved in programming — whether they call it that or not?
PAINT's format is flipped (more of a critical review), and it's not recorded IIUC. But the past papers look interesting.
A fun little rollercoaster of computing history shared in this short blog post by Graydon Hoare, looking at the lesser-known Xerox Alto descendant, the PERQ.
There’s a bunch more CMU in Apple’s history, much of it via a pipeline of people from CMU to Apple’s ATG.
📝 Apple Advanced Technology Group
The Advanced Technology Group (ATG) was a corporate research laboratory at Apple Computer from 1986 to 1997. ATG was an evolution of Apple's Education Research Group (ERG) and was started by Larry Tesler in October 1986 to study long-term research into future technologies that were beyond the time frame or organizational scope of any individual product group. Over the next decade, it was led by David Nagel, Richard LeFaivre, and Donald Norman. It was known as Apple Research Labs during Norman's tenure as VP of the organization. Steve Jobs closed the group when he returned to Apple in 1997. ATG had research efforts in both hardware and software, with groups focused on such areas as Human-Computer Interaction, Speech Recognition (by Kai-Fu Lee), Educational Technology, Networking, Information Access, Distributed Operating Systems, Collaborative Computing, Computer Graphics, and Language/action perspective. Many of these efforts are described in a special issue of the ACM SIGCHI Bulletin which provided a retrospective of the ATG work after the lab was shut down. ATG was also home to five Apple Fellows: Al Alcorn; object-oriented software pioneer Alan Kay; Bill Atkinson; Donald Norman; and laser printer inventor Gary Starkweather. Further, ATG funded university research and, starting in 1992, held an annual design competition for teams of students. Apple's ATG was the birthplace of Color QuickDraw, QuickTime, QuickTime VR, QuickDraw 3D, QuickRing, 3DMF the 3D metafile graphics format, ColorSync, HyperCard, Apple events, SK8, AppleScript, Apple's PlainTalk speech recognition software, the 1986 Möbius ARM-based computer prototype, Apple Data Detectors, the V-Twin software for indexing, storing, and searching text documents, Macintalk Pro Speech Synthesis, the Newton handwriting recognizer, the component software technology leading to OpenDoc, MCF, HotSauce, Squeak, and the children's programming environment Cocoa (a trademark Apple later reused for its otherwise unrelated Cocoa application frameworks).
There's not much online about it, but it sounds like it was a LISPY system
It was Lisp-based and shared many properties with Boxer, &c. Mikel Evins worked on Dylan and SK8, and has written about them in various places on the web.
The Causal Islands Berlin conference (organised by me, Boris Mann, and Jack Rusher) is happening next month (Oct 4 & 5). Would love to see some of you there. It'll be quite small (100-ish capacity) but loaded with great talks and conversations. We'll also be doing a more full-size conference in May next year. The vibes are "future of computing" and "spiritual successor to Strange Loop" with a more socio-political bent.
Also, there is a CfP up if you want to submit a presentation!
(I'm aware this isn't the perfect channel for this, but #in-germany felt too quiet... happy to move it if desired)
Sounds tempting but four weeks advance notice is unfortunately too short for my agenda these days!
Not sure if this link will work, but there's a Discord: discord.com/channels/1073278385750552596/1233076084086542396
greetings! am thoroughly enjoying the latest podcast episode on AgentSheets (futureofcoding.org/episodes/073), and the discussion of “is programming a Language(TM)” and the history of how it came to be that programming was considered language in the sense of “has a formal grammar” reminded me of this paper by Ian Arawjo on the history of programming notation and its cultural referents (e.g., typewriters, and how it moved away from more “visual” forms and converged around “programming as typing on a typewriter”): ianarawjo.com/docs/To_Write_Code_Arawjo_CHI2020.pdf
i found the resulting frame of programming as translation work of “mapping one culture to another” provocative too!
This looks super interesting! Definitely adding it to my list to read for potential future episodes. Thanks for sharing!