Macros and optimizations: it's just a phase. The one where I implement macros and optimizations as a sequence of evaluations with different semantics
💁 is this 🦋 homoiconic?
I posted a longer version of this above, but I don't think I introduced it very well! It's a stab at Stop Drawing Dead Fish in VR, with spreadsheets and something called Geometric Algebra (GA), also known as Clifford Algebra.
Those who have scrutinized Stop Drawing Dead Fish (SDDF) closely might have noticed two references to GA. Bret also used to have a "Geometric Algebra" sticker on his laptop! But while GA is a very Bret thing, he actually did not use it to make SDDF, so my thing is trying to make good on that.
GA is a mathematical system where you get a bunch of geometric objects and transformations, and they all get related to one another by math that is quite a lot simpler than usual. For example, in the conventional/non-GA approach, if you wanted a line in 3D space you'd take two "vectors" v1 and v2 and say the line is the set of vectors v1 + t*v2 for all scalars t. This can be kinda useful, but it gets complicated if you ask for a simple thing like a rotation around that line. In the GA way of doing things, instead of being "a set of vectors", a line is its own sort of object: lines can be added together, multiplied by planes and rotations, and so on. Lots of useful operations turn out to be examples of this; I've attached a pdf of examples.
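To make that concrete, here's roughly what "lines and rotations as first-class objects" looks like in code. This is a minimal sketch in Ganja.js (the library I mention further down); the point/rotor helpers are standard Ganja idioms, written from memory, so double-check against the coffeeshop examples:

```js
// 2D Projective Geometric Algebra has signature (2,0,1).
Algebra(2, 0, 1, () => {
  // Standard Ganja idiom: a Euclidean point is the dual (!) of a vector.
  var point = (x, y) => !(1e0 + x*1e1 + y*1e2);

  var A = point(0, 0), B = point(1, 1);

  // The join (&) of two points is the line through them - the line is
  // an object in its own right, not "a set of vectors".
  var line = A & B;

  // A rotor that rotates by `a` radians around the point P.
  var rotor = (a, P) => Math.cos(a/2) + Math.sin(a/2) * P;

  // Transformations apply uniformly via the sandwich product (>>>).
  var C = rotor(Math.PI/2, A) >>> B;  // B rotated 90 degrees about A
});
```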
"doing math/programming by working directly with (tangible/visualizable) geometric objects instead of with linear equations" should strike you as a centrally Bret thing. But why didn't he use it? I'll say in a comment under this message.
Bret got unlucky, I got lucky.
Geometric Algebra has gone through several forms. The person Bret read was David Hestenes, a slightly eccentric but very smart guy (ex-student of John Wheeler alongside Hugh Everett, Kip Thorne, and Richard Feynman). Hestenes is motivated by physics, and to him three particular algebras were the most interesting: Cl(3), which does rotations and reflections around the origin (and, TLDR, is isomorphic to the Pauli matrices of electron spin); Cl(1,3), which does special relativity and the Dirac equation; and Cl(4,1), which does conformal transformations of 3D space (and to a lesser extent Cl(4)).
For a physicist, Hestenes was a pretty Tools For Thought guy. The thing he wanted to emphasize was that Dirac had a strong visual intuition, and he wanted to bring that back into physics. The Pauli algebra is usually thought of by physicists as 2x2 complex matrices, but this really hides what's going on - the basis matrices look completely different from one another, even though they are, geometrically, orthogonal reflection planes through the origin - very visualizable.
Alright, that's physics - what does he have to offer to, say, computer animation? Well, he presented Cl(4) and Cl(4,1) as being useful. But Cl(4,1), called "Conformal Geometric Algebra", is 5-dimensional and has a proliferation of pretty weird transformations and objects - even for 3D it is overkill, and Bret was interested in 2D. The other, Cl(4), works a bit better, but causes things to curve in strange ways!
On top of that, Hestenes talks a big game about visualizability and understandability - but he is not actually all that good at explaining things, and he has aesthetic preferences that make him uninterested in a lot of what computer animation needs.
Hestenes presented Conformal GA at the same time that a guy called Jon Selig presented something called Projective Geometric Algebra - that's what I use. This was in 2000, but Bret never saw it - why's that?
Well, Jon Selig's work was ignored within GA, because it went against Hestenes' aesthetic preferences - specifically, it has a basis element in it which squares to 0. Selig left GA after that reception.
Nobody paid attention to Selig's work until 2011, when it was picked up by Charles Gunn. Charles is a guy that I can see Tools For Thought people liking - he is ex-Pixar, and for him mathematics and psychology blend together.
But Charles was also not great at communicating. When Bret was working on SDDF, Gunn's and Selig's amazing discoveries were still locked in Gunn's very difficult PhD thesis.
I got lucky - by the time I got interested in 2019, a man called Steven De Keninck had created Ganja.js, an excellent shadertoy-esque system for playing with projective GA: enki.ws/ganja.js/examples/coffeeshop.html
In my opinion, projective geometric algebra is a major new step for mathematics and computer animation! And if only Bret had known about it in 2014 you and everyone else would all be using it today, 100%
I'd like to use it in my various computer graphics experiments. I'm not a mathematician. I'm cool with vector math. Matrices are spooky. Where's the smallest, easiest-to-absorb introduction to coding this stuff?
I will toot my own horn a bit first:
youtu.be/en2QcehKJd8 is part 1...
Disadvantage: it's two hours long; I'm afraid nobody has gotten this material into less than that.
Advantage: you get an intro to quaternions for free in the first 10 minutes, and you get to see a different Tool For Thought I made for this.
Alternative! youtube.com/playlist?list=PLsSPBzvBkYjxrsTOr0KLDilkZaw7UE2Vc
At the end of this series you have a demo in Ganja, another GA tool-for-thought, of writing a 2D physics engine in 22 minutes. And then... the guy goes to the top of the file and changes "2" to "3", and it's a 3D physics engine... and then, he changes "3" to "4"...
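The trick that makes that possible: in Ganja the dimension lives entirely in the algebra's signature, and everything else is written against the products, which are defined uniformly for any signature. A hedged sketch of the shape of it (Ganja.js syntax from memory):

```js
// The whole choice of 2D vs 3D lives in this one number.
const D = 2;  // flip to 3 and the same code below runs in 3D PGA

Algebra(D, 0, 1, () => {
  var a = 1e1, b = 1e2;  // two basis reflections, valid in any D >= 2
  var rot = a * b;       // their geometric product is a rotation,
                         // regardless of the dimension
  console.log(rot.toString());
});
```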
Wow, fantastic work. I've been GA-curious since hearing Jack Rusher's talk on it, but have yet to tinker with it. I thought it would be cool if it could enable custom coordinate systems that appear on a single canvas while still enabling objects in each to interact with one another. In any case, really cool to see the ideas being used for constraint systems like this. Especially in VR.
FTR: Here is the demo video I presented earlier today. I've added links, in the form of a Kinopio page, to the other technologies that I didn't demo.
Exploring Techniques and Notations for Augmenting DX
This looks really cool, Paul. Are these synchronous dataflow? Meaning nodes execute once when they have received data at all their input ports?
Thanks!
This is the opposite of "synchronous dataflow". Nodes execute once for every input. A node can implement "synchronous dataflow" if it wants to, but this is not a fundamental requirement. When I used to design hardware, I found that I could do so much more, reliably (e.g. 0 defects in the field, guarantees, deliver-once instead of continuous delivery, utterly asynchronous, etc.), than when I switched to designing software. I believe that overkill-synchronous-thinking is a major cause of bugs, and I want to find ways to break out of that mindset.
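To spell out what I mean by "nodes execute once for every input", here is a toy version (names invented, not my actual implementation): each node owns its own input queue, fires once per message, and there is no global clock or "wait for all ports" rule.

```js
// Toy "fire once per input" node: owns a queue, runs its handler once
// per message, and pushes outputs to downstream nodes.
class Node {
  constructor(name, handler) {
    this.name = name;
    this.handler = handler;  // (message, send) => void
    this.queue = [];
    this.outputs = [];       // downstream nodes, wired up externally
  }
  deliver(msg) { this.queue.push(msg); }
  step() {                   // process exactly one queued input, if any
    if (this.queue.length === 0) return false;
    const msg = this.queue.shift();
    this.handler(msg, (out) => this.outputs.forEach(n => n.deliver(out)));
    return true;
  }
}

// Dispatcher: keep stepping nodes until the whole network is quiescent.
function run(nodes) {
  let busy = true;
  while (busy) busy = nodes.some(n => n.step());
}

// Usage: a doubler feeding a printer; no node waits on "all" its inputs.
const printer = new Node("print", (m) => console.log(m));
const doubler = new Node("double", (m, send) => send(m * 2));
doubler.outputs.push(printer);
[1, 2, 3].forEach(v => doubler.deliver(v));
run([doubler, printer]);  // prints 2, 4, 6
```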
Thanks for posting this video! Question: what's the level of granularity of your diagram notation? Put differently, how is the operation of each node defined? By another diagram, by traditional code, or yet something else?
There are 2 kinds of node. Containers are recursively defined - they can contain Containers and Leaves. Leaf nodes contain code and are not recursive.
In analogy, this is much like Lisp lists. Lists can contain Lists or Atoms. Atoms are the bottom.
Containers run/loop in multiple steps. A Container is "busy" if any of its children is "busy" (recursively). Leaves run in one gulp.
A Container can inhale a single message from its input queue only when it is not busy.
Routing of messages between children is performed by the Container, not the children. Children cannot know where their inputs come from nor where their outputs are sent to. A Container cannot know what kind of component each child is and may compose a mix of child components of various kinds.
In analogy, Containers are like "main loops" in windowing systems, except that it's turtles all the way down - a "main loop" might contain other "main loops" and so on.
In analogy, a Container is like a Unix command-line command. Containers have several stdins and several stdouts. You can't tell from the outside (nor do you care), if the command is a bash script or a lump of C code. But, it is done much more efficiently than using Unix processes (think: closures and OO queue objects).
In this way, you can structure a system in layers that elide details. The details are all still there, but the reader is not forced to understand every niggly detail unless the reader wants to dig deeply.
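Here's a toy rendering of the Container/Leaf scheme above, with names I just made up (the real system has several stdins/stdouts and multi-step Containers; this collapses everything to one in and one out to show the shape):

```js
// Leaves run in one gulp; Containers own the routing between children
// and only inhale a new message when no child is busy.
class Leaf {
  constructor(handler) { this.handler = handler; }
  busy() { return false; }                        // Leaves finish in one gulp
  inhale(msg, emit) { this.handler(msg, emit); }  // consume one message
}

class Container {
  constructor(children, routes) {
    this.children = children;  // name -> Leaf | Container (kind is invisible)
    this.routes = routes;      // sender name -> receiver name; "out" = my output
    this.queue = [];           // pending inputs
  }
  busy() {                     // busy if any child is busy, recursively
    return Object.values(this.children).some((c) => c.busy());
  }
  inhale(msg, emit) {
    this.queue.push(msg);
    while (!this.busy() && this.queue.length > 0) {
      this.route("in", this.queue.shift(), emit);
    }
  }
  route(from, m, emit) {       // routing is the Container's job, not the children's
    const to = this.routes[from];
    if (to === "out") emit(m);
    else this.children[to].inhale(m, (out) => this.route(to, out, emit));
  }
}

// Usage: a Container wrapping two Leaves, pipeline-style.
const inc = new Leaf((m, emit) => emit(m + 1));
const log = new Leaf((m) => console.log(m));
const main = new Container({ inc, log }, { in: "inc", inc: "log" });
main.inhale(41, () => {});  // logs 42
```

In this toy everything happens to run synchronously, so busy() never fires; it starts to matter once children run in multiple steps, exactly as described above.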
I was wondering: why does asynchronous dataflow lead to fewer bugs than synchronous dataflow? Can you elaborate on this, Paul Tarvydas? 😊
My guess, my gut feel: simplicity. Asynchrony allows me to use divide-and-conquer and to solve problems and implement components in small pieces, whereas building synchronous software is like crafting an intricate Swiss watch with 100s of tiny gears. If a tooth breaks in any of the synchronous gears, the whole thing doesn't work. If an async component breaks, I can isolate it and focus on it and fix it. It ain't inherently more reliable, but I can fix things easier and better. The simplicity of asynchronous design is like using LEGO blocks - I can imagine and implement much more interesting (aka "complicated") apps using asynchronous software blocks. [Aside: today's "code libraries" are not LEGO blocks; they must be used in a synchronous manner - it's synchrony all the way down.] [Aside: knowing hardware, I see function-based programming as an inefficient use of CPU power, requiring extra software to support the function-based paradigm (note the use of the term "function-based", which is a superset of what we call "functional programming" today).]
Great points Paul Tarvydas. Do you have any experience with LabVIEW? It’s a very structured visual programming environment. Just in the last four or five years, they introduced asynchronous data flow wires. They started working on an asynchronous diagram, but that was part of a new, next generation platform that was mothballed.
I've looked at LabVIEW but haven't used it. Feel free to educate me. @Jim Kring
Paul Tarvydas While I agree with all your observations, I am not convinced by the explanation. My own speculative hypothesis for the relevant difference between hardware and software is Turing-completeness leading to chaotic dynamics and thus an infinity of failure modes (see here for a more detailed argument). But I am not that convinced of my own hypothesis either.
In my demo, I made the statement "... t2t doesn't need the full power of OhmJS ...", but, I didn't clarify.
For t2t - text-to-text transpilation - you primarily need to pattern-match incoming text, then emit text based on the input.
OhmJS parses incoming text, then gives you the full power of JavaScript to do anything you want with the parse tree.
For t2t, you don't need to resort to class hierarchies, functions, closures, etc. You primarily need to pattern-match, then create and modify text. In addition to OhmJS' ability to pattern-match, JavaScript's "template strings" are about all you need - the ability to create text and to interpolate text from the tree walk of the parsed input.
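To show the shape of that, here is a toy t2t pass with OhmJS plus template strings - the grammar and rule names are invented for illustration, not RWR: it rewrites "let x = 5;" statements into "const x = 5;".

```js
import * as ohm from "ohm-js";

// Pattern-match side: a tiny grammar for "let <name> = <number>;".
const g = ohm.grammar(String.raw`
  Lets {
    Program = Stmt+
    Stmt    = "let" ident "=" number ";"
    ident   = letter alnum*
    number  = digit+
  }
`);

// Emit side: template strings interpolating pieces of the parse tree.
const semantics = g.createSemantics().addOperation("emit", {
  Program(stmts) { return stmts.children.map((s) => s.emit()).join("\n"); },
  Stmt(_let, id, _eq, n, _semi) {
    return `const ${id.sourceString} = ${n.sourceString};`;
  },
});

const m = g.match("let x = 5; let y = 7;");
if (m.succeeded()) console.log(semantics(m).emit());
// -> const x = 5;
//    const y = 7;
```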
This seems to be unnecessarily restrictive, but, turns out to be quite powerful and mind-freeing. Fewer options -> less clutter -> increased ability to think about interesting issues. After all, "simplicity" == "lack of nuance", and, my goal is to simplify DX.
[Infrequently, one needs to do a tiny bit more (like gensym() a new symbol and leave it on a scoped stack for use during the tree-walk), so I provide a way to break out and call a Javascript function, but, this kind of power is not needed in most cases. I guess that, in the future, I will restrict this some more, but, I'm still experimenting].
Given this simplification, I easily invented a nano-DSL to handle the string building bit. I call it RWR (for ReWRite). RWR is, itself, just t2t - it transpiles the RWR spec into Javascript that is compatible with OhmJS.
What are your use cases for t2t? Code transformation, data transformation, or both?
I’ve used t2t for both, but, emphasize code transformation because I feel that the idea of code transformation is under-utilized. It drastically changes the realm of compiler writing. One can create new languages, but does not need to write whole compilers (simply lean on existing compilers). When one can create new languages in minutes/hours instead of months, it changes one’s approach to problem solving, e.g. one can create multiple nano-DSLs on a per-project basis (“awk” and REGEX on steroids) instead of building general purpose languages. It makes it reasonable to create S/W Architecture languages that describe Design Intent instead of Implementation and Production Engineering. To me, Python, Common Lisp, Javascript, Odin, (Haskell, Rust, …), etc., are just assemblers for HHLLs (higher-than-high-level languages). This is like Lisp macros and Functional Programming done by pipelining instead of cramming all of the concepts into a single hair-ball of complexity.
What I find most attractive about t2t for code is that I can look at the intermediate code. The idea of taking many small steps towards the goal rather than a big obscure one sounds tempting (though I haven't ever done multi-step t2t).
There are side-benefits, too. Like, if you own the transpiler, you can easily insert tracing/debugging/instrumentation tidbits. Like, "macros" for textual languages instead of only for list-based languages (like Lisp, Scheme). A down-side is that, to really do t2t in small steps, you need to emphasize machine-readability (easy to do), but machine-readable code != human-readable code (machine-readable code is more verbose and repetitive, but understandable to humans, albeit boring and TL;DR). FYI, at one point I got up to 15 steps in building a Ceptre-to-Prolog transpiler before I veered off in some other direction. I would be happy to kibitz if anyone wants to try out the stuff I've got - I imagine that it ain't packaged in pristine shrink-wrapped form yet...
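For flavor, here is multi-step t2t in miniature - each pass is just text in, text out, so every intermediate is a plain string you can print and read. The passes are trivial string rewrites invented for illustration (real steps would be Ohm-based), and step 2 is exactly the kind of tracing tidbit mentioned above:

```js
const passes = [
  // Step 1: desugar a toy "fn name()" form into a JavaScript function header.
  (src) => src.replace(/fn (\w+)\(\)/g, "function $1()"),
  // Step 2: instrument every function entry with a trace line.
  (src) => src.replace(/function (\w+)\(\) \{/g,
                       'function $1() { console.log("enter $1");'),
];

let code = "fn greet() { console.log('hi'); }";
for (const [i, pass] of passes.entries()) {
  code = pass(code);
  console.log(`--- after step ${i + 1} ---\n${code}`);  // inspectable intermediate
}
```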
I’m close to publicly releasing an AI powered coding assistant for LabVIEW, a visual programming language. Here is a teaser: linkedin.com/posts/jimkring_labview-sparkles-is-an-ai-copilot-for-activity-7212597622111449088-VQKe
I did a four hour livestream performance art piece where I explore the nature of recursion and infinity and time. And it starts with me doing some crappy live coding