Here's a lil circuit simulation game that looks pretty neat — ABI-DOS
EDIT: Ended up collecting a handful of these.
Pretty great thing I wish I'd had throughout life:
Viewing document history is now as easy as scrubbing through a video.
Usable on lex.page
Some lovely explorations of physical paper + augmented reality in this project, Feels Like Paper, from Lukas Moro.
What machines want, by @Dave Ackley. The subtitle "Bottom up engineering for robust-first computing" is more informative. It's about building computing systems that remain robust under scale.
There's an important point there that many people seem to ignore (or deny): deterministic computation (i.e. formal systems) doesn't scale. You can stay small and deterministic, or you can go for scale and then you better watch out for robustness if you care about what your computation produces.
I see this tension coming up frequently in discussions of computational reproducibility. On the one hand, there are people who say that computation is a deterministic tool that we need to keep deterministic by more careful bookkeeping about software components. Others argue that computation is a form of experiment that is never fully deterministic, so tech churn just adds a bit to the already existing noise. Both sides are right, but don't realize that they are talking about different scales of computation.
I believe that the biggest scale at which we should be deterministic is a single small object.
And even that should not be imperative determinism, but declarative.
Just a data point for the discussion!
What's a "small object" for you? A Raspberry Pi? A laptop? Something else? And your "should" comes from which perspective? Technical limitations? Human limitations? Something else?
😁 OK, I mean small data object like a contact, paragraph, sensor value, invoice line item, etc. That answer being about data not hardware may impact the list of challenges you would have followed up with, but to answer anyway: my "should" is just me being me, in the same way every "should" is based on personal perspective, and even feeling. And I think none of your "limitations" are involved!
I haven't watched the video (I'm not a video watching type, prefer skimmable text!) but isn't this the prof who does cellular automata related work? That nature of machine paradigm fits my philosophy pretty closely.
I agree that probabilistic emergent computation has the potential to be more energy efficient and ultimately scale better (see the brain). However, it's also much more difficult to debug and understand, so in an engineering sense it kinda scales worse, because it doesn't really compose (or decompose) in an easy-to-grok way. It resists divide-and-conquer problem solving. Brains are crap calculators.
More generally, I see too many people giving up on making things deterministic too early. People forget that databases scale incredibly well vertically, and that thanks to transactions you can hide all the vagaries of distributed computation behind a database connection. And thanks to Spanner and atomic clocks, it's even possible to scale horizontally, with a ton of money, if you really need to serve everybody on the planet consistently with low latency. I think the reason we can build out crazy solutions like Spanner is that the deterministic engineering discipline can solve a narrow problem and then fit it into everything that needs it. Emergent computation is slower to evolve, perhaps? Maybe an analogue is the written word: you can spread knowledge globally via local-to-local oral transmission, but it's much faster using libraries and the printing press, because they scale across time and space losslessly.
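The transaction point above can be sketched concretely. This is a minimal illustration using SQLite (the table and amounts are made up for the example, not anything from the discussion): a transaction lets a multi-step update behave as one deterministic, all-or-nothing step, so no partial-failure state ever leaks out to other readers.

```python
import sqlite3

# Toy ledger: two accounts, transfer money between them atomically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    # `with conn` opens a transaction: commit on success, rollback on error.
    with conn:
        conn.execute(
            "UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'")
        conn.execute(
            "UPDATE accounts SET balance = balance + 150 WHERE name = 'bob'")
        # Enforce an invariant; violating it aborts the whole transfer.
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
        if balance < 0:
            raise ValueError("overdraft")
except ValueError:
    pass  # the failed transfer was rolled back as a unit

# Neither half of the failed transfer is visible: state is unchanged.
print(conn.execute(
    "SELECT name, balance FROM accounts ORDER BY name").fetchall())
# → [('alice', 100), ('bob', 0)]
```

The caller's mental model stays sequential and deterministic even though the database underneath may be replicated and concurrent; that's the "hiding" the post is pointing at.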
I agree that emergent computation is a different beast than traditional Turing-style computation. Much like quantum computing. None of these newcomers will fully replace what we have today. I expect us to end up with multiple coexisting computation paradigms.
What doesn't scale is complexity. Growing in size, as in a large database, or high-performance computing, is manageable.
wow, so Dynamicland-y, but 15 years ago, or 10 years before Dynamicland
The goal of this project was to develop a real-time integrated augmented reality system to physically create topography models which are then scanned into a computer in real time, and used as background for a variety of graphics effects and simulations. The final product is supposed to be self-contained to the point where it can be used as a hands-on exhibit in science museums with little supervision.
📝 Oliver Kreylos' Research and Development Homepage - Augmented Reality Sandbox
Oliver Kreylos' research and development homepage. Augmented Reality Sandbox - A mixed real / virtual system where users can create a topographic surface by shaping sand, which is then color-mapped and augmented with topographic contour lines and simulated water using a Kinect 3D camera and a data projector.
WUW / sixthsense - a wearable gestural interface
'WUW' bridges this gap by augmenting the physical world around us with digital information and proposing natural hand gestures as the mechanism to interact with that information. 'WUW' brings the intangible information out into the tangible world. By using a camera and a tiny projector mounted on a hat or coupled in a pendant like device, 'WUW' sees what you see and visually augments any surfaces or objects you are interacting with. 'WUW' projects information to any surface, walls, and the objects around us, and to interact with the information through natural hand gestures, arm movements, or with the object itself.
Rereading Bret Victor on climate change, it finally sunk in that he really really cares about software tools for scientific computing.
I’m happy to endorse Julia because, well, it’s just about the only example of well-grounded academic research in technical computing. It’s the craziest thing. I’ve been following the programming language community for a decade, I’ve spoken at SPLASH and POPL and Strange Loop, and it’s only slightly an unfair generalization to say that almost every programming language researcher is working on
(a) languages and methods for software developers,
(b) languages for novices or end-users,
(c) implementation of compilers or runtimes, or
(d) theoretical considerations, often of type systems.
The very concept of a “programming language” originated with languages for scientists — now such languages aren’t even part of the discussion! Yet they remain the tools by which humanity understands the world and builds a better one.
If we can provide our climate scientists and energy engineers with a civilized computing environment, I believe it will make a very significant difference. But not one that is easily visible or measured!
This makes a lot of sense in the context of the last few years of his research! Do you think this implies that the applications of Dynamicland™ are more domain-specific than it's usually described as?
He objects to the usual way programmers use "general-purpose" and "domain-specific":
They’re typically dismissed as “domain-specific”. This pejorative reflects a peculiar geocentrism of the programming language community, whose “general-purpose languages” such as Java and Python are in fact very much domain-specific — specific to the domain of software development.