I am giving a talk on boundary objects this Fri for the Dynamic Abstractions reading group, at 9am PT. I've linked a couple of readings from CSCW (Ribes on domain, and Star on good ol' fashioned AI) and my BELIV paper. dynamicabstractions.github.io/reading-group
In the meantime - we've got four great talks on the ladder of abstraction, constructivism & Building SimCity, and the design of intelligent interfaces & systems. Ian Arawjo's latest, about how notations emerge, is solid gold if you're thinking about non-code coding interfaces. (40min video + 20min Q&A, through the link; scroll down.)
Before I watched that, I expected it to have something to say about psychology.
It was a fine talk, but to me the major questions are more about how notations expand our cognitive surface area, if you like: how many ideas, of what complexity, can we entertain at one time?
What I’m thinking is something like: the right notation, employed by a trained perceiver, is on a similar level to biological memory in making something available to cognition, and even has advantages over memory. I can draw a diagram showing the relationships between even a small number of things, and keep more of those relationships as immediate objects of mental concern than memory alone would allow. How does the external representation help?
I think in this case, we are offloading what would otherwise be deliberate and burdensome processes to more parallel and subconscious processes.
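There's a quick way to see why even small diagrams beat the head here: the relationships among items grow quadratically, while working memory is usually put at around four chunks (that figure is the standard cognitive-science estimate, not something from the talk). A quick sketch:

```python
# Pairwise relationships among n items grow as n * (n - 1) / 2, while working
# memory holds roughly 4 chunks (the standard cognitive-science estimate).
# Even 5 or 6 items carry more relationships than fit in the head at once;
# a diagram keeps every one of them in view.
for n in range(2, 8):
    print(f"{n} items -> {n * (n - 1) // 2} pairwise relationships")
```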
What I'd kind of hoped, before I watched it, was that it would be someone cataloguing and organising the different ways we do that in different fields, to try to come up with a unifying theory. That would let us develop new and better representations.
This is just what Playfair did — he harnessed our ability to compare shapes to facilitate comparing numbers.
Before watching this, I assumed that I would agree with @Guyren Howe. After watching, I'm not so sure. I see this as a brainstorming exercise on the various issues of 2D notations, and I think the discussion misses the elephant in the room: the media on which the notations are created. Most of the notations discussed were based on the assumption of quill and papyrus, i.e. 2D media. These new-fangled things popularly called “computers” are something more than paper: they can represent notations in 4D (x/y/z/t). Blender is a case in point; I witnessed a colleague solve an otherwise hoary problem (optical distortion in a flight simulator display) using it in an evening or two. Audiophiles talk about a “sound stage” (which I've witnessed first-hand) that is not adequately described by our 2D notation of sound waves. Notation influences thought and restricts what we can think about. Feynman diagrams are a good example: Feynman needed to break away from Gutenberg typesetting notation in order to think about and describe certain principles in physics.
I wasn't really criticising that piece as such; what it said was fine. It just seemed to be poking a little at the edges rather than the heart of things.
@Guyren Howe Yes, absolutely - Judy Fan's research is great for the cogsci angle on notation. However, they've done more with sketching & logograms than with (say) box-and-wire, or mathematical notations, or other advanced forms of visualization literacy.
As far as unifying theories go, I think Latour's theory of inscription might be your go-to. I've been using Circulating Reference in Pandora's Hope, but it's a recurring concept over in STS.
give Klein 2001 a look, I like her definition of paper tools a lot
google.com/books/edition/Tools_and_Modes_of_Representation_in_the/AhorBgAAQBAJ?hl=en&gbpv=0
The mode of representation, made possible by applying and manipulating specific types of representational tools, such as diagrammatic rather than mathematical representations, or Berzelian formulas rather than verbal language, contributes to meaning and forges fine-grained differentiations between scientists' concepts.
I actually think a trap some folks fall into here is thinking that there is, or should be, one answer. That there should be a kind of visual programming that replaces textual grammars, say.
I'm rather more inclined to think the work should be identifying the useful abstractions and representations of them, and allowing us to slice and dice programs as the need of the moment arises.
I’d particularly like to augment my IDE with other ways of viewing my code. Text is pretty good for functions and procedures, but forcing me to only view the larger structure of systems through the filesystem tree is to me just obviously wrong.
An illustrative example: all of the data that the analyser and tracer can tell me about my code and its executions might be presented to me as a set of relations, and I can query it using Datalog.
Combined with suitable functions, this would give me a pretty clear and simple way to ask “what call paths are available that call this function, and that along the way execute a database query?”
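Here's a toy sketch of what I mean, in Python rather than real Datalog. CALLS and RUNS_QUERY are made-up relation names, standing in for whatever the analyser would actually emit:

```python
# Toy sketch, not a real tool: the analyser's knowledge exposed as relations.
# CALLS and RUNS_QUERY are hypothetical names; a real analyser would emit many
# more relations (types, defs, reads/writes, trace events, ...).

CALLS = {  # (caller, callee) edges from static analysis
    ("main", "handle_request"),
    ("handle_request", "load_user"),
    ("handle_request", "format_user"),
    ("load_user", "db_fetch"),
    ("load_user", "format_user"),
}
RUNS_QUERY = {"load_user"}  # functions flagged as executing a database query

def call_paths_to(target):
    """All call paths ending at `target`: Datalog-style transitive closure
    over CALLS (assumes an acyclic call graph, for brevity)."""
    def extend(path):
        callers = [c for (c, callee) in CALLS if callee == path[0]]
        if not callers:
            yield path
        for c in callers:
            yield from extend([c] + path)
    yield from extend([target])

# "What call paths reach format_user and execute a DB query along the way?"
for path in call_paths_to("format_user"):
    if any(fn in RUNS_QUERY for fn in path):
        print(" -> ".join(path))
```

In actual Datalog that's about two rules and a query; the point is just that everything the tools know becomes ordinary data I can join and filter.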
Why, in my statically typed language, is there no way for me to see graphically all the call paths?
(Hell, a little off-topic, but I’ve no idea why all languages don’t have time-traveling debuggers at this point)
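(A toy sketch of the core record-and-replay idea; real time-travel debuggers are vastly cleverer about what they record, but the shape is roughly this:)

```python
# Toy record-and-replay: snapshot the state before every step, so any past
# moment can be revisited. Real time-travel debuggers record far more
# efficiently, but this is the basic shape of the idea.
import copy

history = []   # one snapshot per executed step
state = {}

def step(event):
    history.append(copy.deepcopy(state))  # record before mutating
    name, value = event
    state[name] = value

for ev in [("x", 1), ("y", 2), ("x", 3)]:
    step(ev)

print(state)        # now: {'x': 3, 'y': 2}
print(history[2])   # "travel" to just before step 3: {'x': 1, 'y': 2}
print(history[0])   # the very beginning: {}
```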
I believe this relational approach would militate strongly in favour of statically-typed languages.
Our development environments are so timid and retro. I’ve no idea why.
all of the data that the analyser and tracer can tell me about my code and its executions might be presented to me as a set of relations, and I can query it using Datalog
I built one of these for Dendry, so that narrative designers could use it without learning static analysis per se
Time spent building an IDE for yourself is time well spent, but time spent making it usable for other people is a heavy opportunity cost, so they tend to come out pretty dang vanilla. No shade to ChainForge - I'm very impressed with the level of community adoption there
(I feel a bit of a rant starting here: code formatters make no sense to me. The AST should be the point of an IDE. I should be able to have my own presentation of the code with short lines and two-space tabs, and you can have 4 spaces and 120-character novels as your lines)
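(A toy of what I mean: one stored tree, two presentations. The node types are invented for the sketch; a real IDE would project the language's actual AST:)

```python
# One stored AST, two presentations: each reader renders the same tree with
# their own indentation and line-width preferences. Node types are invented
# for this sketch.
from dataclasses import dataclass

@dataclass
class Call:
    name: str
    args: list

@dataclass
class Block:
    statements: list

def render(block, indent=2, max_width=80):
    """Project the tree into text using one reader's formatting preferences."""
    pad = " " * indent
    lines = []
    for call in block.statements:
        one_liner = f"{call.name}({', '.join(call.args)})"
        if indent + len(one_liner) <= max_width:
            lines.append(pad + one_liner)
        else:  # too wide for this reader: wrap, one argument per line
            lines.append(f"{pad}{call.name}(")
            lines += [f"{pad * 2}{arg}," for arg in call.args]
            lines.append(f"{pad})")
    return "\n".join(lines)

tree = Block([Call("save_user", ["session", "user_record", "audit_log"])])
print(render(tree, indent=2, max_width=40))   # my short lines, 2-space tabs
print(render(tree, indent=4, max_width=120))  # your 120-character novels
```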
(I’ve not used an IDE whose debugger even tries to deal with asynchrony. At a minimum, I should be able to manage multiple simultaneously active breakpoints, possibly in separate windows.)
(I’ve no idea why time-traveling databases aren’t more popular/standard.)
(Why can’t I see swimlanes or other types of graphical presentations of the execution history of my program, or the available call graph?)