
Nilesh Trivedi 2024-09-16 17:32:59

Stephen Fry on technology and AI:

Machines are capable of bias, hallucination, drift and overfitting on their own, but a greater and more urgent problem in my view is their use, abuse and misuse by the three Cs. They are Countries, with their specific ambitions, paranoias, enmities and pride; Corporations, with their unaccountable rapacity; and of course Criminals. All of them united by one deadly sin: greed. Greed for power, for status, for money, for control.

Mattia Fregola 2024-09-17 00:09:22
Tom Larkworthy 2024-09-17 07:34:31

I got test-driven development working with AI (o1-preview) and it is totally nuts. It can do complex stuff; I am making serious progress on a decompiler with it

observablehq.com/@tomlarkworthy/ai-written-decompiler. The key was feeding the test suite results back into context (plus o1-preview's ability to improve code without forgetting half the stuff in the middle)

Tom Larkworthy 2024-09-17 07:35:35

I am mostly prompting "get the tests to pass" and letting it either fix the tests or the code.

Konrad Hinsen 2024-09-17 12:34:17

I just read this very AI-skeptical article, which basically says that today's generative AI has no credible business model and is unlikely to improve enough to get one. While I am aware of counter-arguments to the technical aspects, I wonder if there are more positive takes on the financial/business aspects from anyone other than AI vendors.

Daniel Sosebee 2024-09-17 14:11:52

Carl Shulman’s perspective is interesting - I think he’s someone who really takes the “look at the trends and then keep extrapolating” thing seriously, which leads him to very extreme conclusions about the expected impact of AI. Could be a bit of a different discussion though, as he’s not talking about today’s models. Anyways here’s a very long interview with him. 80000hours.org/podcast/episodes/carl-shulman-economy-agi

Konrad Hinsen 2024-09-18 05:09:46

That's extrapolation with a big dose of speculation. We have no idea for now if AGI will ever happen. So it's not a basis for today's business plans. Even if AGI happens, and even if it happens soon, it's probably neither OpenAI nor Anthropic that will benefit economically from such a development.

Kartik Agaram 2024-09-21 02:48:27
Nilesh Trivedi 2024-09-21 05:18:54

Alan Kay pulls no punches:

When you put a person into a car, their muscles wither. You put a person into an information car, and their thinking ability withers. I wouldn't put a person within 15 yards of a computer unless I was absolutely sure that it was a kind of a bike for them.

...

A lot of technology is just what I call inverse vandalism, which is people making machinery just because they can.

Ivan Reese 2024-09-21 05:49:47

2016!

Screenshot 2024-09-20 at 11.49.30 PM.png

Arvind Thyagarajan 2024-09-21 08:05:25

Wonderful read of a wonderful response. Makes me reflect on the mechanics and context of the (possibly exciting, possibly mundane) work that I do. Always love the bicycle analogy as it deftly sidesteps thickly disguised "are you a Luddite?" insinuations :-)

Kartik Agaram 2024-09-21 17:35:26

This great article seems related.

Perhaps the end-game here is that the things we share through relationships cannot involve money. I realize I've been aimed in roughly the direction of this thought for 10 years or so. Separation of Mammon and Muse. A parallel, complementary trade network.

It does require flattening the privilege curve a lot more, though.

📝 The Collapse of Self-Worth in the Digital Age | The Walrus

Why are we letting algorithms rewrite the rules of art, work, and life?

Jason Morris 2024-09-22 01:24:01

I want to understand the car/bike analogy better. Is the distinction between things that amplify the outcome of human effort as opposed to things that eliminate the need for it? Because unless you are drawing a line at an amplification level of "infinity", that seems like a distinction of degree, not type. What degree counts? Or does it have to do with some sort of judgement that a person who is a good driver is a worse person than a person who is physically fit by virtue of riding a bike? So our categories of bike/car are tied to what we think it means to be a good person or live a good life? We have to be political about our computing tools, and generate tools the use of which results in "better" people? And ignore people when they say "please make this easier for me" if doing so would atrophy something that we think is more important than they do? I'm very sympathetic to this idea, because I am worried about the effect of some kinds of generative AI on people's legal reasoning capabilities, and also excited about the way some forms of symbolic AI force you to think more clearly. But I don't know how to draw this distinction in a way that isn't arbitrary.

Stefan Lesser 2024-09-22 08:21:36

This seems another good resource to add to the mix: blog.ncase.me/the-creative-cyborg

📝 The Creative Cyborg (my XOXO 2024 mini-talk)

On how we may re-design AI to enhance artists, not replace them

Dave Liepmann 2024-09-22 08:38:43

Because unless you are drawing a line at an amplification level of "infinity", that seems like a distinction of degree, not type.

I don't agree. Transportation experts categorize even the most sedate bike riding (a Dutch bike on a flat path) as active mobility and driving a car or taking the bus as passive because riding a bike involves a kind and quantity of physical movement that is simply not present in driving a car – exercising balance by leaning into turns, getting a baseline amount of leg-pumping, occasionally needing to exert hard force to, say, catch a light. This sounds pretty mild but produces a multitude of cascading health effects. I'll even take the bait and say you should steel-man your fitness-as-virtue argument: we're analogizing physical health to mental capability, not as an arbitrary dimension of person-worth, so it makes complete sense in this context to prefer the healthy option.

Kay also specifically calls out the quality of being able "to go flat out with [the rider's] body". This scaling-up-with-your-effort ability of the tool is especially enlightening for the analogy: programming languages have it, spreadsheets have it, scientific calculators kind of barely have it, whereas dumb calculators, CSV, and most low-code tools do not have it. The former accept almost arbitrarily complex mental input and do work on it that feeds a feedback loop, the latter force you into a narrow band of input and often only amplify it in specific predetermined directions.

We have to be political about our computing tools, and generate tools the use of which results in "better" people?

We should steel-man this line of thought as well. I think you're reading it precisely backwards, as authoritarian control of mental tools. To that we would say no. It's about enabling people to be better by giving them more power over the tools we build, to which we should say YES! Yes, when we build tools we should allow people to do more things than we specifically intend with them. It's good practice in software, and for end users perhaps even a moral imperative. Think of how valuable it is to have escape hatches like passing a function argument to define behavior in someone else's code, or to have macros available to create DSLs, or to occasionally be more verbose so as to eke performance out of a hot loop, or to have a programming portal in a GUI to provide a more nuanced side channel.

📝 Programming Portals

Small, scoped areas within a graphical interface that allow users to read and write simple programmes
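The "function argument as escape hatch" idea Dave mentions can be sketched in a few lines. The `summarize` function and its `fmt` parameter are hypothetical, just for illustration: the tool ships a sensible default, but callers can pass their own behavior instead of being limited to what the author anticipated.

```python
def summarize(records, fmt=None):
    """Render records one per line. `fmt` is the escape hatch: callers
    may supply any record -> str function to override the default."""
    fmt = fmt or (lambda r: f"{r['name']}: {r['count']}")
    return "\n".join(fmt(r) for r in records)

records = [{"name": "bikes", "count": 3}, {"name": "cars", "count": 1}]

# Default behavior, as the tool's author intended:
default = summarize(records)

# The escape hatch: behavior the author never anticipated.
shouty = summarize(records, fmt=lambda r: r["name"].upper())
```

The same shape shows up in sort `key` functions, middleware hooks, and callback parameters generally: a narrow default plus an open-ended override.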

Joshua Horowitz 2024-09-22 10:08:21

@Dave Liepmann:

I think you’re reading it precisely backwards, as authoritarian control of mental tools. To that we would say no. It’s about enabling people to be better by giving them more power over the tools we build, to which we should say YES!

In general I think the argument you’re making is great and I agree with it.

But: Just for the record, not everyone says “no / yes” in this particular order. Sticking with our beloved bicycle example, Ivan Illich argued in “Energy and Equity” that vehicles shouldn’t be allowed to travel faster than bicycles: “Participatory democracy demands low-energy technology, and free people must travel the road to productive social relations at the speed of a bicycle… Beyond a certain speed, motorized vehicles create remoteness which they alone can shrink.”

I don’t know how convinced I am by that specific argument in that situation. But I do think it’s important not to assume that a free-market “build the right empowering thing and it will push out the bad things!” solution is the only option we have available. It’s nice when that works, but sometimes you have to actually do politics, and negotiate agreements as a society about how things should work. I don’t think that’s the same thing as “authoritarianism”. (Naturally, minarchists do think these are the same thing, but I imagine many of us have less radical views than that.)

Dave Liepmann 2024-09-22 16:37:47

True – I pose the "give people powerful tools" argument only in the context of tools for thought

Jason Morris 2024-09-22 16:42:30

Walking is active transportation, and you have to walk to a bus stop. Buses amplify the effect of that walking. So I still don't see that distinction. I do see the all-out distinction, and I intuitively care about it. And for people who are explicitly seeking to build things that help people grow in certain ways, it's great that such things exist. But I don't see where we can get any sort of universal moral preference between those tools and tools that make a certain kind of effort unnecessary. I have a neuro-diversity that makes certain kinds of mental activity inherently difficult for me. Is it wrong to build me a tool that allows me to not have to do that sort of thing? Does a "to do" feature atrophy human memory? The idea that this all-out idea can be a universal preference seems absurd. You "should" also design for accessibility, and that can be mutually exclusive with designing for growth.

Kartik Agaram 2024-09-22 18:16:08

My version of (I think) Jason's point: were calculators a mistake?

Dave Liepmann 2024-09-22 18:21:31

I'm not sure we're talking about a universal preference. I think I'm talking about a preference for a quality to be present in certain kinds of tools, and the preference grows the nearer the tool is to a certain category which includes tools for thought and sources of truth.

Maybe we're having trouble because this quality is vague: "[being] a material to be shaped", extensibility - interactivity - malleability...that it can be more than just itself. That it has no arbitrary restrictions on its participation in the network of all other tools.

Does the "to do" feature support being shaped to the user's needs?

(Is it true to say that dumb calculators only do rote tasks that we don't feel need to be shaped beyond the supported functions? Or I suppose someone who wants programmable functions has the option to get a scientific calculator, so the question feels moot.)

Kartik Agaram 2024-09-22 18:47:45

@Dave Liepmann yeah. I'll state the extreme version of this question just because somebody should:

If we start using the computer to simulate the computer, does some human capability wither?

(💬 #devlog-together@2024-09-22)

I'd say no because I do it pretty poorly anyway. But I can't really reason about it without referring to what I'd rather be doing. And there's a fundamental incommensurability between humans there. We want to spend our time doing different things.

Human beings have always used technology to compete for agency. Gain agency for the things you want to do, even if it takes away agency from others, or causes others at large to stop doing some things that they might otherwise have found pleasurable and worthwhile.

I think I agree with Jason's original point that it's a matter of degree, not type.

Jason Morris 2024-09-22 18:55:47

I don't think the malleability property is fundamental to the car/bike idea. A bike cannot be molded into something else. Moldable things are one way to achieve bike-ness, but not the only way. That said, I'm persuaded that bike-ness is less frequently sought out in user-facing tools than perhaps would be ideal. "There should be more bike-ness" is a lot easier claim to swallow than "cars are bad, only make bikes."

Kartik Agaram 2024-09-22 19:13:09

For me this conversation also connects up with my comment above about the parallel trade networks of Mammon vs Muse.

When you make a film you derive some pleasure from the act. That pleasure is separate from the pleasure I derive from watching it, or from the money I pay to watch it.

For the watcher, there is also something allegedly nice about access to an infinite stream of videos that provide some ineffable quality one can't articulate. (e.g. TikTok)

It's plausible that the producer's desire is not more elevated than the consumer's.

AI is potentially a force to decouple these two trade networks. Do what you do for the pleasure of the act, or for the extrinsic payoff. We may be forced to choose between the two at all levels of granularity.

Of course, the addition of this force to the world requires us to come up with countervailing forces:

  • To spread the word about the joys of creating beyond the resulting artifact.
  • To spread the leisure around so everyone is able to participate in the Muse network.