
Marcelle Rusu (they/them) 2023-08-28 14:04:13

For some time I've thought that type systems are essentially ad-hoc logic languages. So... why not use a full-on logic language (Prolog or something) to statically analyze your codebase instead of continually patching a type system? It appears to me that logic languages are simpler and more expressive than most type systems. I'm starting to work on this for my language, creating a logic language in an effort to also understand logic programming better.

Another advantage I can think of: if the logic language is expressive enough to type the most advanced languages as well as the weakest, it could serve as a shared interface for languages to understand at least parts of each other's type systems, and as a tool for users to add static analysis specific to their project.

I'm basically thinking something like this.

# here's a sample program in my (object oriented) language

fn add(a, b) = a + b

add(1, 2)

Generated logic program in pseudo-syntax (#A is a pattern checking that a value is of type A, ?a is like Prolog's A, {...} is a map literal):

-- preloaded facts

lookup(#Number, :+, {args: [#Number, #Number], ret: #Number}).



-- program

add(?a, ?b, ?ret) :- lookup(?a, :+, {args: [?a, ?b], ret: ?ret}).

?- add(1, 2, ?_).

The specifics of my logic language are likely flawed, but I'm curious about the general idea of using a fully-fledged logic language to generate and run a program in, instead of generating a typed AST and running an ad-hoc type system.
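
To make the general idea concrete, here is a minimal sketch in Python (hypothetical names like `check_add` and a hand-written fact table; a real system would generate the facts from the parsed program):

```python
# Sketch: type checking as fact lookup rather than a built-in judgment.
# Fact base maps (receiver type, operator, argument types) -> return type,
# mirroring: lookup(#Number, :+, {args: [#Number, #Number], ret: #Number}).
facts = {
    ("Number", "+", ("Number", "Number")): "Number",
    ("String", "+", ("String", "String")): "String",
}

def type_of(value):
    """Map a literal to its type tag."""
    if isinstance(value, (int, float)):
        return "Number"
    if isinstance(value, str):
        return "String"
    raise TypeError(f"unknown literal: {value!r}")

def check_add(a, b):
    """Query the fact base: is `a + b` well-typed, and what does it return?"""
    ta, tb = type_of(a), type_of(b)
    ret = facts.get((ta, "+", (ta, tb)))
    if ret is None:
        raise TypeError(f"no rule for {ta} + {tb}")
    return ret

print(check_add(1, 2))       # Number
print(check_add("a", "b"))   # String
```

A real implementation would use full unification and generated facts, but the shape - compiler emits facts, checker runs queries - is the same.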

My question is, what are people's thoughts on this? Any previous work in this area? Any obvious flaws?

Paul Tarvydas 2023-08-28 14:57:50

Massive agreement. Wish: I wish that type-checking was built BESIDE languages instead of INSIDE-OF languages. Observation: While sitting through a reading of PLFA (Agda) [with CompSci Cabal], I developed the distinct feeling that I was looking at 2 languages, one for programming a type system, and, one for programming an implementation (an after-thought in Agda, IMO) ; to me, “Dependent Types” is just more-programming-language-constructs for a type language (DT adds parameters) ; Agda, et al, seems to be inching towards the re-discovery that type systems can be / should be programmed. The benefit of Relational Languages (PROLOG, miniKanren, etc.) is that they specify “what is wanted” (relations) instead of “how to create steps for achieving what is wanted” (3GLs like Python, Rust, Haskell, etc.)

Marcelle Rusu (they/them) 2023-08-28 15:17:39

To me, type systems are in the awkward position of needing to squeeze into an existing language (get out of the way) while also being powerful enough to express complex things like DSLs.

Type systems have to do all sorts of optimizations to make unification and querying the codebase manageable.

Whereas it seems to me that these are solved problems - databases.

I imagine a logic system that learns from the many decades of high-performance databases could be very fast (think indexes, query caching, materialized views, etc.).
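
As a toy illustration of the database analogy (hypothetical fact layout, Python): indexing facts the way a database indexes a column means a query scans one small bucket instead of the whole store:

```python
from collections import defaultdict

# A flat store of facts, as the compiler might emit them.
facts = [
    ("lookup", "Number", "+", "Number"),
    ("lookup", "Number", "-", "Number"),
    ("lookup", "String", "+", "String"),
]

# Build an index on (predicate, first argument) - like a DB index on a column.
index = defaultdict(list)
for fact in facts:
    index[(fact[0], fact[1])].append(fact)

# A query now touches only the matching bucket, not the whole store.
print(index[("lookup", "Number")])
```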

Justin Blank 2023-08-28 16:13:50

Rust embeds a datalog system in its type checker for borrow checking, if I’m not mistaken. github.com/rust-lang/datafrog

Konrad Hinsen 2023-08-28 18:51:39

The only example of a language with a "sidecar" type checker is Shen (shenlanguage.org). It's a Lisp that includes a logic engine (roughly Prolog with lispy syntax), which is used to implement the type checker that can be turned on and off as preferred. And the type checker is just another Shen program, so nothing (except the effort) prevents you from substituting your own.

📝 The Shen Group

Shen Programming Language

Konrad Hinsen 2023-08-28 18:54:08

I'd also love to see static checkers outside of languages, allowing multiple checkers and in particular domain-specific ones. Ideally, I'd build my own checker as my understanding of my problem domain evolves.

I somewhat suspect that the traditional tight integration of type systems and languages comes from the benefit that type checking brings to optimization.

Marcelle Rusu (they/them) 2023-08-28 19:11:47

Yea adding project specific rules was the original inspiration that got me thinking beyond conventional type systems.

Marcelle Rusu (they/them) 2023-08-28 19:15:32

Linters can work, but they duplicate all the work of the original type system, usually don't even do proper type checking... and they usually have their own build step, etc.

What if my additional rules could be understood by a common LSP-like system?

Marcelle Rusu (they/them) 2023-08-28 19:55:45

I somewhat suspect that the traditional tight integration of type systems and languages comes from the benefit that type checking brings to optimization.

There's no reason why the database generated from the logic language can't stick around to be queried for optimizations, right?

One thing that I imagine will require some work is interpreting errors (no answers found) from running these programs. I'm imagining that when this logic program runs it'll have to keep track of which queries/predicates succeeded and which didn't, and build something like a stack trace - maybe even track that in a logic DB that can also be queried, to emit domain-specific (nice) error messages.
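
A rough sketch of that failure tracking (hypothetical `prove` function, Python): every failed goal records the path that led to it, giving the checker raw material for a readable error instead of a bare "no":

```python
# Record every goal that fails, with the chain of goals above it.
failures = []

def prove(goal, facts, trace=()):
    """Succeed if goal is a known fact; otherwise log the failing path."""
    if goal in facts:
        return True
    failures.append(trace + (goal,))  # remember the failing goal chain
    return False

facts = {("lookup", "Number", "+", "Number")}
ok = prove(("lookup", "String", "+", "Number"), facts)
if not ok:
    for path in failures:
        print("type error, failed goal chain:", " -> ".join(map(str, path)))
```

A real resolver would thread the trace through every subgoal, but even this much is enough to report *which* predicate had no answer.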

I think the big idea is having a database that contains really detailed statically analyzed facts which can be used to check correctness, do optimizations, or even help with automated refactoring tools. So it might not even need to be 1 logic language, but a shared database system which all the logic languages can speak & store facts & query from.

Konrad Hinsen 2023-08-29 05:22:08

There's no reason why the database generated from the logic language can't stick around to be queried for optimizations, right?

Technically, no. Socially, it means that the generation of important information is no longer under control of the compiler writer, and even language designer. They would have to cooperate with the users of their language and tool, and negotiate protocols for information exchange. It's more convenient to be a dictator.

William Taysom 2023-08-29 07:02:24

It's curious that value types are so favored among all the useful summary/analysis operations we can do on programming languages. I mean similar techniques can track references, side-effects, entity life-cycles, resource usage, even evaluation of the language itself.

Kartik Agaram 2023-08-30 04:01:36

One reason types are built inside languages is undecidability. We tend to want the type checker to always terminate, and to err on the side of soundness rather than completeness. This is why the type language is weaker, whereas the "main" language can be Turing-complete.

Konrad Hinsen 2023-08-30 09:02:52

I'd reverse the causality of that argument. It's because the type checker is part of the compiler that type checking has to be decidable and prioritize soundness over completeness (and lots of other nice properties we don't get, such as fitness for the application domain).

Marcelle Rusu (they/them) 2023-08-30 14:03:11

I don't know if I understand all the arguments against turing complete type systems. Could someone spell it out for me?

Is it that it'd be hard to debug infinite recursion in type systems? Do you worry people will misuse it to write fancy compile-time programs that delay builds?

Marcelle Rusu (they/them) 2023-08-30 14:11:53

My second point seems to be possible in turing-incomplete languages too.

Marcelle Rusu (they/them) 2023-08-30 14:17:00

Something that I think would be unique to bringing in something like prolog is you could use its time travelling debugger to debug especially tricky types. I'd argue against tricky types/constraints in general, but when you're stuck with a library that goes wild with types you could at least dig in vs just getting a dump of useless text.

Konrad Hinsen 2023-08-30 18:54:42

@Marcelle Rusu (they/them) The main issue with a Turing-complete type system is that the compiler might get stuck in an endless loop.
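
The failure mode is easy to demonstrate, as is the usual mitigation; a sketch (hypothetical rule encoding, Python) of a self-referential typing rule plus a depth limit that turns non-termination into a reportable error:

```python
# "To prove T, prove T" - a typing rule that makes naive resolution loop.
rules = {"T": ["T"]}

def prove(goal, depth=0, limit=50):
    """Resolve a goal against the rules; bail out past a depth limit."""
    if depth > limit:
        raise RecursionError(f"type checking gave up at depth {limit}: {goal}")
    for sub in rules.get(goal, []):
        return prove(sub, depth + 1, limit)
    return True  # no subgoals left to prove

try:
    prove("T")
except RecursionError as e:
    print(e)
```

Depth limits (or fuel, or tabling) are the standard way practical checkers keep a Turing-complete meta-level from hanging the compiler.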

William Taysom 2023-09-01 04:23:50

Turing-complete runtimes suffer from a similar problem. 😳

Kartik Agaram 2023-09-01 14:41:40

Turing completeness is indeed always a mixed blessing. But the problems matter more than the benefits at "meta" levels of scaffolding around a program. If you could get into an infinite loop before you even start running your program, the UX is much worse.

Kartik Agaram 2023-09-01 14:51:25

Hmm, going back to the original question it might work with some additional tooling. We typically have just a run command, but if the compiler supported running just the parser, or just up to the typechecker, maybe the UX becomes a non-issue. Yes, the programmer now needs to become aware of the internals of the compiler to some extent. But it might be a good trade-off.

Paul Tarvydas 2023-08-28 14:35:21

Martin Sandin 2023-08-28 14:48:07

I might be missing context here, but the fact that type systems are logic languages should be uncontroversial, given that that's what the Curry-Howard correspondence is about. Though I'm not very well read, I think that using "complete" generic logic languages/solvers (as a component) for solving type equations ought to be well-trodden ground; my first thought went to github.com/rust-lang/chalk as one example. I'd be looking at what one might give up in doing so as the reason why it doesn't happen more: speed? guaranteed termination? good error messages?
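
For reference, the core of "solving type equations" is unification, the same mechanism a logic solver uses; a minimal sketch (hypothetical term encoding: variables are strings starting with "?", compound types are tuples):

```python
def walk(t, subst):
    """Follow substitutions until a non-variable or unbound variable."""
    while isinstance(t, str) and t.startswith("?") and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Extend subst so a == b, or return None if the equation has no solution."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # mismatched constructors: a type error

# Solve the equation fn(?a, ?b) = fn(Number, Number)
s = unify(("fn", "?a", "?b"), ("fn", "Number", "Number"), {})
print(s)  # {'?a': 'Number', '?b': 'Number'}
```

(An occurs check is omitted for brevity; production solvers like chalk add it, plus termination and error-reporting machinery.)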

Marcelle Rusu (they/them) 2023-08-28 14:51:32

I do think something like Prolog might not be fit for it, as the error messages seem poor - but it feels like a solvable problem: something like a stack trace instead of the bare "no" that Prolog can give.

Marcelle Rusu (they/them) 2023-08-28 14:52:50

chalk looks pretty cool, thanks.

Ivan Reese 2023-08-28 14:52:58

Quick reminder — please post replies as threaded replies. Thanks!

Paul Tarvydas 2023-08-28 14:59:36

My fault, yet again. Where’s Jef Raskin when you need him?

Jim Meyer 2023-08-28 16:45:19

A one-minute interaction with a digital product at 60 FPS on an HD RGB screen flows through a creative/generative space of 125,241,246,351,360,000 pixel variations.

We need spaceships to explore this space, but we got rectangles with drag handles instead.

Jim Meyer 2023-08-28 17:41:38

Spaceships to explore pixels in products:

Scrubbing controls to cycle through:

  • Different localizations side-by-side

  • Data distributions that break layouts

  • Theme variations that cause issues

  • Screen sizes

These spaces are largely unexplored today, reachable only through manual means.

Eli Mellen 2023-08-28 16:57:24

A question for y'all doing thinking on future-of-coding things:

When/if you think about the accessibility (read here as “a11y”) of the future of coding do you consider accessibility as an attribute of the folks using your thing (e.g. a need) or an attribute of your design (e.g. a feature)?

Eli Mellen 2023-08-28 17:01:35

follow up question: does that distinction matter?

Samuel Timbó 2023-08-28 17:33:01

I think there's actually a trichotomy on that term that is worth exploring:

1) There's the industry standard, like "a11y", which I would say is basically always great to have correctly implemented on any production app; generally if the tool can be consistently navigated/manipulated keyboard-only and readable by assistive technology, it is a good sign the UI architecture is sound;

2) There's the interpretation you mentioned, designing with a particular user need in mind: for instance, one can literally blindfold oneself and optimize the tool for that particular usage context; that would quite surely make the tool "more accessible"; also, clearly great powers lie in optimizing for all human senses;

3) There's a third, quite literal, interpretation of "accessibility", as simply "having access at all to something". I personally have used that in the context of "live/visual programming can make technology more accessible", even simply because of ease of use, and mobile, etc. for instance.

Eli Mellen 2023-08-28 18:41:01

I like that break down a lot!

Sort of 3 facets of one thing, but all different. I’m imagining a radar chart mapping them, now!

Kartik Agaram 2023-08-29 02:04:01

This seems like one of those many places where a noun obscures where an adjective would clarify. Tools are for folks, folks have abilities. How accessible a tool is for someone depends on fit. Not very satisfying, but at least phrasing it this way is in principle answerable in a concrete case. But "this tool has accessibility"? Or "this tool is for accessibility people"? Neither seems like a meaningful statement.

Jack Rusher 2023-08-30 08:02:49

Kinda off to the side: I’d say that when making production stuff I take the kind of accessibility you’re talking about quite seriously, but when doing more research-y stuff I leave it aside during the initial phases because it’s so much harder to try new ideas while carrying the full weight of past standards.

Eli Mellen 2023-08-30 11:02:09

but when doing more research-y stuff I leave it aside during the initial phases because it’s so much harder to try new ideas while carrying the full weight of past standards.

Do you think this is always true? (Showing my cards: my whole thing and job revolve around accessibility, and sort of trying to advocate for it to be a core part of the design/considerations for stuff... not just cream cheese applied at the end, or blueberries shoved into an already-baked muffin.)

As example of how it could contribute to the design of a thing:

  • say you need some metric to track the impact of enhancing accessibility?
  • an easy(ish) way to track this impact is to track completions wherein you define some set of tasks that can be completed within a system and then track what % of folks start the thing vs finish it
  • leveraging accessibility-first design when approaching that situation empowers you to start thinking about "escape hatches," and ways to optimize for completion. Whereas if you just try to make it accessible at the end you'd remediate any keyboard traps you accidentally made, maybe with this approach you would ask "if someone were to get trapped at this step what would they do? what way out and through can we provide even if trapped?"
Jack Rusher 2023-08-30 11:13:16

How I mean this is that if I want to experiment with, say, AR goggles, I’m not going to start from the position of “how will blind people use this display device?” Or if I want to build a two-handed input device, I’m not going to start from “how would a one-handed person use this?”

Jack Rusher 2023-08-30 11:15:45

That said, there are many situations in more established input/output modalities (touchscreens, for example) where taking design cues from accessibility research improves the interface for everyone, which I think is broadly under-appreciated.

Eli Mellen 2023-08-30 11:19:47

Word. That makes sense. Again, sort of showing my bias: I think across tech "accessibility" is often framed as a matter of input methodology when it could be framed as a core design principle, akin to color theory. I've found folks are more readily able to accept this idea when you don't frame it as accessibility, but instead approach it like ~service design~.

Eli Mellen 2023-08-30 11:24:42

I am excited about seeing what Apple does with AR/VR, because I think desktop computing mostly ignores the human body (for both good and ill), phones and mobile devices started to force consideration of the body, and I ~think~ AR/VR could do some interesting stuff to ground computation in the body.

Eli Mellen 2023-08-30 11:25:11

That said…all the talking and demos and reading I’ve done has me thinking that’s still a ways off

Alex Cruise 2023-08-30 17:32:34

Metadata-rich systems are well positioned for the kinds of introspection that make a11y easier... e.g. this widget isn't just a text box with a label, but a projection into UX space of a field object in a data dictionary somewhere. But metadata-richness usually leads to very poor legibility: every use case is an encoding in some lower-level language.

Samuel Timbó 2023-08-31 18:48:02

I think we need a name for this more general design principle @Eli Mellen is trying to articulate. I particularly like the tie to input (methods and contexts), because it makes it very tangible, and the emergent benefits can often be surprising (mostly thinking about programming interfaces here):

1) If my interface can be completely controllable/readable by voice, then any person (not only people with impaired vision for any reason) could use it in many arbitrary situations; I can program while cooking;

2) Similarly, if my system can be controlled completely by hand gestures, then I don't have to speak, or even touch the keyboard/mouse/touchpad; I can then build something very casually, like a game, using muscle memory alone;

3) Interesting research can be done on the ergonomics of interfaces under arbitrary handicaps - "Single-Handed Programming", "Single-Finger Coding", "Software Bending" (programming using body gestures, which doesn't even exist yet) - and the particularities of their many design challenges;

4) The fact that one can program on the phone at all is a very sophisticated accessibility feat already, but (at least) that front gets talked about as "Mobile First" design; it's still a very rare feat in the current software ecosystem, especially for coding;

5) Repeat that for many devices (what about programming only with a joystick?) or body/mental state conditions (I have very little/high energy right now...) or environmental (would this tool be more helpful to code underwater?)... etc... etc...

These are a few examples of this unnamed phenomenon. I will try to compress it: "if you design with limitations in mind, the overall solution might improve".

What is the name?

Eli Mellen 2023-08-31 19:41:53

I’ve run into this kind of thing and seen it called stuff like

  • accessibility beyond compliance
  • inclusive design (adjacent to participatory design )
  • designing for resiliency

But I don’t think they really capture what you are doing a great job of describing, @Samuel Timbó.

I wonder about something like reclaiming Interaction Design, but with a broader scope of what interaction encompasses?

Steve Dekorte 2023-09-03 00:11:32

📝 sophia :chart: (8723/30000 days alive) on X

I feel like it’s not obvious to consumers how monumental and expensive regular software is. iOS cost about as much as the Manhattan project (~$20b). Google search cost about as much as the ISS (~$100b).

Personal Dynamic Media 2023-09-03 00:15:23

It seems like more evidence that the software crisis never actually ended. We kind of just gave up on solving it.

en.m.wikipedia.org/wiki/Software_crisis

📝 Software crisis

Software crisis is a term used in the early days of computing science for the difficulty of writing useful and efficient computer programs in the required time. The software crisis was due to the rapid increases in computer power and the complexity of the problems that could not be tackled. With the increase in the complexity of the software, many software problems arose because existing methods were inadequate.

Daniel Buckmaster 2023-09-03 01:57:36

I take issue with the term "regular" for iOS and Google! They might be extremely widespread, but they don't seem to me in any way typical...

Kartik Agaram 2023-09-03 03:21:17

Things naturally cost more if they bring in more revenue. If you help to gradually bring in $100M/yr in revenue, $20M/yr will almost naturally stick to your palms. But that doesn't necessarily compare with efficiencies from an existential war 80 years ago (did the calculation account for inflation?!). In my experience revenues are justification for reduced efficiency. And none of this accounts for the value to society, which is a fraction of the cost if you could somehow (naive, I know) stop trying to think up new ways to juice revenues in user hostile ways.

Of course, how do you build just the valuable parts without the hostile parts? You got me..

(This comment partly inspired by the sprawling thread spawned by wandering.shop/@cstross/110989030686287315)

📝 Charlie Stross (@cstross@wandering.shop)

@hywan@fosstodon.org @Keltounet@mastodon.social Question: "can the state of shells be improved enough to overcome the inertia of sticking to what you know?"

This is the wrong question. It presupposes zero cost of transition, while the cognitive workload of learning a new shell rises exponentially with age (hint: I'm nearly 60, shells are harder to adapt to than a new GUI). Stability and continuity are essential prerequisites to productivity!

Timothy Johnson 2023-09-03 04:01:54

The whole idea that a project the size of Google search can be assigned a specific cost misunderstands how large software projects work.

Any sufficiently large software project eventually reaches a tipping point where it's too complex to refactor without breaking it. And at that point the complexity and cost of the system spiral out of control.

I work on a distributed database product that's built on SQL Server, which reached that point many years ago. Only a handful of people with decades of experience dare to touch the lower levels of the codebase - most of us just glue new pieces on top as best we can.

But if a project is lucrative enough, companies will keep shoveling in resources to keep it going. And the SQL org brings in billions, so Microsoft is fine with paying thousands of engineers to work there.

At that scale, it doesn't really make sense to talk about the cost of a software project, since it's bounded only by the revenue the product brings in.

And that applies even to a project like SQL Server, which is built purely to provide a service and (as far as I know) doesn't include "hostile" features designed to extract additional revenue from users.

Timothy Johnson 2023-09-03 04:03:45

@Personal Dynamic Media Now that I've read the link, I suppose this spiraling of complexity is what you're referring to as the "software crisis"? Do you think that's a problem that actually can be solved?

Personal Dynamic Media 2023-09-03 16:34:42

@Timothy Johnson I think spiraling complexity is a main part of it, but I don't claim to fully understand it.

I believe it can be solved by using smaller teams and technologies that increase individual productivity instead of technologies that try to protect hundreds of programmers from stepping on each other, like Java. I can't prove it, though.

I'm very fond of Seymour Cray's response to Thomas Watson Jr. From en.m.wikipedia.org/wiki/CDC_6600

The 6600 was three times faster than the previous record-holder, the IBM 7030 Stretch; this alarmed IBM. Then-CEO Thomas Watson Jr. wrote a memo to his employees on August 28, 1963: "Last week, Control Data ... announced the 6600 system. I understand that in the laboratory developing the system there are only 34 people including the janitor. Of these, 14 are engineers and 4 are programmers ... Contrasting this modest effort with our vast development activities, I fail to understand why we have lost our industry leadership position by letting someone else offer the world's most powerful computer." Cray's reply was sardonic: "It seems like Mr. Watson has answered his own question."

📝 CDC 6600

The CDC 6600 was the flagship of the 6000 series of mainframe computer systems manufactured by Control Data Corporation. Generally considered to be the first successful supercomputer, it outperformed the industry's prior recordholder, the IBM 7030 Stretch, by a factor of three. With performance of up to three megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600. The first CDC 6600s were delivered in 1965 to Livermore and Los Alamos. They quickly became a must-have system in high-end scientific and mathematical computing, with systems being delivered to Courant Institute of Mathematical Sciences, CERN, the Lawrence Radiation Laboratory, and many others. At least 100 were delivered in total. A CDC 6600 is on display at the Computer History Museum in Mountain View, California. The only running CDC 6000 series machine has been restored by Living Computers: Museum + Labs.

Timothy Johnson 2023-09-03 18:18:04

@Personal Dynamic Media Yes, there are large coordination costs when too many people are involved in a project. And that applies on the political side at least as much as the technical side - I haven't ever seen an effective decision get made if more than three people are actively involved.

But large companies are also wary of trusting any single individual with too much responsibility. They have to be prepared for anyone to jump to a different company at any moment. So just like the software, the organization itself is a distributed system with a high level of redundancy.

Startups usually can't afford that redundancy, at least in the early stages. I think they just accept a high risk of failure instead. And most of them do fail, but the handful that succeed are more productive by orders of magnitude.

Personal Dynamic Media 2023-09-03 18:20:47

@Timothy Johnson I think the secret there is to take Peter Naur seriously and make sure that at least three people in your organization always have the theory of the program. Once you get down to two or, God forbid, one, you put another person on the team and you make that person do the next five projects, with oversight and advice from the person who has the theory.