
Lukas Süss 2023-10-16 11:48:58

Hey everyone, – I'm looking for a demo from quite a few years ago that I can only vaguely remember. It

was using point-free code in a table layout (related to stack-based programming).

Some cells merged and changed on evaluation, IIRC.

I initially thought that it was from Jonathan Edwards, but I'm not so sure anymore.

I wasn't particularly convinced by the idea, but some discussion

about stack-based programming came up where this would be a nice reference, I guess.

Does anyone here still have a link to that, maybe?

Lukas Süss 2023-10-16 17:24:18

Thanks for the reply. Though it's not the one I'm looking for. I think it was a light gray table, maybe just a mockup on a website. Point-free composed functions in adjacent columns of a table, and not a dedicated stack location but maybe a freely choosable evaluation order. Not sure anymore.

Lukas Süss 2023-10-19 12:26:21

Joshua Horowitz – yes, exactly that one – thanks a lot :)

Lukas Süss 2023-10-19 12:33:42

I think I've seen an earlier prototype but this is clearly a successor of that.

Lukas Süss 2023-10-16 12:41:56

On another note:

– – –

Here's an interesting new paper on advanced structural editing with a strong "typing normally" focus:

Gradual Structure Editing with Obligations

tylr.fun/vlhcc23.pdf

building on tylr.fun

by @David Moon @andrew blinn @Cyrus Omar

– – –

And some work on seemingly lower-level, perhaps more near-term practically applicable projectional editing:

Projectional Editors for JSON-Based DSLs

arxiv.org/abs/2307.11260

prong-editor.netlify.app

by @Andrew McNutt @Ravi Chugh

Mike Austin 2023-10-19 16:35:02

Thoughts on AI code assistants? elegantthemes.com/blog/wordpress/best-ai-coding-assistan

It doesn't seem that far off from easily prototyping things like games. An AI could understand what a screen is, a sprite/character, movement, etc. Heck, just tell it to generate a random game and fine tune it.

Eli Mellen 2023-10-19 16:38:27

Not a solid thought, but a link to interesting research that just dropped.

Pluralsight shared some interesting research they just completed. The research seeks to validate a framework that can be used to understand developers’ relationship to AI.

Quoting from the data highlights of the landing page:

  • 43-45% of developers studied showed evidence of worry, anxiety and fear about whether they could succeed in this era of rapid generative-AI adoption with their current technical skill sets.
  • Learning culture and belonging on software teams predicted a decrease in AI Skill Threat & an increase in both individual developer productivity and overall team effectiveness.
  • 74% of software developers are planning to upskill in AI-assisted coding. However, there are important emerging equity gaps, with female developers and LGBTQ+ developers reporting significantly lower intent to upskill. On the other hand, Racially Minoritized developers reported significantly higher intentions to upskill.
  • 56% of Racially Minoritized developers reported a negative perception of AI Quality, compared with 28% of all developers.
Lukas Süss 2023-10-19 16:44:39

Seems @andrew blinn gave a talk on AI as a programming assistant recently.

The Hazel system being the context.

andrewblinn.com/papers/2023-MWPLS-Type-directed-Prompt-Construction-for-LLM-powered-Programming-Assistants.pdf

Not aware of a recording ATM.

Mike Austin 2023-10-19 16:44:54

I confuse AI with genetic algorithms sometimes, and I think my example was more GA unintentionally.

Lukas Süss 2023-10-19 16:57:35

I think that AI really, really needs some memory beyond the current naive context-window approach. Some non-AI smart (back)tracing of dependencies that helps fill that memory with what actually matters. – ATM I don't have the first clue about vector databases beyond the name, or whether they're a good or bad idea.

Wouldn't one want model-weight modifiers?

Or are vector databases just that?

– – –

It just seems obvious to me that storing stuff in a re-read context window is (while the simplest) also the most stupid approach possible.

– – –

Locally running open-source AI would also be nice, to avoid running into exploding fees (for just one thing).

andrew blinn 2023-10-19 20:44:19

no recording unfortunately. paper eventually though. in brief I think llms have huge potential here but also tons of failure cases, hence trying to deeply integrate them with some kind of structured semantic analysis. at the least, retrieving similar code via embeddings feels like both under- and overkill, given that we could use types and variable binding instead to precisely source relevant code. there are certainly interesting hybrid options here, like if there's no code available having the relevant types, using a vector database of type embeddings to retrieve 'similar' types, and then proceeding semantically. this all relies on being in a language with nice, expressive types though, and being in a codebase that doesn't (say) just use strings for everything. in the broader sense of assistance, beyond simple code completion, we're looking into combining LaToza's work on programming strategies with type-driven edit-calculus-based approaches to create scaffolded multi-stage processes to perform non-trivial multi-location codebase edits (2nd-last slide has a very primitive mockup of what this might look like)
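(To make the vector-database idea in this thread concrete: here is a minimal sketch of embedding-based retrieval by cosine similarity, which is essentially what a vector database does under the hood. The type signatures and embedding vectors are entirely made up for illustration; this is not the pipeline from the talk.)

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: np.ndarray, index: dict, k: int = 2) -> list:
    """Return the k entries whose embeddings are most similar to the query."""
    ranked = sorted(index, key=lambda name: cosine_similarity(query, index[name]),
                    reverse=True)
    return ranked[:k]

# Toy "index" of type signatures mapped to fabricated embedding vectors.
index = {
    "List a -> Int":        np.array([0.9, 0.1, 0.0]),
    "String -> String":     np.array([0.0, 0.2, 0.9]),
    "List Int -> List Int": np.array([0.8, 0.3, 0.1]),
}

query = np.array([0.85, 0.2, 0.05])  # pretend: embedding of the type at the cursor
print(retrieve(query, index))  # → ['List a -> Int', 'List Int -> List Int']
```

The point of the hybrid idea above is that the retrieval step could run over type embeddings rather than raw code embeddings, then hand the candidates to a semantic (type- and binding-aware) filter.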

Ivan Reese 2023-10-20 01:06:41

Mary Rose Cook (friend of the show) is working on something exactly related to this, in the context of games. I don't think it's public yet, so I can't share specifics, but keep an eye on her twitter.

In short, yes, a lot of the "I think AI needs _" or "AI could do _, but _" ideas in this thread are things she's engaging with in this work.

Lukas Süss 2023-10-19 16:39:15

Concatenative Programming Projection

2022 – Interleaved 2D Notation for Concatenative Programming – by @Michael Homer

Some quickly enlightening demo videos here:

michael.homer.nz/Publications/LIVE2022/article <<< THIS IS GOOD

Landing page: michael.homer.nz/Publications/PAINT2022

Tryable in the browser: homepages.ecs.vuw.ac.nz/~mwh/demos/p22-2d-concat

Direct link to the paper: mwh.nz/pdf/paint2022

It's a concatenative language.

It's not a stack-based language. Slight differences, I guess.

Also arguably point-free, with the values unnamed but live values still shown.

Comparison to stack based languages

Similarities to a stack based language:

– also no variable names

– also the typical operations: dup, swap, dig

– also only variables with an arity of one or more (aka functions) are shown (and literals)

Differences:

– There's no reverse Polish notation.

– Unlike in a stack-based language, there's no single stack.

Rather, the representation makes opportunities for parallel evaluation quite obvious.

– Variables have no names but still display their live value, like in a spreadsheet (not referring to the layout).
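For reference, the stack words mentioned in the comparison (dup, swap, dig) can be sketched with a toy evaluator. This is just an illustration of the stack-based side of the comparison under common concatenative conventions, not Homer's notation or implementation:

```python
def run(program, stack=None):
    """Evaluate a point-free program: a list of literals and named words."""
    stack = list(stack or [])
    words = {
        "dup":  lambda s: s.append(s[-1]),      # a -- a a
        "swap": lambda s: s.append(s.pop(-2)),  # a b -- b a
        "dig":  lambda s: s.append(s.pop(-3)),  # a b c -- b c a
        "add":  lambda s: s.append(s.pop() + s.pop()),
        "mul":  lambda s: s.append(s.pop() * s.pop()),
    }
    for token in program:
        if token in words:
            words[token](stack)   # named word: mutate the stack
        else:
            stack.append(token)   # literal: push it
    return stack

print(run([3, "dup", "mul"]))  # → [9]   (square the top of the stack)
print(run([1, 2, "swap"]))     # → [2, 1]
```

Note how everything flows through the single implicit stack; Homer's 2D projection drops exactly this single-stack constraint while keeping the namelessness.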

It seems this is actually not a language but a projection style.

This is nice, meaning that to some degree other languages can potentially be projected into this representation.

Well, plus-minus some issues: syntactic sugar mostly not carrying over, readability in other projections.

When switching to a different projection one may want to assign human-readable variable names, though, rather than auto-generated ones.

Maybe this would be a viable additional projection target for some other languages (unison)?

No clue though how this would interact with algebraic effects.

Comparison to ALDs

( apm.bplaced.net/w/index.php?title=Annotated_lambda_diagram )

Relation of Michael Homer's model to annotated lambda diagrams:

★ ALDs give the same obviousness of parallelism opportunities

★ Unclear which representation is visually denser, but likely not the ALDs

★ ALDs (as in the current mockups) are not point-free. Variable names appear both …

– at the head of the function definition (and tops of let- and in- blocks) and

– as annotations of the horizontal value lines (i.e. as the arguments to functions)

★ To make the ALD code projection more aligned with Michael Homer's model:

– replace value names with live values

– collapse let-in-blocks by substitution

(live values could still be added as "syngraphic" sugar in extended value lines, I digress …)

★ flip & dig (argument permutators)

– They both vanish in the ALD code projection no matter their location.

– They are just permuting the arguments by swapping application lines and all preceding dependents.

★ dup

– it only vanishes at the top level, as it turns into two forks.

– otherwise it is less trivial, as it induces an unavoidable let-in block.

(Uncurrying into tuples is a bad alternative as it hides away ALD circuitry, the whole point of ALDs)

Tom Lieber 2023-10-21 22:11:24

I know it's ancient tech, but there are so few parts of my computer where I can freely mix styled text, code, and data, and Mathematica is the only one where doing so feels remotely natural.

I was inspired to post something by recent workflow breakthroughs in how I keep my lab notebook, where I slice-and-dice tabular data from experiment results in-between my paragraphs of stream-of-consciousness analysis. It's a lovely way to work. My journal entries build on one another in ways they never did before, because I can use code to put the best view(s) of the data right into the notebook.

image.png

Konrad Hinsen 2023-10-22 07:58:58

Ancient tech, but also the only one ever designed for this kind of use case without compromises. All of today's Open Source clones of Mathematica notebooks started with the constraint of building on pre-existing technology that was designed for different use cases. Perhaps even worse, they suffered from a rush to widespread adoption before battle-testing the design.

I have often said in conferences that Jupyter will be tomorrow's legacy format that everyone will wish would go away. Fortunately nobody carries rotten tomatoes to such events, so I only get verbal abuse.

Tom Lieber 2023-10-22 14:15:18

I was reflecting on why it became convention on my team to save notebooks in source control ~with cell output stripped~ , since, coming from Mathematica, it’s ludicrous to open a notebook and not see the half of the content that contains the results.

I realized that a big part of it was that legacy tech problem. Outputs can’t be used as inputs because the frontend is a totally different stack, so they’re not nearly as useful. And they’re bloated because there’s no concise way to specify the graphics commands to render a plot—they’re giant blobs of HTML and CSS and JavaScript, which probably ~will~ be different every time you rerun the notebook.

Srini K 2023-10-23 01:01:54

I use Jupyter notebooks a decent amount and they definitely aren’t perfect. But as a data person, I sadly don’t have much of a choice. IDE’s are even worse.

There are some good SaaS products that offer richer, live-collaborate notebooks which I’ve enjoyed using. They aren’t open source so that’s the tradeoff.

Gone are the days of open-source software being built, with good design, for specific craftspeople (like scientists or data scientists). Sadly, nowadays open source to me usually means "not designed tightly with users' workflows in mind".

Konrad Hinsen 2023-10-23 07:58:13

All that is why I advocate for convivial software in computational science. If we want tools that fit our needs, we need to be able to tweak them to our needs. Today's "Open Source with insufficient funding" approach to scientific software means that we have to live with what we can hack together based on ingredients made by others for other purposes. Jupyter is a good example.

Unfortunately, most of my colleagues don't even believe this to be possible.