
Paul Tarvydas 2026-02-05 23:25:30

AI can now port code between programming languages with surprising accuracy. This capability raises an intriguing question: should we create new languages specifically designed for directing AI?

do we need a new programming language for the AI era?

Scott 2026-02-06 01:11:09

This is interesting! I remember seeing a bunch of people talking about this way back when GPT4 came out and then haven't really seen the idea pop up again since then

Scott 2026-02-06 01:12:20

📝 Jargon: an LLM-based pseudolanguage for prompt engineering

🚨 Warning and disclaimer: You are about to enter the realm of LLM pseudolanguages. Pseudolanguages are weird, experimental, and crazy. They don’t work very well yet, even on state-of-the-art LLMs. Use pseudolanguages at your own risk. Do not use them for anything with high stakes or in production. tl;dr: Jargon is a natural language, informally specified, intelligently interpreted, referentially omnipotent, and flow control oriented LLM-based pseudolanguage for prompt engineering, currently runni...

📝 SudoLang: A Powerful Pseudocode Programming Language for LLMs

Pseudocode is a fantastic way to sketch programs using informal, natural language, without worrying about specific syntax. It’s like…

Tom Larkworthy 2026-02-06 07:21:51

Part of the power of LLMs is you can dump a ton of training corpus into it and get good results out of it, and that can be done economically because we already have a digitised corpus on the internet.

While a "from scratch" language might theoretically achieve a higher ceiling performance, unless you can explain where the decades of training corpus, and the discussions around it, come from, it will not outperform what already exists.

I think maybe (waves hands) you can generate that training corpus synthetically, but any proposal for a new way to do LLMs has to be clear about exactly how that is done, because any gaps in that training will lead to sub-optimal results.

William Taysom 2026-02-06 08:03:17

An interesting question is how to play to their strengths.

Paul Tarvydas 2026-02-06 13:21:11

I’m suggesting a different take on what LLMs might be used for. I don’t trust LLMs for Engineering code except when I give them small tasks to perform or go into a long loop iterating on minutiae.

LLMs understand English, but produce imprecise results, unless Engineers expand the prompts to contain lots of precision (“verbose”).

Traditional PLs (Python, Haskell, JavaScript, et al.) are dialects of English, culled down to the point where they allow concise expression with precise semantics, producing repeatable results.

This raises the question of “what dialect(s) of English allow concise expression of prompts which produce /repeatable/ results with well-defined semantics?”, and the secondary question of “would such dialect(s) improve the Engineering workflow?”.
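One way to picture such a dialect is as a concise, structured form that gets mechanically expanded into the verbose, precise prompt an LLM actually needs. The sketch below is purely illustrative; `compile_prompt`, the spec keys, and the constraint wording are all invented for this example, not an existing tool:

```python
# Hypothetical sketch: a tiny "prompt dialect" compiler.
# The engineer writes a concise structured spec; the compiler expands it
# into the verbose, precise prompt text, so the precision lives in the
# expansion rules rather than in each hand-written prompt.

def compile_prompt(spec: dict) -> str:
    """Expand a concise spec into a verbose, constrained prompt string."""
    lines = [
        f"Task: {spec['task']}.",
        f"Output must be valid {spec['output']} and nothing else.",
    ]
    for rule in spec.get("rules", []):
        lines.append(f"Constraint: {rule}.")
    lines.append("If any constraint cannot be met, output the single word ERROR.")
    return "\n".join(lines)

spec = {
    "task": "sort a list of integers ascending",
    "output": "JSON array",
    "rules": ["no prose", "no code fences"],
}
print(compile_prompt(spec))
```

The concise spec is the "dialect"; repeatability would then hinge on how deterministic the expansion and the model's interpretation of it are.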

(This isn’t an “either / or” issue. Maybe we can use LLMs to do more than one kind of thing. Consider this to be only one possible avenue out of several choices, all of which need to be explored and refined. I’m currently using OhmJS+rwr to concisely express new-fangled “compilers” that use existing PLs, like Python, JS, Odin, etc., as assembly languages. Maybe I should be using LLMs for this instead of OhmJS+rwr?).

Jason Morris 2026-02-06 17:21:10

I saw some research last year on getting LLMs to do knowledge formalization in controlled natural languages, as compared to typical programming languages. The results were disappointing. I suspect for the same reason that CNLs are hard to write and easy to read for people. Which might go to the question of strengths. For all I know, the LLMs are as good at writing assembly as they are at writing Python. The DX features of a language should be designed around who the D is, and that's changing.

Konrad Hinsen 2026-02-08 09:36:20

The D matters indeed, as does the overall objective. What do you want to improve compared to today's DX? Code quality? Transparency and understandability for humans? Productivity (then please be precise about the measure)? Something else?

Konrad Hinsen 2026-02-08 09:40:26

From a research rather than development perspective, the topic raises the question of whether there are useful intermediates between informal human languages and formal languages (programming, specification, data formats, etc.). CNLs live in this space, but occupy a small niche.

Andreas S. 2026-02-08 21:26:31

I always wondered if LLMs could now do well on emulators.

William Taysom 2026-02-09 07:52:36

Jason Morris Instead of writing controlled language for LLMs, I let them do it. I'm not interested in prompt engineering, so I'll give them a task with tests (usually they write the tests), and then it's up to them to optimize. Example: I'm working on a Claude skill that calls a bunch of different models in some fairly complicated ways; anyhow, this is a diff from earlier today. Their change, not mine:

  101 -3. Present all responses: Claude's answer first, then each external model's response
  101 +3. Present all responses verbatim: Claude's answer first, then each external model's full unedited response. Do NOT summarize or paraphrase — show exactly what each model wrote.

I'm still not good at this though. Inasmuch as good imperative programming, functional programming, and logic programming feel very different from one another, I think good LLM-based programming is a new beast.
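The "task with tests, then let them optimize" workflow described above can be sketched as a simple loop: run the candidate against the tests, hand failures back to the model, stop when everything passes. In this sketch `propose_fix` stands in for the actual model call and is stubbed with canned attempts; all names are invented for illustration:

```python
# Rough sketch of a test-driven LLM loop. The model call is stubbed:
# a real version would send the failing cases back to the model and
# receive new candidate code in response.

def run_tests(candidate, tests):
    """Return the failing (args, expected) cases for a candidate function."""
    return [(args, want) for args, want in tests if candidate(*args) != want]

def refine(propose_fix, tests, max_rounds=5):
    """Ask for a candidate, test it, and iterate until the tests pass."""
    failures = tests
    for _ in range(max_rounds):
        candidate = propose_fix(failures)   # stand-in for the LLM call
        failures = run_tests(candidate, tests)
        if not failures:
            return candidate
    raise RuntimeError("tests still failing after max_rounds")

# Stubbed "model": first proposes a wrong answer, then corrects itself.
attempts = [lambda xs: xs, lambda xs: sorted(xs)]
def propose_fix(failures):
    return attempts.pop(0)

tests = [(([3, 1, 2],), [1, 2, 3])]
fixed = refine(propose_fix, tests)
assert fixed([2, 1]) == [1, 2]
```

The engineer's leverage here is in choosing the tests, not in wording the prompt, which matches the "they write the tests, then optimize" division of labor described above.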

William Taysom 2026-02-09 08:05:55

Like at the moment, instead of directly designing this skill, I give really open-ended preferences and then help them brainstorm until they settle on a metaphor that "feels" good. In this instance... commedia dell'arte. Then, if this works, which it sometimes doesn't, instead of having to rigorously enforce a protocol, they'll tend to improvise within the form.