Paul Tarvydas 2025-02-10 21:32:37 Trolling for ideas on which way I should go. I have too many choices in front of me and way too many learning curves to go down.
I've got a drawware REPL that uses 3 windows: (1) draw.io, (2) a browser that displays the output of a run, (3) Python glue running in a terminal window that watches the timestamp on the draw.io file and spawns a compile/run cycle when the drawing changes. The Python glue creates 2 websockets: (1) a 1-way conduit to the browser, to send it JSON key/value objects (strings), and (2) a 1-way conduit from the spawned compiler, which sends key/value messages that get forwarded to the browser. The effect is like printfs, but in a browser instead of a console (and more useful than printf, because it doesn't need to be sequentially inserted into the circuit). It knows how to shell out to command-line commands. This is a VSH - a Visual SHell to replace /bin/bash. The whole mess works "fast enough" to act as a code-development REPL.
What's the best way to package the whole thing into a single deployable app? (Single from the users' perspective; maybe retain all processes and windows.) Should I dump the browser and go with some local GUI package (what?)? Should I be looking at redbean? Should I be looking at Glamorous Toolkit? CLOG? Keeping the browsers and sockets makes it scalable across distributed machines and might result in new ideas. I want to keep draw.io, since it saves me a lot of work (it's a PITA to use, but better than anything I could build myself). Keeping Python and JS lets me forgo actual coding (I just ask AI to build the thingies - AI has been trained on zillions of lines of code in JS and Python). I'm good with Common Lisp, Python and JS (but hand-written JS usually creates mysterious failures that are hard to debug; the LispWorks debugger is the "best", next is Python). I'm good at cranking out little nano-DSLs using OhmJS (t2t), so I can generate code instead of writing tricky code.
This is VSH, using websockets instead of UNIX pipes, and 2D node-and-arrow drawings instead of the 1D text of command-line shell syntax.
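For concreteness, here is a minimal sketch of the kind of glue process described above. It assumes the third-party Python `websockets` package; the file name, port, and `./compile_and_run` command are placeholders, and the compiler's key/value output is read from its stdout here rather than over a second websocket.

```python
# Minimal sketch of the "Python glue": watch the draw.io file's timestamp,
# re-run the compiler when it changes, and forward its key/value output to
# any connected browser over a websocket.
# Placeholders (not from the original post): file name, port, and the
# ./compile_and_run command. Requires the third-party "websockets" package.
import asyncio
import json
import os

import websockets

DRAWING = "vsh.drawio"                          # the draw.io file being watched
COMPILE_CMD = ["./compile_and_run", DRAWING]    # hypothetical compile/run script
browsers = set()                                # currently connected browser sockets

async def browser_handler(ws, path=None):
    """1-way conduit to the browser: register it and hold the socket open."""
    browsers.add(ws)
    try:
        async for _ in ws:      # we never expect messages; ends on disconnect
            pass
    finally:
        browsers.discard(ws)

async def forward(key, value):
    """Send one JSON key/value object to every connected browser."""
    msg = json.dumps({key: value})
    for ws in list(browsers):
        await ws.send(msg)

async def compile_and_stream():
    """Spawn the compile/run cycle and forward each line of its output."""
    proc = await asyncio.create_subprocess_exec(
        *COMPILE_CMD, stdout=asyncio.subprocess.PIPE)
    async for line in proc.stdout:
        await forward("output", line.decode().rstrip())
    await proc.wait()

async def watch_drawing():
    """Poll the drawing's mtime; kick off a run whenever it changes."""
    last = os.path.getmtime(DRAWING)
    while True:
        await asyncio.sleep(1)
        mtime = os.path.getmtime(DRAWING)
        if mtime != last:
            last = mtime
            await compile_and_stream()

async def main():
    async with websockets.serve(browser_handler, "localhost", 8765):
        await watch_drawing()

asyncio.run(main())
```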

Konrad Hinsen 2025-02-11 06:38:04 This is really a question about assembling software systems. The answer mostly depends on "for whom"? Which platforms, which level of user competence?
Here is what I would do, for the platform I use (Linux) and for users "like me", i.e. power users but not software professionals. I'd write a Guix package for the full assembly (where "draw.io" is just a URL, of course, because you can't run it locally). Guix is made for creating software assemblies, and it can integrate absolutely anything. You can make a package that depends on Python and on a browser, no problem. You can then deploy the assembly as a single command, which you can put into a shell script for convenience, or into a .desktop file for launching from GNOME or whatever else.
Konrad Hinsen 2025-02-11 14:48:55 Then you can package it for Guix as well and have a self-sufficient software assembly for local use. Sounds good!
Oleksandr Kryvonos 2025-02-11 09:17:08 What if we use a GPU / NPU / TPU to run Prolog several orders of magnitude faster?
Using the technique of encoding words into numbers, as LLMs do?
Konrad Hinsen 2025-02-11 09:32:46 If I understand correctly what you have in mind, the result would no longer be Prolog.
Prolog is a formal language, like every other programming language. In a formal language, symbols mean nothing. They are just convenient labels for human readers; inside the formal system, symbols are just equal or not equal. So if you want to replace symbols by numbers, you can just enumerate them in order of occurrence.
Word embeddings as used by LLMs are an interface between formal systems (the stuff running on the GPUs) and informal human language. Combining that with Prolog could lead to interesting results, but it's not faster Prolog.
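A tiny sketch of that point in Python: "encoding symbols as numbers" inside a formal system is just interning them in order of first occurrence, and only equality survives. The facts below are made up for illustration.

```python
# Inside a formal system a symbol only needs an identity, so "encoding
# symbols as numbers" is just interning them in order of first occurrence.
# Nothing like a word embedding is needed.
interned = {}

def intern(symbol: str) -> int:
    """Map each distinct symbol to the next integer, first come first served."""
    return interned.setdefault(symbol, len(interned))

# parent(tom, bob) and parent(bob, ann) become pure number tuples:
facts = [("parent", "tom", "bob"), ("parent", "bob", "ann")]
encoded = [tuple(intern(s) for s in fact) for fact in facts]
print(encoded)   # [(0, 1, 2), (0, 2, 3)] -- only equality/inequality remains
```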
Oleksandr Kryvonos 2025-02-11 09:36:18 yeah, I do not want the actual Prolog, but something similar to it to run 1000x faster
Oleksandr Kryvonos 2025-02-11 09:40:53 napkin math here:
LLMs have ~1 billion parameters, and they run at an acceptable inference rate on an Apple M1.
If the same compute power can be leveraged for logical inference, even redundantly -
for example, run 1 million branches of inference, with 30% of them potentially being the same, for 1 second -
and see where it gets you.
Paul Tarvydas 2025-02-11 10:56:04 FWIW: From a hardware/implementation perspective, the main feature of Prolog is that it performs exhaustive search, and gives the programmer a way to specify such searches in a declarative - less buggy - way. Loops within loops do this, too, but provide more opportunities for inserting bugs. Prolog does this by generalizing and using backtracking - a technique essentially frowned upon in the early days of computing (due to hardware limitations; now possible once again). miniKanren also does exhaustive search, but doesn't use backtracking, trading off memory usage instead.
The canonical "assembler" for Prolog is the WAM - the Warren Abstract Machine - which is used by GNU Prolog (iiuc, GNU Prolog implements Prolog in Prolog; it can be told to show the resulting WAM code, which was useful to me when I was trying to write a WAM in Lisp). A write-up of WAM principles can be found in Aït-Kaci's tutorial reconstruction. Various Lisp-based implementations are documented in PAIP and On Lisp and others. IMO, the most understandable implementation of Prolog is Nils Holm's Prolog Control in 6 Slides (the tx3.org website is 404'ing on me at this moment). Holm's version is written in Scheme. I found it so understandable that I even managed to hand-port it to Common Lisp and to mechanically port it to JavaScript (the main thrust of this was to explore OhmJS, not particularly Prolog, but it appears to work).
I think that the way to speed up a Prolog program is to remove all generalizations from a specific program, i.e. take a given (working) program and pre-compile it into a bunch of nested loops written in assembler (and, for extra oomph, remove all need for context-switching). The product of any programming language is assembler code for use on a CPU. Some compilers do this by emitting only assembler; some do it by emitting assembler that leans on an engine. I think that Prolog fits into the 'engine' category. Many popular languages fall into the 'engine' category where the engine happens to be a lump of code that implements context-switching (often called an "operating system"), which usually burns a lot of CPU cycles (something like 7,000-11,000 cycles per context switch, according to Claude 3.5). Or, to speed Prolog up, find ways to parallelize it (noting that LLMs operate on the principle of massive parallelization, but end up lying to you on occasion; i.e. LLMs, in their current state, can't be trusted and they ain't Engineering). Sequential programming techniques and languages essentially oppose the existence of massive parallelization, requiring one to think hard to achieve it.
📝 Paradigms of Artificial Intelligence Programming
Paradigms of AI Programming is the first text to teach advanced Common Lisp techniques in the context of building major AI systems. By reconstructing authentic, complex AI programs using state-of-the-art Common Lisp, the book teaches students and professionals how to build and debug robust practical programs, while demonstrating superior programming style and important AI concepts. The author strongly emphasizes the practical performance issues involved in writing real working programs of significant size. Chapters on troubleshooting and efficiency are included, along with a discussion of the fundamentals of object-oriented programming and a description of the main CLOS functions. This volume is an excellent text for a course on AI programming, a useful supplement for general AI courses and an indispensable reference for the professional programmer.
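For flavour, here is a minimal Prolog-style resolver in Python in the spirit of the tiny implementations mentioned above. It is an illustrative sketch only (not a port of Holm's Scheme version or of the WAM): backtracking falls out of Python generators, and the usual occurs-check is omitted.

```python
# A minimal Prolog-style resolver. Terms: variables are strings starting with
# an uppercase letter; compound terms are tuples like ("parent", "tom", "bob").
# Each "yield" is one solution; exhausting a generator undoes a choice,
# which is how exhaustive search with backtracking shows up here.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until a non-variable or an unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return an extended substitution if a and b unify, else None."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def rename(term, suffix):
    """Freshen a clause's variables so separate uses don't collide."""
    if is_var(term):
        return term + suffix
    if isinstance(term, tuple):
        return tuple(rename(t, suffix) for t in term)
    return term

def solve(goals, rules, subst=None, depth=0):
    """Exhaustively search the rule base, yielding one substitution per proof."""
    subst = {} if subst is None else subst
    if not goals:
        yield subst
        return
    first, rest = goals[0], goals[1:]
    for i, (head, body) in enumerate(rules):
        suffix = f"_{depth}_{i}"
        head, body = rename(head, suffix), [rename(b, suffix) for b in body]
        s = unify(first, head, subst)
        if s is not None:
            yield from solve(body + rest, rules, s, depth + 1)

# parent facts plus grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
rules = [
    (("parent", "tom", "bob"), []),
    (("parent", "bob", "ann"), []),
    (("grandparent", "X", "Z"), [("parent", "X", "Y"), ("parent", "Y", "Z")]),
]
for s in solve([("grandparent", "Who", "ann")], rules):
    print(walk("Who", s))   # -> tom
```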
Oleksandr Kryvonos 2025-02-11 12:02:22 Yes, I will try to achieve massive parallelization on GPU/NPU.
It does not necessarily mean that I will achieve this, but I will try.
Thanks for all the links.
Konrad Hinsen 2025-02-11 14:47:29
I do not want the actual Prolog, but something similar to it
From my perspective, formal and informal reasoning systems are fundamentally different. And that means that what you are envisaging is not similar to Prolog, even if its user interface looks similar.
Oleksandr Kryvonos 2025-02-11 14:55:53
From my perspective, formal and informal reasoning systems are fundamentally different. And that means that what you are envisaging is not similar to Prolog, even if its user interface looks similar.
thank you for the clarification; as I said, this is more of a napkin idea, we shall see if I can bring it to reality
Jason Morris 2025-02-11 19:36:16 If there was some way to use GPUs to do resolution faster, that would be great. I think people in the Prolog development community would be all over it, though, and I don't see any signs of that. I would check in with people like Jan Wielemaker on the SWI-Prolog forums and ask for his thoughts on where there is unexplored potential.
Jason Morris 2025-02-11 19:42:20 I've been working on a logical knowledge base search that uses unification but is based on subgraph isomorphism. It's designed to facilitate kinds of searches different from resolution, and it is probably much slower except in those use cases. But it's an interesting space to play in, not least because there is no one else trying things. 😅
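To illustrate the general idea (this is not Jason's system, just a toy sketch), here is a query over a small knowledge graph done by subgraph isomorphism, using networkx's built-in VF2 matcher with variable nodes acting as wildcards.

```python
# Toy knowledge-graph query by subgraph isomorphism: pattern nodes marked as
# variables match any node, constant nodes must match by name, and edges must
# agree on their relation label. Requires the "networkx" package.
import networkx as nx
from networkx.algorithms import isomorphism

# Knowledge base: parent(tom, bob), parent(bob, ann)
kb = nx.DiGraph()
for pair in [("tom", "bob"), ("bob", "ann")]:
    kb.add_edge(*pair, rel="parent")
for n in kb.nodes:
    kb.nodes[n]["name"] = n           # constants carry their own name

# Query: ?X --parent--> ann   ("who is a parent of ann?")
query = nx.DiGraph()
query.add_node("?X", name=None)       # variable node: matches anything
query.add_node("ann", name="ann")     # constant node: must match by name
query.add_edge("?X", "ann", rel="parent")

def node_match(kb_attrs, q_attrs):
    return q_attrs["name"] is None or q_attrs["name"] == kb_attrs["name"]

def edge_match(kb_attrs, q_attrs):
    return kb_attrs["rel"] == q_attrs["rel"]

matcher = isomorphism.DiGraphMatcher(kb, query,
                                     node_match=node_match,
                                     edge_match=edge_match)
for mapping in matcher.subgraph_isomorphisms_iter():
    bindings = {q: k for k, q in mapping.items()}   # invert the kb->query map
    print(bindings["?X"])                            # -> bob
```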
Jack Rusher 2025-02-12 20:32:18 There’s been a fair bit of success in implementing high performance graph algorithms on GPUs. I don’t see why you couldn’t try to do some PROLOG-ish things that way…
Oleksandr Kryvonos 2025-02-12 21:42:53 📝 Higher Order Company
We're the HOC, a tech startup revolutionizing computing with massively parallel runtimes and processors using our HVM runtime and Bend programming language.