
Nilesh Trivedi 2025-07-28 03:51:55
Duncan Cragg 2025-07-28 18:35:10

As I said to Geoffrey on X about this article: even before LLMs, agents, copilots, and buddies, I've personally always been an "Extend or Augment Reality"-philosophy person. Given the challenge that LLMs bring to our community, this will hopefully be an interesting thread, as this juxtaposition is quite polarising, I think?

Duncan Cragg 2025-07-28 18:41:06

In the sense that one can see imperative programming languages as very much philosophically-aligned to agents: you "tell" the computer what to do and it slavishly follows instructions, as your programmable agent on-the-metal. The dual or inverse, declarative programming, is much more holistic and humane IMO, as you're dealing with live domain "stuff" directly - it augments your reality. AI agents are an extreme of imperative in the same way HUDs or AR are an extreme of declarative. Hence more extreme polarisation.

Nilesh Trivedi 2025-07-29 02:52:05

This is a fantastic analogy!

Tom Larkworthy 2025-08-03 13:19:37

Just came across dspy.ai while researching GEPA. Seems to be a very flexible and programmable "LLMs as code" runtime; sort of a functional abstraction over LLMs. It's got some very good credentials behind it, and it allows things like optimising the prompt.

📝 DSPy

The framework for programming—rather than prompting—language models.

📝 GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning

Large language models (LLMs) are increasingly adapted to downstream tasks via reinforcement learning (RL) methods like Group Relative Policy Optimization (GRPO), which often require thousands of rollouts to learn new tasks. We argue that the interpretable nature of language can often provide a much richer learning medium for LLMs, compared with policy gradients derived from sparse, scalar rewards. To test this, we introduce GEPA (Genetic-Pareto), a prompt optimizer that thoroughly incorporates natural language reflection to learn high-level rules from trial and error. Given any AI system containing one or more LLM prompts, GEPA samples system-level trajectories (e.g., reasoning, tool calls, and tool outputs) and reflects on them in natural language to diagnose problems, propose and test prompt updates, and combine complementary lessons from the Pareto frontier of its own attempts. As a result of GEPA's design, it can often turn even just a few rollouts into a large quality gain. Across four tasks, GEPA outperforms GRPO by 10% on average and by up to 20%, while using up to 35x fewer rollouts. GEPA also outperforms the leading prompt optimizer, MIPROv2, by over 10% across two LLMs, and demonstrates promising results as an inference-time search strategy for code optimization.
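The loop the abstract describes (sample trajectories, reflect in natural language, propose a prompt update, keep the Pareto frontier of attempts) can be sketched in miniature. This toy Python is my own illustration of that loop structure, not GEPA's actual implementation; the LLM reflection and evaluation steps are stubbed out as plain functions:

```python
import random

def pareto_front(candidates):
    """Return the candidates not dominated across all task scores.

    candidates: list of (prompt, scores), where scores is a tuple of
    per-task metrics (higher is better).
    """
    front = []
    for prompt, scores in candidates:
        dominated = any(
            all(o >= s for o, s in zip(other, scores)) and other != scores
            for _, other in candidates
        )
        if not dominated:
            front.append((prompt, scores))
    return front

def gepa_step(candidates, reflect, evaluate):
    """One reflective mutation step: pick a parent from the Pareto
    frontier, ask the reflection function (an LLM reflecting on
    trajectories, in the real system) for an improved prompt, score
    it, and add it to the candidate pool."""
    parent_prompt, _ = random.choice(pareto_front(candidates))
    child = reflect(parent_prompt)   # natural-language reflection, stubbed
    candidates.append((child, evaluate(child)))
    return candidates
```

Keeping the whole Pareto frontier, rather than a single best candidate, is what lets complementary lessons from different attempts be combined rather than overwritten.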