
Greg Bylenok 2023-06-14 19:18:59

This might fall under "thinking-together", but I'm throwing this here as it specifically relates to AI. For context, I've been working to integrate an LLM into an existing application. If "prompt engineering" is part of the future of coding, here are some challenges to expect:

  • Non-determinism: I can repeat a prompt and get back drastically different results, in both content and format. (A sketch of a partial mitigation follows this list.)
  • Capabilities: I feel like I'm constantly probing to discover the capabilities and limits of the model. Every interaction is like being presented with a blank canvas: What should I ask? What can I ask? Is there something I could ask that I'm forgetting about? Can I reword my prompt slightly and (potentially) get better results? This leads to a lot of experimentation.
  • Expectations: We are tricked into believing the LLM comprehends what we are saying, when it's really just a giant prediction table. Then we are disappointed when it gives less-than-satisfactory replies.
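A sketch of that partial mitigation (assuming the OpenAI Python client current at the time; the model name and prompts are illustrative): pin temperature to 0 and state the expected format explicitly. That narrows the variance, though it's no guarantee of determinism.

```python
import openai  # openai<1.0, the client current as of mid-2023

# Sketch: reduce (not eliminate) run-to-run variance by pinning
# temperature to 0 and stating the expected format explicitly.
# The model name and prompts are illustrative.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # near-greedy decoding; still no determinism guarantee
    messages=[
        {"role": "system",
         "content": "Answer with exactly three bullet points, one per line."},
        {"role": "user",
         "content": "Summarize the following text for an end user: ..."},
    ],
)
print(response["choices"][0]["message"]["content"])
```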
Jarno Montonen 2023-06-15 06:18:10

I feel that one barrier to even starting to approach a coding task with LLM code generation is the uncertainty of how much effort it will be to end up with something satisfactory. If you're an experienced programmer you'll have a ballpark idea of the code you have to write and how much effort it will be beforehand. Also, you'll have a much better understanding of whether you're making progress or just heading towards a dead end. It's like choosing a familiar road you know gets you to your destination in an hour vs a road you have heard might take you there in 30 min, but just as likely you'll get lost and reach your destination in 2 hours.

Now, this would be different for someone who has very little experience with coding. If you know none of the roads, you'd probably choose the one you've heard is the shortest. Also, there will be people who have more experience with LLM code generation than manual coding. It'll be interesting to see how they see this. Will people who start programming with heavy usage of LLM code generation stick to that, or move towards manual coding as they gain experience?

Christian Gill 2023-06-15 06:34:48

Will people who start programming with heavy usage of LLM code generation stick to that, or move towards manual coding as they gain experience?

If LLM-assisted coding is like training wheels, I wonder if people would end up in a situation like "tutorial hell" (where beginners never break the loop of doing yet another tutorial) and never stop needing the training wheels.

With AI as a copilot, the question is whether you ever want to break that cycle or whether you want to have it always by your side. Pretty much like assisted flying in commercial airplanes. Which, by the way, doesn't do all the flying for the pilot, only the "boilerplate-ish" parts (for lack of a better word).

Christian Gill 2023-06-15 06:35:26

I might be mistaken, but pilots do learn to fly without assistance first

Christian Gill 2023-06-15 06:36:35

I think the danger is losing the ability to think (experiment-0003.vercel.app/t/Uxkxi6Nuy7)

Christian Gill 2023-06-15 06:37:01

Which might happen to anybody, not only beginners

Greg Bylenok 2023-06-15 13:35:09

Thanks for your thoughts. Just to clarify things a bit, my original query wasn't strictly about code generation but more about generating text to be displayed to a human reader. We are treating the LLM as a component of an overall program. That said, OpenAI's recent announcement of support for "functions" might blur the lines here further. Regardless, the challenges apply here too.
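For concreteness, here's roughly what that functions support looks like, as a sketch (assuming the June 2023 OpenAI Python client; the function name and schema below are invented): you describe a function with a JSON Schema, and the model replies with structured arguments instead of free-form text.

```python
import json
import openai

# Sketch of OpenAI's "functions" support (June 2023 chat API).
# The function name and schema here are hypothetical.
functions = [{
    "name": "record_summary",
    "description": "Store a summary to display to the reader.",
    "parameters": {
        "type": "object",
        "properties": {
            "headline": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["headline", "body"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Summarize: ..."}],
    functions=functions,
    function_call={"name": "record_summary"},  # force the structured path
)

# The arguments come back as a JSON string; it usually parses,
# but the model can still emit invalid JSON, so validate it.
args = json.loads(
    response["choices"][0]["message"]["function_call"]["arguments"]
)
```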

Christian Gill 2023-06-15 14:19:23

Ohh yes, I got that from your message but then went off track when replying 🤦‍♂️

Christian Gill 2023-06-15 14:20:00

Dijkstra's paper is probably relevant here

Christian Gill 2023-06-15 14:21:44

Maybe the lack of structured output is what's preventing LLMs from reaching mainstream usage outside of chatbots

Jason Morris 2023-06-16 19:26:41

I have been using ChatGPT to learn the ProseMirror library recently. I'm not a brand-new programmer, but I am brand-new to the library, and I have found it extremely helpful. I don't trust it, because it makes mistakes. But when it makes mistakes I can give it the error message and my hunch as to what is going wrong, and it tries again. That, I have found, is more efficient than me making the mistake and finding the solution myself, because ChatGPT's first guess is closer than mine would have been, and it is much better at getting it right on the second or third try. And it explains why it is changing things, whereas I try things without knowing why they might work, and if they do work, I still don't know why.

Jason Morris 2023-06-16 19:27:44

The experience is like getting tutelage from a slightly senile expert. They may get confused, but they are still better equipped to navigate the uncertainty than you are alone.

William Taysom 2023-06-19 09:48:11

And soon (sometimes already) the first few steps can be automated. The system takes its proposed answer, tries running it, and fixes errors before replying.
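Something like this, as a sketch (generate() and run() are hypothetical stand-ins for the LLM call and a sandboxed executor):

```python
# Sketch of that loop: propose code, try it, and feed any error
# back to the model before replying. Helpers are hypothetical.
def generate_and_repair(prompt, max_attempts=3):
    code = generate(prompt)
    for _ in range(max_attempts):
        try:
            run(code)             # try the proposed answer
            return code           # it ran cleanly; reply with this version
        except Exception as err:  # feed the error back and ask for a fix
            code = generate(
                f"{prompt}\n\nYour last attempt failed with:\n{err}\n"
                "Please fix the code."
            )
    return code                   # best effort after max_attempts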

Greg Bylenok 2023-06-14 19:19:13

Compare this to programming against a traditional API:

  • On the input side, an API constrains the vocabulary. With an LLM, everything is fair game.
  • On the output side, I can guess (or learn) the effect of a given API call. With an LLM, it's all probabilistic. (One coping strategy is sketched below.)
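One way to claw back part of that contract, as a sketch (assuming pydantic; the schema fields are illustrative): treat the reply like untrusted API output, validate it against a schema, and retry or fall back when it doesn't parse.

```python
import json
from pydantic import BaseModel, ValidationError

# Sketch: validate the LLM's probabilistic output against a schema
# before the rest of the program trusts it. Fields are illustrative.
class Reply(BaseModel):
    label: str
    confidence: float

def parse_reply(raw: str):
    """Return a validated Reply, or None so the caller can retry."""
    try:
        return Reply(**json.loads(raw))
    except (json.JSONDecodeError, ValidationError):
        return None
```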
Greg Bylenok 2023-06-14 19:19:49

Curious about others' experiences here, ways to reason about these models, techniques for overcoming these challenges, etc...