Some speculation by @Matt Webb about the not-so-far-out future of AI agents and how we can (and need to) prepare for it.
In the other corner, some judgy comments about AI’s Looming Reputation Crisis (scroll down to the middle to find that bit).
I read both this morning, and while I know these are different use cases, together they beautifully cover the whole optimism/pessimism spectrum on AI.
Where do people here fall on that spectrum? Are there use cases that are obviously good/bad, or does that depend on… well… what? And are we soon going to outsource most of our lives to AI assistants while simultaneously drowning in mediocre generated bullsh*t trying to scam us?
Gosh, I miss the days of the early internet, when I was excited about everything tech. Somehow I can't find my way back to that mindset these days. Can someone convince me that the future is going to be universally great, like it seemed 20 years ago?
📝 Who will build new search engines for new personal AI agents?
Posted on Wednesday 20 Mar 2024. 2,867 words, 12 links. By Matt Webb.
I was blown away by the sudden appearance and then rapid development of AI, having followed the field at a medium distance all my life. But, like cryptocurrencies, I quickly filed it under "interesting, follow quite closely, not my core thing".
I've generally been skeptical of AI/ML but have come around a little. I've been very impressed with some specific applications, e.g. image and text generation, pattern recognition, etc.
In applications that need rigor, I'm very skeptical and actually think an ML-based approach is completely backwards. Specifically for things like coding, engineering, maths and so on, a foundation of statistical correlation based on pre-existing text is... completely wrong (and why this is not blindingly obvious to everyone remains a mystery to me). You want to start with a semantic (not statistical) model. In any case, I do find it easier to ask ChatGPT how to use an API for a specific purpose and then validate that it actually works. People say LLMs sometimes hallucinate, but in reality they only hallucinate: it just so happens that the hallucination sometimes matches reality.
Side notes:
I really don't like that the model makers don't reveal their training data. Release your training data, you cowards!
AI/ML in decision-making may let humans shrug off responsibility, which is also a major concern. I think we should stop talking about it as a magical entity that's different from a typical program; 'automation' is a great word for it, suggested by Emily Bender in this video: youtube.com/watch?v=eK0md9tQ1KY
People say LLMs sometimes hallucinate, but in reality they only hallucinate: it just so happens that the hallucination sometimes matches reality.
Well, the same can be said of the human mind: the abstraction from our senses to our conscious experience is a hallucination that is tightly constrained by incoming sense input. When we dream (or, well, when we hallucinate), those constraints are absent.
Fair point. I guess hallucination is the wrong word to use here, because it implies a kind of perception.