Hacker News
- Hallucinations in code are the least dangerous form of LLM mistakes https://simonwillison.net/2025/Mar/2/hallucinations-in-code/ 312 comments
- Hallucinations in code are the least dangerous form of LLM mistakes https://simonwillison.net/2025/Mar/2/hallucinations-in-code/ 5 comments
Linking pages
- AI legibility, physical archives, and the future of research https://resobscura.substack.com/p/ai-legibility-archives-future-of-research 0 comments
- Will the future of software development run on vibes? - Ars Technica https://arstechnica.com/ai/2025/03/is-vibe-coding-with-ai-gnarly-or-reckless-maybe-some-of-both/ 0 comments