I'm calling this "Vibe Discovery" — distinct from vibe coding because I didn't know the requirements upfront. Started with "make something with the accelerometer" and discovered through 6 iterations that I wanted a WebGL marble game.
The interesting part was the dev setup: Claude Code running in Termux on a Redmi Note 9 (4GB RAM). The same-device feedback loop — code, test accelerometer, react, iterate — made rapid discovery possible in a way that laptop-to-phone deployment wouldn't.
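For the accelerometer part, a minimal sketch of the tilt-to-force mapping a marble game like this might use (hypothetical — the actual game code isn't shown in the post; `beta`/`gamma` are the browser's DeviceOrientationEvent tilt angles):

```javascript
// Map device tilt (DeviceOrientationEvent beta/gamma, in degrees)
// to a marble acceleration vector. Purely illustrative.
function tiltToAccel(betaDeg, gammaDeg, gain = 0.02) {
  // Clamp so a face-down phone doesn't launch the marble off-screen.
  const clamp = (v) => Math.max(-45, Math.min(45, v));
  return { ax: clamp(gammaDeg) * gain, ay: clamp(betaDeg) * gain };
}

// In the browser this would be wired up roughly as:
// window.addEventListener('deviceorientation', (e) => {
//   const { ax, ay } = tiltToAccel(e.beta, e.gamma);
//   marble.vx += ax; marble.vy += ay;
// });
```

Testing exactly this on the same device you're coding on is what makes the tight loop work.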
Why can't we just call it "play"? That is what we used to call doing things without a purpose.

I wish people would disclose when they used an LLM to write for them. This comes across as so clearly written by ChatGPT (I don't know if it is) that it seriously devalues any potential insights contained within. At least if the author was honest, I'd be able to judge their writing accordingly.
There was a very specific purpose here - to build a web-based accelerometer game. If I were to compare this with playing, I would say this is more akin to playing with a special kind of clay that shape-shifts itself based on your instructions.
As for the LLM-generated writing - I've updated the blog post with a 'meta' section explaining how LLMs generated the post itself. I've shared the link to the specific section as a response to other comments with the same criticism - I don't want to link to the blog again here and risk looking like a spam bot.
You posted the prompt to the game, care to post the prompt to the blog post? I don't care what an LLM thinks about how you built your game. I would like to know what you think, but I'm not going to try to salvage it from an LLM-generated blog post.
The blog post was written by Claude Code, reviewed by Gemini Pro, ChatGPT 5.2 Thinking, Kimi K2 Thinking, Deepseek Deep Thinking and me. Naturally, all the LLMs failed to judge that AI-generated writing is a turn-off for most readers. I failed to judge that too.
Quoting: “ What I’m describing is different. I’ll call it Vibe Discovery: you don’t know what you’re building. The requirements themselves are undefined. You’re not just discovering implementation - you’re discovering what the product should be.
The distinction matters:”
What is it with this pattern of phrases that screams LLM to me? Whenever I come upon this pattern I stop reading further.
Not only does it scream LLM output, I happen to find it almost always grating. It's fine enough when something is labeled as AI output, but when it's nominally a human-authored document it's maddening.
Claude tics appear to include the following:
- It's not just X, it's Y
- *The problem* / *The Solution*
- Think of it as a Z that Ws.
- Not X, not Y. Just Z.
- Bold the first sentence of each element of a list; if it's writing markdown, it does this constantly
- Unicode arrows → Claude
- Every subsection has a summary. Every document also has a summary. It's "what I'm going to tell you; What I'm telling you; What I just told you", in fractal form, adhered to very rigidly. Maybe it overindexed on Five Paragraph Essays
Oh no!! Yet another thing I've been doing for the past decade which will now make me look like a robot. I thought my penchant for em-dashes was bad enough.
I have a keyboard shortcut to make the arrows. I think they look nice.
oh I think they look nice too but unfortunately they are a Claude thing now :( though I think if you use them judiciously it won't make the whole document look generated — it's really when they're deployed the way Claude does it that it's noticeable
Right, which is why it's so strange to suddenly see every other readme and blog post that gets shared on this site speaking with the same tone of voice. Dead Internet theory finally came here.
I misjudged the amount of dislike HN users have for AI generated writing. I have added a "meta" section explaining how the post itself was written by AI, directed by my own taste. Here's the meta - https://www.kikkupico.com/posts/vibe-discovery/#the-meta
To be frank, I don't think AI-generated writing is inherently bad. Since there appears to be a strong bias against it, I will stick to writing blog posts by hand.
Let me do a quick "vibe comment" here: is it really you who built the game, when it was actually built by an algorithm after you nudged it in a direction you didn't even know at the start? Please fix the title to "I gave instructions to an LLM and it generated code for a game".
Today on the front page there was an obviously vibe coded python script that pulls OSM data and slaps a colour scheme on it. Of course the data was skewed, because apparently LLMs don't do projections...
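Plotting raw OSM lon/lat as x/y really does skew shapes away from the equator; projecting to Web Mercator first avoids that. A minimal sketch of the fix (hypothetical — not the script in question):

```javascript
// Project WGS84 lon/lat (degrees) to Web Mercator meters (EPSG:3857).
// Plotting raw degrees stretches features east-west as you move poleward;
// the log-tan term below is what compensates in y.
const R = 6378137; // Earth radius used by the Web Mercator definition

function toWebMercator(lonDeg, latDeg) {
  const lon = (lonDeg * Math.PI) / 180;
  const lat = (latDeg * Math.PI) / 180;
  return {
    x: R * lon,
    y: R * Math.log(Math.tan(Math.PI / 4 + lat / 2)),
  };
}
```

The point stands either way: an LLM will happily draw unprojected coordinates and never flag the distortion.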
I gave up at the first non-ironic 'You are absolutely correct' comment... What is even real...
To be fair, vibe discovery is a lot more viable than vibe coding. Vibe coding implies the LLM output is acceptable as-is. Vibe discovery implies a human in the loop, because LLMs can't "discover". They have no innate preference based on lived experience in the sense that a human or any biological organism does.
Exactly! LLMs' (or any Gen-AI's) lack of lived experience and emotions is their Achilles' heel. The best human creators understand how to inspire emotions mainly because they can feel them themselves. Most other humans, despite innately understanding emotions, can't really create things that inspire emotions in others. So Gen-AI as we know it today can't really reach a point where it deeply, personally understands and inspires emotions. Vibe discovery bridges this gap, I think.