Currently, vibe coding and being a vibe coder are an ad hoc, seat-of-the-pants foray. Whether it matures into something more formalized and studious is an open question. The betting line is that it won't undergo any stringent formalization. The assumption is that, by and large, most vibe coding will be undertaken by a hands-off chunk of the world's population and will mainly be performed on an off-the-cuff basis. Maybe we will eventually end up with two classes of vibe coders, professional versus amateur, of which only a tiny proportion will be in the professional bucket.

Generative AI normally takes as input a series of prompts from a user and then tries to answer questions or generate stories and responses based on what the user asked for. The underlying mechanism is immense pattern-matching across vast arrays of human writing. An LLM is set up by explicitly data-training on human-written content that the AI patterns on. The result is an amazingly fluent-seeming AI that interacts in a human-like conversational manner. The same can be done for the writing of programming code.

If a vibe coder happens to also be a proficient software builder, they likely can indeed look at the generated code to fix it. That's a circumstance of the vibe coder doing both the code generation via prompts and the debugging of the generated code. The thing is that vibe coding is presumably supposed to be a widely adoptable approach to producing programs. A vital assumption is that end users with near-zero programming knowledge will use AI to produce programs. The widest possible use of vibe coding will be by people who aren't practically able to tackle the code that has been generated. The future suggests that the AI will be improved such that the code is right at the get-go, or that the AI becomes proficient at squaring away the code it has generated.