Vibe coding: the fastest way to build something and the slowest way to maintain it
Vibe coding makes it easier than ever to turn ideas into working products using AI-powered tools. It’s fast, accessible, and incredibly effective for prototyping or testing concepts – no deep technical expertise required. But as projects grow, the cracks begin to show: debugging, scaling, and maintaining AI-generated code can quickly become complex and costly. So where does the real value of AI end – and where do the risks begin?
Introduction: what is this vibe coding thing?
One of the most common approaches in which AI tools help create applications and write code was given the name vibe coding by the AI researcher Andrej Karpathy, who coined the term. Using this method, anyone can create code through prompting: the AI recognizes the intention behind the given instructions and turns it into functional code.
Sounds simple? On the surface, that’s exactly what vibe coding should be. Instead of writing code the traditional way, you describe the project in plain language that is immediately understood by an AI-powered tool such as Claude, Cursor, or a comparable assistant. The model scans the context, organizes information into familiar patterns, and produces an output that has everything it needs to be considered a completed, ready-to-launch product.
No wonder it has become the go-to option for everyone wanting to test their business ideas quickly, without putting too much effort into verifying whether the logic behind the app can withstand real use-case scenarios, traffic peaks, and the security vulnerabilities that tend to reach the spotlight at the least expected times.
What made people work this way
Coding has always been difficult. To code well, a programmer needs to spend years mastering ever-changing languages, frameworks, and everything that can go wrong when creating something as complex and unpredictable as code. And suddenly, the vast time spent acquiring that experience looks optional – just one of several ways to go from the inception of an idea, through execution, to launch.
Take a real example. One of our devs’ recent tasks was to sort around two million media files on a server in the most efficient way possible. Done manually, the task would probably have taken weeks. With AI assistance, the developer resolved it the same day.
In another case, another developer faced the challenge of adding 50,000 redirects to a website – a job that, by his estimate, could take up to 36 months of manual work at a steady rate of a couple of minutes per link. Using an AI tool for just this task, the dev completed it in under 30 hours, as opposed to three years – a timeline no business owner in the world would have accepted for their platform.
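The redirect job follows the same pattern: generate configuration instead of typing it. A minimal sketch (the article doesn’t name the server or tooling; the nginx map format and the function name are assumptions for illustration):

```python
# Hedged illustration: turn "old path -> new path" pairs -- e.g. rows read
# from a CSV export -- into an nginx map block, so 50,000 redirects become
# one generated config file instead of tens of thousands of manual edits.
def build_redirect_map(pairs):
    lines = ["map $request_uri $redirect_target {", '    default "";']
    for old, new in pairs:
        lines.append(f"    {old} {new};")
    lines.append("}")
    return "\n".join(lines)
```

Feeding it `csv.reader(...)` over an exported spreadsheet and writing the result to a file included from the server config turns a multi-month chore into a script run plus a review pass.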
Of course, speed is not the only factor, though it is unquestionably tempting. The concept of vibe coding rests on the premise that intention is enough. It is the AI’s job to validate that intention and break the technological barrier that prevents business owners and product managers from telling whether a half-baked idea is worth following through on. Given how much can be taken off one’s shoulders – code refactoring, repository analysis, and testing, to name a few – it is no surprise that the approach swept the industry almost overnight.
Moment of truth: understanding the generated code
This all looks like a dream come true, but at this point a question should be raised: did it really turn out so perfect? The fun – and often unpleasant – part starts when the prototype or a piece of the application confronts reality. The first output can be fully functional, but it is unlikely that the code is entirely free of bugs.
I recently received a project request in which the client tasked us with fixing their code. It had clearly been created by a person with limited coding skills and heavy use of an LLM. On the very first call, our developer identified the first of what turned out to be a whole string of issues – something a non-technical person is prone to experience when they start experimenting with prompts to put out a sudden fire.
The real problem with such a vaguely defined scope of work is that no one can tell what the problem actually is: neither the client, nor the author of the prompts, and – as the situation slowly unravels – not even the developer. Unfortunately, an edge case like this forces the project owner to dig deep into their pocket and pay to undo the damage before any stakeholders – investors, users, or partners – see it and find out that this is not what they signed up for, to put it mildly.
The good news is that an experienced team of developers can trace AI-generated code the way a tailor finds a loose stitch. Knowing the desired logic behind the application, and backed by a technical background, they understand what the code should do. And while the front end gives no answer as to why the product lags at a specific user behavior on the checkout page, the backend tells a completely different story to those fluent in analyzing it – and to those who know how to manually fix what the machine has broken.
Debugging: does AI fix bugs?
Fine, but if AI proves so helpful when writing code from A to Z, why can’t we fully trust it to find and fix the bugs? Because AI often decides to work around issues rather than fix them. If the model decides it is better to rewrite a given feature than to diagnose a specific error, the risk of generating new problems instead of solving the old ones grows rapidly. In fact, 63% of developers report spending more time debugging AI-generated code than it would take to write the code themselves.
A scenario of a similar sort happened to one of our Senior Developers. While he was porting a legacy PHP 5.2 codebase to PHP 8, the AI agent assisting him spent two hours trying to identify a remaining error, to no avail. The AI had taken an entirely wrong direction, looking for structural issues in loops and conditionals, while the actual problem lay in PHP opening tags, which caused a block of code to be read as plain text. What the AI missed, a human eye caught within 15 minutes.
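A check for that particular failure mode is easy to automate once a human has diagnosed it. Here is a hedged sketch (not the team’s actual tooling; the helper names are hypothetical) that flags PHP source whose non-whitespace content falls outside `<?php … ?>` tags and would therefore be emitted as plain text:

```python
import re
from pathlib import Path

# Matches one PHP code block: an opening tag through the next closing
# tag, or to end-of-file if the closing tag is omitted.
PHP_BLOCK = re.compile(r"<\?(?:php)?.*?(?:\?>|\Z)", re.DOTALL)

def leaked_text(source: str) -> str:
    """Return any non-whitespace text that sits outside PHP tags."""
    return PHP_BLOCK.sub("", source).strip()

def scan(root: str) -> list[str]:
    """List .php files under `root` whose code would print as plain text."""
    return [str(p) for p in Path(root).rglob("*.php") if leaked_text(p.read_text())]
```

A file like `<?php echo 1; ?>\nfunction f() {}` would be flagged, because the function sits after the closing tag – exactly the kind of block PHP silently serves as text.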
Knowing what we’re looking at is essential. When a developer enters the debugging phase, they go through system logs, set breakpoints, and monitor the application’s behaviour as it runs live. AI shouldn’t take ownership of this phase – unless we are willing to trust a process we can’t fully control.
Security, performance, and scaling: where vibe coding gets exposed
The same goes for security, performance, and scaling. It is during the transition from prototype to production that a vibe-coded application faces its most serious test.
By default, the generated code may not follow security best practices. Every software agency, Ambiscale included, has its own code of conduct when it comes to product security, covering sensitive data, authentication methods, and the countless layers of protection that separate the application’s architecture from threat actors. A recent analysis puts it in perspective: among 1,645 applications built with the help of the popular vibe coding platform Lovable, 170 of them – more than 10% – had vulnerabilities that left users’ information open to anyone.
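To make that class of leak concrete: many such vulnerabilities boil down to an endpoint trusting the record ID in the request instead of checking who is asking. A hedged, simplified illustration (not taken from the cited analysis; the function names and the dict-backed “database” are invented for the sketch):

```python
# Hedged illustration of the bug class that leaves "users' information
# open to anyone": the unsafe variant returns whatever record the caller
# names, while the safe variant verifies ownership first.

def get_profile_unsafe(db, user_id):
    # Generated-code pattern: anyone who guesses an ID reads that record.
    return db[user_id]

def get_profile_safe(db, user_id, requester_id):
    # Reviewed pattern: the record is returned only to its owner.
    if user_id != requester_id:
        raise PermissionError("not your record")
    return db[user_id]
```

The fix is one `if` statement, but someone has to know the statement belongs there – which is precisely the review step vibe coding tends to skip.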
Performance is equally concerning, because different circumstances impact the application, well, differently. Take the traffic, or the overall architecture that needs to be planned ahead – otherwise, the things no one thought about are usually the first to break. And at scale, the app’s failure becomes more a question of when than a mere possibility.
Scalability itself is, paradoxically, decided by the system at its foundation. Hence, at Ambiscale, we strive to learn the direction of the app’s evolution at the very earliest stage of design. If this isn’t taken care of, future extensions may require redeveloping from scratch.
The big problem: software maintenance
While the cost of development through vibe coding is relatively easy to assess, it is the maintenance that escapes the frame of prompting.
Technology constantly evolves. What works today can become a problem once the company enters a new market or signs a contract for a new type of software. When this happens, the tech team needs to update the website, platform, or any type of application that stands behind the technical side of the business. It is fortunate for the company if there is someone who understands each system deeply enough to make changes without breaking everything around it.
Otherwise, the system – which in reality came to life as a result of vibes – can be extremely resistant to change. Not to mention all sorts of company acquisitions that often require a swift transition of technology across different subsidiaries in line with a unified technical blueprint.
One type of project that illustrates the problem is migration. Moving an application from one environment, language, or framework to another is a multi-faceted task that requires knowledge of how things were built in the first place. Datasets like to differ: they are stored in different places and formats. When the sole record of how the application works rests in an LLM’s memory, the migration project becomes impossible to land successfully. You can apply AI as a tool, but without human supervision you end up with a product that never really leaves where it started.
The organizational risk no one talks about
Looking at vibe coding from the perspective of a CTO or project leader, it’s critical to keep high standards and maintain full transparency as to when and how AI is used for coding within a team.
Imagine a tool that the team is delivering for one of the company’s partners. When it is built by a single person using AI, there is room to miss key components: documentation, a proper code review, and a clear record of every decision made during development, logged in a dedicated ticket system.
Now, if the tool’s stability rests on a collection of prompts, technical debt starts to accumulate – especially when the developer assigned to the project leaves the company, and the entire app’s logic needs to be reverse-engineered every time someone touches the code and expects changes to land in line with roadmaps and charters. Sure, you can take a step back and run the code through one more prompt. But before you do, make sure a corresponding time buffer was built into the project during or before the discovery phase.
And here comes the question of how much awareness you build around the tech stack used in the process. Using AI without being able to defend that choice in the eyes of the client puts the entire organization at risk, because what is supposed to be a straightforward task quickly escalates into a matter of trust and transparency.
How to use vibe coding without paying for it later
It isn’t my intention to put vibe coding in a bad light. One thing I need to emphasize, though, particularly after discussing the method with developers, is that AI is a tool. As with every tool, you pick it up to help yourself with the work. It can suggest, analyze, and assist, but it should never do the entire job for you. In the end, it is you who takes responsibility and confirms whether the AI output passes the human test. The more you know about a given topic, the more accurate and responsive the tool will be.
Great for prototypes, small features, and idea exploration, vibe coding works best for giving a project proper momentum, finding out if the output makes business sense, and telling whether it is worth the investment, both financially and in time. Used well, it is – and I am fine using a cliché – a game changer for developers and entire teams responsible for delivery.
But when it comes to complex projects – ones that rely on intricate architecture, development phases, a fixed security framework, and expected performance metrics – the prompted output may not be enough. You will need people who understand technology, know how to scale, and can predict the way forward, with the ability to trace every step and account for every decision made since the application’s outset. That’s where the real development starts, and that’s what we at Ambiscale do: help teams go from an experiment to an application that can be developed and grown for years to come.