Real talk – I've been vibe coding pretty heavily for the last 7-8 months, shipping a bunch of side projects and prototypes, but one thing kept slowing me down: refactoring.

What everyone does (including me until recently): AI spits out working code, you look at it and think "this is messy, duplicated logic, bad naming," so you start manually rewriting chunks, moving things around, fixing indentation... and half the time you introduce new bugs. Then you're back to debugging hell.

Turns out I was using the AI for the wrong part again.

The shift that fixed it: stop touching the code yourself for refactors. Instead, ask the AI to do the refactor with a clear, structured prompt.

Example from a real project – I had this messy React component with inline styles, repeated API calls, and state scattered everywhere.

Old way: I'd spend 30-45 minutes manually cleaning it up, breaking the loading state twice along the way.

New way – prompt:

```text
Refactor this component following these rules:
- Extract repeated logic into custom hooks
- Move all API calls to a dedicated service file
- Use proper TypeScript types everywhere
- Improve component composition (smaller, reusable pieces)
- Keep the exact same functionality and UI output
- Add comments only where non-obvious

Here's the current code: [paste everything]

Output only the refactored files, with clear file names.
```

Result: It gave me four clean files (component, hooks, services, types), everything worked first try, and the code was legitimately better than what I would have written manually.
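
To make that concrete, here's a rough sketch of what two of those extracted files might look like. None of this is OP's actual code; the file names, the `useUser` hook, and the API shape are made up for illustration.

```ts
// userService.ts (hypothetical) – all API calls live in one dedicated service file
export interface User {
  id: string;
  name: string;
}

export async function fetchUser(id: string): Promise<User> {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new Error(`Failed to fetch user ${id}`);
  return res.json();
}
```

```ts
// useUser.ts (hypothetical) – the repeated fetch-and-store logic pulled into a custom hook
import { useEffect, useState } from "react";
import { fetchUser, User } from "./userService";

export function useUser(id: string) {
  const [user, setUser] = useState<User | null>(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    let cancelled = false; // don't set state after unmount
    setLoading(true);
    fetchUser(id)
      .then((u) => { if (!cancelled) setUser(u); })
      .catch(() => { if (!cancelled) setUser(null); })
      .finally(() => { if (!cancelled) setLoading(false); });
    return () => { cancelled = true; };
  }, [id]);

  return { user, loading };
}
```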

My new pattern:

  • Code works but feels messy → don't start editing
  • Copy the relevant files
  • Give the AI a specific refactor brief (performance, readability, separation of concerns, whatever)
  • Paste back the new version
  • Run tests/quick check (see the test sketch after this list)
  • Commit and move on
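
For the tests/quick-check step, even a single smoke test run before and after the refactor catches most accidental behavior changes. A minimal sketch using React Testing Library and Jest; the `UserCard` component and the stubbed data are hypothetical, not from OP's project.

```tsx
// UserCard.test.tsx (hypothetical) – run the same test before and after the refactor
import "@testing-library/jest-dom";
import { render, screen } from "@testing-library/react";
import { UserCard } from "./UserCard";

beforeEach(() => {
  // Stub the network call so the test exercises rendering, not the real API
  const fakeUser = { id: "42", name: "Ada Lovelace" };
  global.fetch = jest.fn(async () => new Response(JSON.stringify(fakeUser)));
});

test("still shows the user's name after the refactor", async () => {
  render(<UserCard id="42" />);
  expect(await screen.findByText("Ada Lovelace")).toBeInTheDocument();
});
```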

Other prompts I've been using:

  • "Convert this class component to a functional one with hooks, keep behavior identical" (see the sketch after this list)
  • "Make this codebase follow SOLID principles, suggest file structure changes"
  • "Optimize this for performance – identify bottlenecks and fix them"
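
For the first prompt on that list, the before/after you can expect looks roughly like this (a hypothetical `Counter` component, not from a real project):

```tsx
import React, { useState } from "react";

// Before: class component with internal state
class CounterClass extends React.Component<{}, { count: number }> {
  state = { count: 0 };
  increment = () => this.setState(({ count }) => ({ count: count + 1 }));
  render() {
    return (
      <button onClick={this.increment}>Clicked {this.state.count} times</button>
    );
  }
}

// After: same markup and behavior, as a functional component with a hook
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount((c) => c + 1)}>Clicked {count} times</button>
  );
}
```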

Results after a couple months: Refactor sessions went from 45+ minutes of frustration to under 10 minutes. Fewer self-introduced bugs. Cleaner repos overall.

The catch: Sometimes the AI over-engineers or changes behavior slightly, but that's rare if your instructions are tight ("do not change functionality" helps a lot). A quick diff usually catches it.

If you're still manually refactoring AI-generated code, you're doing the hard part yourself. Let the model handle the boilerplate cleanup – that's what it's actually good at.

Anyone else have a refactoring trick that's obvious in hindsight? What's your go-to prompt for cleaning up messy code?

  • The thing is, being able to recognize refactor potential requires some knowledge of programming principles, or, in the case of React for example, knowing that Context is even an option instead of prop drilling.

    Some trust-me-bro-tier broccoli zoomer just doesn't have the base knowledge. Same for Tammy from accounting who builds an AI slop beauty app.

    So my method of "file is too big for Claude to read in one go -> refactor" isn't standard? :)

    You brought up the context to illustrate your point, and it’s something I’ve been fighting against, so I feel I have to say something!

    I’m not saying we shouldn’t use the context at all, but we should question ourselves a lot before using it:

    • It’s risky because the compiler doesn’t protect you if you want to reuse a component but forget to inject the dependency.
    • It also adds a layer of indirection, which means it takes more time to understand where the data comes from.
    • Prop drilling can be addressed in different ways, like with parameter objects, and that's often better than a poorly used context.

    My view is that you should only use a context when there's a very clear abstraction, like user, theme, or feature flags.

    Docs that support what I say (from the official React docs):

    — Context is very tempting to use! However, this also means it’s too easy to overuse it. Just because you need to pass some props several levels deep doesn’t mean you should put that information into context.

    Here are a few alternatives you should consider before using context:

    1. Start by passing props. If your components are not trivial, it’s not unusual to pass a dozen props down through a dozen components. It may feel like a slog, but it makes it very clear which components use which data! The person maintaining your code will be glad you’ve made the data flow explicit with props.

    2. Extract components and pass JSX as children to them. If you pass some data through many layers of intermediate components that don’t use that data (and only pass it further down), this often means that you forgot to extract some components along the way. For example, maybe you pass data props like posts to visual components that don’t use them directly, like <Layout posts={posts} />. Instead, make Layout take children as a prop, and render <Layout><Posts posts={posts} /></Layout>. This reduces the number of layers between the component specifying the data and the one that needs it.

    If neither of these approaches works well for you, consider context.

    https://react.dev/learn/passing-data-deeply-with-context
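
    To make the "pass JSX as children" alternative from that quote concrete, here's a quick sketch (Layout and Posts come from the quoted example; the rest is illustrative):

    ```tsx
    import { ReactNode } from "react";

    // Layout no longer needs to know about posts at all
    function Layout({ children }: { children: ReactNode }) {
      return <main className="layout">{children}</main>;
    }

    function Posts({ posts }: { posts: string[] }) {
      return (
        <ul>
          {posts.map((p) => (
            <li key={p}>{p}</li>
          ))}
        </ul>
      );
    }

    // The component that owns the data renders the consumer directly,
    // instead of threading posts through Layout as <Layout posts={posts} />
    function Page({ posts }: { posts: string[] }) {
      return (
        <Layout>
          <Posts posts={posts} />
        </Layout>
      );
    }
    ```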

    [deleted]

    K

    Yeah, sometimes a simple "K" says a lot. But seriously, understanding the fundamentals makes a huge difference when using AI for refactoring.

    Good on you for sharing insights into the shortcomings of vibecoding. Why do you feel the need to belittle those who do it though? Do you resent them for wanting to build apps without doing the hard work of learning the craft?

    I resent the perceived expectation that people want to pay for their slop SaaS. I resent the posts from these people with obvious AI grammar, phrasing, etc.

    Bro tier broccoli zoomer might have a better idea for an app than you, and people might see value in it. If it works, who cares if it’s spaghetti under the hood?

    The people whose private information is written publicly to some bro-coded Supabase table

  • These cavemen are still pasting code in their prompts.

    Yes, and everyone is doing what OP says they are (spoiler: they are not)

    This is just low-quality spam.

    A year ago copy-paste coding was respectable. Today, pasting snippets into chat instead of letting an agent read the entire project context is straight-up neolithic.

    Haha absolutely, I can’t imagine doing that, it seems so long ago…

  • Once I refactored my project to fully follow FSD (Feature-Sliced Design), the refactoring prompts became way shorter and there was less refactoring needed in the first place! All frontier models know what you mean by FSD; it is well documented.

    “please refactor xyz to fully adhere to FSD” - this prompt is saying a lot without saying a lot.

  • This is too simple a prompt; you could just use ChatGPT or your LLM of choice to generate a refactor prompt based on your own code stack and do much better.

    That will really help to stop drift, and bad DB schema changes spreading throughout your codebase as you implement more features.

  • I'm going to say this is bad advice; I'll give better.

    Delete everything that looks fucked immediately. Then tell the agent to fix your build (and maybe give it a hint as to what you deleted and why, so it doesn't do it again).

    The problem is that any duplicates, leftover legacy code, etc. all poison the context. Always build up; avoid building down. This goes for comments as well.

    This guy just gives weird, bad AI-written advice formatted as per a template. He did the same thing yesterday.

  • Anyone else have a refactoring trick that's obvious in hindsight?

    Use a real IntelliJ / JetBrains-based IDE. Not even a trick, that's just how you do refactors in the non-vibe-code world.

  • I ask it to propose how to build something, along with my guard rails (similar to your refactor request). I may go back and forth a few times to refine the AI proposal - and once it's good, bam.. it's better than I could do.

    I have shipped things that took an hour or two of prompting that otherwise would have taken me a couple of weeks to build manually.

  • Yeah, don’t do that. Instead, wait until a symptom of bad code appears: you say "change this name," the name shows up in 10 spots that should all be in the same reusable component, and you watch the AI make 10 updates. That’s a symptom of bad code; it should be one code update.

    Now you have a symptom, and instead of manually writing code, prompt the AI. Ask what happened, why it’s built that way, and whether there’s a more strategic way to build for your purpose.

    The AI will agree (of course) and refactor for you. After the refactor, tell it to capture what it learned in a document. This is the beginning of your project’s engineering principles doc, which you’ll add to a lot.

    When you notice bad-code symptoms creeping in, or you know you need the AI to do something the way you corrected it in the past, tell it to reread the engineering principles before beginning.

    I have massive AI-coded apps built this way. Trading apps. With encryption. It’s stable. It’s secure. It’s production ready.

  • I actually was able to get GPT5.2 to do a one-shot refactor of a 2000-line Angular component lol. Broke it down into a reasonable folder hierarchy and split it across like 10 different files, including services. Was super impressive.

  • I just tell it its code is trash. It usually understands.

  • “Please make sure refactored/modularized codebase is 100% backward compatible”

  • Thank you for this useful post, and thanks to the other contributors! I will definitely be trying some of your techniques!

  • Why do you keep making these posts that start with “Real talk” and then say “what everyone does” and then say some random thing that everyone doesn’t do?

    Is it a template you give your AI to make posts for karma??

    Because you made an identically formatted post like this yesterday, based on a similar logical fallacy.

    OP, what on earth is going on? Because this is just getting weird.