Recently, a friend asked me how I code and build products in the current state of AI. I've seen significant gains in the speed at which I can build things (2x–10x depending on the type of work) by leveraging AI to optimize my workflow. In this piece, I'll focus on practical insights. A caveat before diving deeper: the field is evolving rapidly, and the optimal ways to leverage AI change quickly, often within a span of months. So the insights below may be outdated in a couple of months.
Firstly, let's take a look at the landscape and where things are headed. Today, we are in a state of AI-enhanced coding: advanced Large Language Models (LLMs) help me build things, but my own thinking and ideas on how to implement remain essential inputs. Even the most advanced systems still make mistakes and fail on things that are not well represented in their training set, because LLMs are, at their core, a form of hyper-effective pattern matching. In the near future (a few years), there will be a tipping point. As systems become sufficiently integrated and advanced, the paradigm of how software is built will change fundamentally. Many startups are working on the idea of an independent "software engineer" agent that you feed instructions for entire features or products. You then allocate a ton of compute (inference) toward a given problem, and the model comes back with a solution after spending hours on it. This is a complex system to build, and no one has succeeded at it yet, but the field seems to be drifting in this direction. For this article, I will focus on the first type, AI-enhanced coding, as it impacts my speed today.
Let's start with the code editor: I've changed my workflow to be entirely based on Cursor. It's a fork of VS Code, but re-envisioned with features optimized for AI-enhanced coding. It provides the best developer experience today if you aim to integrate LLMs into your workflow.
Based on this, here is a list of guidelines, insights, and tricks I use to get the most out of it. Although not complete, it provides a basic understanding of how to maximize speed:
- Use the default tech stack that is most heavily represented on the internet (the training set of LLMs). For most web-based applications, this means React + Next.js + some Node.js backend. LLMs perform better when there's been a ton of training data, so going with the most popular choices allows you to get more out of the LLMs. It improves accuracy, and therefore speed, significantly.
- Learn to become very specific with your language. If the instructions to an LLM are vague and unspecific, the results will be poor; if they are clear, specific, and well thought through, the results will be much better. Once I got into AI-enhanced coding, I noticed I was getting lazy with my instructions, giving the model poor directives because my monkey brain optimizes to save energy. That way, you quickly hit a low local maximum in the benefits you get from the models. You cannot fully delegate the thinking about how to build something to the model. A better approach is to think through how to build something first and then write clear instructions. Then your speed will increase drastically.
- Use Composer mode in Cursor whenever possible. Instead of just using the chat interface to the right side of your file (which is useful for single-file edits), use Composer mode for more complex tasks. Whenever you need multi-file edits to implement a feature or change, you can load those files into context in Composer, and with the right set of instructions, it handles multi-file edits nicely. Sometimes issues and errors still appear, but if you take 5–10 iterations in Composer view, describing exactly what is wrong, you get to a good end result.
- Add custom documentation to Cursor. When you use less-popular packages (like TipTap), the models tend to hallucinate in bad ways, which costs a lot of time. Luckily, you can add URLs of docs to Cursor, which then indexes and embeds that information so you can easily reference it in your prompts. This increases the quality of outputs for lesser-known packages.
- Work in very small commits. When working with these models, there inevitably comes a point where the model is confused and starts breaking your code in bad ways, especially for more complicated and advanced changes. It can cost a lot of time to get the files back to a state where the thing you spent an hour on works again as you want it to. From my experience, the best approach is to work in very small commits so that once the model gets something really wrong, you can just revert and start again from scratch. I often start again from scratch three or more times for complicated things until I find the right words to describe exactly what I want. Rolling back quickly is a must for that.
- Dictate a great project structure. A unique edge humans still have over the models is having a better sense of the entire working context of a project. It matters a great deal to have a clean folder structure, typings, and good comments for all the code. The better you are at enforcing this and describing the desired structure to the model, the faster you will be. A good best practice is to be very descriptive and detailed about how you want the code structured to avoid costly refactors down the line.
- Don't delegate thinking. I touched on this earlier, but for most changes or features you build, it's important to think them through in detail before involving any model. You are still better at deciding how something should be implemented. The analogy: you should still write the pseudocode for your project, because that is where the higher-level thinking happens. The models today are great at turning a fine-grained, detailed set of instructions into workable code.
- Add detailed comments. Before every PR, I let the models write detailed comments on what each part is and why it is there. This makes it a lot easier for the model to handle future changes because it has more context on what is written in the code and how the different pieces interact.
- Use V0 for fast UI prototyping. Instead of waiting for a design and then implementing it yourself, you can use V0 by Vercel to easily build good-looking and highly functional UI components. You can then copy and paste the code into your project and tweak it quickly in Cursor. This changes the design process entirely and allows developers who are not too UI-savvy to still build much better prototypes of new features.
- Develop an intuition over time. As you spend more time building with AI, you start to develop an intuition for what the models are good at and how best to use them. For most things, Claude 3.5 Sonnet is the best bet, but sometimes the new o1 models are superior. Sometimes you can get away with lazy instructions; sometimes you need to think more deeply. Sometimes you can get complicated multi-file changes right in one iteration in Composer mode; sometimes you need to take baby steps in chat to get clean code the way you want it. There is no fixed rule for when to use what, but there is a clear path to developing a strong intuition through experience.
The list above is a top-of-mind collection of things I pay attention to in order to optimize the speed at which I build things. This list will evolve over time, but I think it gives you a great starting point to speed up the things you build by over 5x. It is an exciting time to be a software engineer because you can build things at unprecedented speed, and the cost of building decreases rapidly. So you can do more of what you love: expressing yourself and your ideas through code.