Idle Coding with Async Agents

The concept of "idle games" has always fascinated me—games that play themselves, requiring only occasional check-ins to make progress. They're a great way to pass the time, but I started to wonder: what if we could apply that same principle to something productive? What if we could have "idle coding"? The idea of being productive and building something real while going about my day, only checking in periodically, was too compelling to ignore.

I decided to test the limits of what's possible with modern LLMs and development workflows. My goal was ambitious: build an entire application from scratch, not by sitting down for long, focused coding sessions, but through asynchronous collaboration with an AI agent. For this experiment, I chose Google's Jules, an asynchronous coding agent that takes on tasks and updates a repository on its own. The experience has been surprisingly successful, fundamentally changing how I think about development. It's allowed me to code from anywhere, often just from my phone.

It feels like a game. I spin up a few tasks for my agent, go watch an episode of a show on Netflix or run some errands, and then check back in after 30 minutes to see the progress, answer any questions, and kick off the next set of tasks. I’m being productive while having fun.

Of course, it's not magic. To make this "idle coding" workflow effective, I quickly learned that establishing a solid foundation and a clear set of rules is critical. It’s not about just firing off a prompt and hoping for the best; it’s about creating a highly structured environment where an AI agent can thrive. Here are some of the most important practices I discovered on my journey.

Build a Robust Framework for Your Agent

The key to success is minimizing friction and maximizing automation. My first step was to configure my GitHub repository to auto-merge pull requests as soon as they passed all checks. I used the "Publish a PR" feature in Jules, which meant that once I approved an agent's work, it was automatically submitted, tested, and merged into the main branch without any further intervention from me. To make this work, I also set up a GitHub workflow to automatically deploy the application to Cloudflare Pages every time a PR is merged.
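I won't reproduce the whole workflow file here, but a minimal sketch of the deploy job looks something like this (the build commands, output directory, and project name are placeholders; adapt them to your own setup):

```yaml
name: Deploy to Cloudflare Pages

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
      # Publish the build output with Cloudflare's wrangler action.
      # "dist" and the project name below are illustrative.
      - uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
          command: pages deploy dist --project-name=my-app
```

With auto-merge enabled on the repository and required status checks set up as branch protection rules, every approved PR flows straight from review to production.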

This immediately highlighted the need for an extensive suite of tests. With code being merged automatically, I had to be confident that no regressions were being introduced. I tasked my agent with writing both unit and end-to-end tests for every new piece of functionality. This became a non-negotiable part of the process. If a regression did slip through, the first step after fixing it was to have the agent write a new test to ensure it never happened again.
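To make that habit concrete, here's a minimal sketch of the kind of regression test I mean, using Playwright as the e2e runner (the route, labels, and scenario are hypothetical):

```ts
import { test, expect } from '@playwright/test';

// Regression test: saving a workout once should create exactly one entry.
// The route and accessible names below are hypothetical examples.
test('logging a workout does not duplicate it', async ({ page }) => {
  await page.goto('/workouts');
  await page.getByRole('button', { name: 'Add workout' }).click();
  await page.getByLabel('Exercise').fill('Bench press');
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('Bench press')).toHaveCount(1);
});
```

Once a test like this is in the suite, the auto-merge pipeline will reject any future PR that reintroduces the bug.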

To support this, I created a validate script in my package.json that runs everything: unit tests, e2e tests, type-checking, and linting. It's a single command that verifies the health of the entire application.
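A sketch of what that looks like, with Vitest, Playwright, tsc, and ESLint standing in as example tools (substitute whatever your stack uses):

```json
{
  "scripts": {
    "test": "vitest run",
    "test:e2e": "playwright test",
    "typecheck": "tsc --noEmit",
    "lint": "eslint .",
    "validate": "npm run test && npm run test:e2e && npm run typecheck && npm run lint"
  }
}
```

One command, one pass/fail answer, for both me and the agent.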

Guide, Don't Just Command

An AI agent, no matter how powerful, needs clear instructions. I found that maintaining a few key documents was essential for keeping the project on track.

  • An exhaustive agents.md file: This is the rulebook for the AI. It explains best practices, architectural patterns, and project-specific conventions. For example, I noticed the built-in frontend validation in Jules could be brittle, so I added a note to my agents.md telling the agent it was okay to skip that step if it failed, preventing unnecessary interruptions. There are good public collections of agents.md files to use as a starting point, and a short illustrative excerpt of mine appears after this list.
  • A PLAN.md or OVERVIEW.md: This file serves as the high-level roadmap, explaining the primary functionality and goals of the application. It gives the agent the "why" behind the "what."
  • A directory of reusable components: I'm using React and Tailwind. To give the agent a strong starting point, I copied the excellent, free components from HyperUI directly into my repository. I then instructed the agent in agents.md to always reference this directory to see if a suitable component already exists before building a new one. This saved an immense amount of time and enforced consistency.
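To make this concrete, here's an abbreviated, illustrative excerpt of the kind of rules that live in my agents.md (the wording and paths are paraphrased examples, not the literal file):

```md
# Agent guidelines

- Run `npm run validate` before publishing a PR; every check must pass.
- If the built-in frontend validation step fails or hangs, it's OK to
  skip it and continue with the task.
- Before building any new UI component, check src/components/hyperui/
  for an existing one; reuse or extend it instead of writing a new one.
- Every new feature needs unit tests plus an e2e test for the happy path.
```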

Embrace Iteration and Imperfection

This process isn't perfect. Bugs happen. Regressions happen. Merge conflicts are inevitable too, even though I try to reduce them by scoping parallel tasks to different parts of the codebase. The key is to iterate in small, manageable steps. In just a few days, my repository had over 200 pull requests. This rapid, iterative cycle means that any single bug or regression is usually small and easy to fix.

You also have to be an active participant. Get to know your file structure. Even though you didn't write most of the code, understanding the main components of your app is crucial for guiding the agent effectively. Designs and layouts can be particularly tricky, so having a basic familiarity with CSS and Tailwind helps you provide better instructions. Don't be afraid to jump in and make small tweaks by hand when necessary.

The entire workflow is surprisingly mobile-friendly. With Jules and the GitHub app on my phone, I can manage the whole process while I'm out. I haven't quite figured out resolving merge conflicts on mobile yet, but for most tasks, it works incredibly well.

The Result

This experiment in "idle coding" has been a resounding success. It has shifted my perspective on what it means to be a developer. It's no longer just about hours spent in front of a keyboard; it's about effectively managing and guiding an AI partner to achieve a goal.

I built a fully functional fitness tracking application in the idle moments of my days. It's both a proof of concept for this development style and a real product I plan to keep using. You can check out my progress so far right here: https://gym-habit.adamkrasny.com/.

The way we build software is changing. Meaningful progress no longer requires your full, continuous focus at the keyboard; sporadic chats and thoughtful guidance can now build entire applications. Why not give it a try?