Unlocking New Superpowers in Web App Development with AI-Powered Workflow
Over the past few weeks, I’ve been experimenting with a unique AI-assisted coding workflow using Claude Code and GitHub to build a new web app. This approach has truly unlocked new superpowers in how I develop software, blending traditional software engineering practices with AI capabilities. In this blog post, I’ll share an overview of this workflow, why it matters, and dive into the key phases and lessons learned during the process.
The AI Coding Workflow: A High-Level View
The workflow I follow is structured yet flexible, built around a core cycle of:
- Plan
- Create
- Test
- Deploy
Here’s a quick snapshot of how it works:
- I start by creating GitHub issues that define all the work I want done on the app.
- In Claude Code, I use a custom slash command to process these issues. This command instructs the AI to:
  - Break down large issues into small, atomic tasks using “scratch pads.”
  - Write code based on the plan.
  - Test the code by running the test suite and using Puppeteer to simulate browser interactions for UI changes.
  - Commit the changes to GitHub and open a pull request (PR).
- PRs are reviewed either by me or by Claude Code with a specialized review command.
- GitHub Actions run continuous integration (CI) to execute tests and linters automatically, ensuring that changes are safe to merge.
- After merging, I clear the AI’s context window with the /clear command so the AI can start fresh on the next issue.
Why Follow a Structured Workflow?
This process mirrors the classic Software Development Life Cycle (SDLC): plan, create, test, and deploy—an approach that’s been foundational in software engineering for decades. While AI assistants like Claude Code are powerful, writing code is only one part of building and shipping software. Managing complexity, maintaining quality, and coordinating releases still require structure.
The workflow is heavily inspired by GitHub Flow, a lightweight branching model ideal for small teams—perfect for a scenario where the “team” is one human and one AI assistant.
The Planning Phase: Creating and Refining GitHub Issues
The journey begins with clear, specific, and granular GitHub issues:
- I initially used dictation and AI (Claude) to draft a requirements document, then converted that into around 30–40 GitHub issues.
- Early on, I learned that not all issues were ready for coding; many needed refinement to be atomic and actionable.
- I spent significant time breaking down tasks, prioritizing, and ensuring each issue was tightly scoped.
- This phase felt like wearing a manager’s hat—writing detailed specs, reviewing generated code, leaving feedback, and sometimes discarding work to realign with goals.
- The better defined the issues, the smoother the AI’s coding process.
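As a concrete sketch, here is how one of those tightly scoped issues might be filed with the GitHub CLI. The title, labels, and acceptance criteria are illustrative, not from my actual backlog:

```shell
# Draft the issue body in a file first, so it can be reviewed and refined.
cat > issue-body.md <<'EOF'
## Goal
Add a "forgot password" link to the login form.

## Acceptance criteria
- Link appears under the password field
- Clicking it routes to the password reset page
- A system test covers the happy path
EOF

# With the body on disk, a single gh call opens the issue
# (requires an authenticated gh session):
#   gh issue create --title "Add forgot-password link" --body-file issue-body.md --label frontend
```

Writing acceptance criteria into the body is what makes an issue atomic enough for the AI to plan against without guessing.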
Setting Up the Foundation: Testing and Continuous Integration
Before rapid iterative development can begin, it’s crucial to establish a solid foundation:
- I prioritized setting up a test suite with Rails (leveraging its integrated testing framework) to ensure new code wouldn’t break existing functionality.
- Continuous integration with GitHub Actions was configured to automatically run tests and linters on every commit.
- I also set up Puppeteer via a local MCP server, allowing Claude Code to interact with the app’s UI in a real browser—testing buttons, forms, and user flows.
- Watching AI click around and test the UI was both surprising and satisfying, adding confidence that UI changes were functional.
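For reference, a minimal GitHub Actions workflow for a Rails app might look like the following. The job layout and commands are a sketch; adapt them to your Gemfile, database setup, and linter of choice:

```yaml
# .github/workflows/ci.yml — run tests and the linter on every push and PR
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true   # runs bundle install and caches gems
      - run: bin/rails db:prepare
      - run: bin/rails test
      - run: bin/rubocop
```

With this in place, every AI-generated PR gets the same safety net as human-written code before merge.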
Custom Slash Commands: Orchestrating AI Workflows
Claude Code’s power comes from customizable slash commands—prompt templates that can accept arguments like issue numbers. My main slash command for processing issues is divided into four parts:
- Plan: Uses the GitHub CLI to pull issue details, searches prior work (scratch pads and PRs), and breaks down the issue into small tasks.
- Create: Writes code based on the plan.
- Test: Runs tests and UI checks.
- Deploy: Commits code and opens PRs.
I use “think harder” prompts to encourage deeper reasoning during planning, helping the AI produce better task breakdowns and solutions.
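Claude Code slash commands live as Markdown prompt files under .claude/commands/, with $ARGUMENTS standing in for whatever follows the command. A stripped-down version of my four-part command could be bootstrapped like this (the file contents here are a simplified sketch of the real prompt, not the full thing):

```shell
mkdir -p .claude/commands

# /process-issue <number> — the four-phase prompt template
cat > .claude/commands/process-issue.md <<'EOF'
Work on GitHub issue #$ARGUMENTS.

1. PLAN: Use `gh issue view $ARGUMENTS` to read the issue. Search prior
   scratch pads and PRs for related work. Think harder and break the issue
   into small, atomic tasks in a new scratch pad.
2. CREATE: Write the code, task by task, following the plan.
3. TEST: Run the test suite; for UI changes, verify the flow with Puppeteer.
4. DEPLOY: Commit the changes and open a pull request with `gh pr create`.
EOF
```

Once the file exists, typing /process-issue 42 in a Claude Code session expands into that full prompt with the issue number substituted in.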
Creating, Testing, and Deploying Code: Trust but Verify
A common objection to AI-assisted coding is “You don’t know what the code does!” My approach addresses this by:
- Personally reviewing PRs to catch issues or stylistic improvements.
- Sometimes having Claude Code conduct PR reviews itself, using programming principles from mentors like Sandi Metz to suggest maintainability improvements.
- Relying heavily on automated tests and Puppeteer UI tests to catch regressions.
- Letting Claude do commits and PR creation to speed up iterations, although I recommend reviewing changes thoroughly.
For deployment, I use Render, which automatically deploys the main branch after merge—merging a PR effectively means deploying to production.
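If you define the service as a Render Blueprint, that deploy-on-merge behavior is just configuration. The field values below are illustrative for a Rails app; check Render’s Blueprint reference for your stack:

```yaml
# render.yaml — deploy the main branch automatically after each merge
services:
  - type: web
    name: my-rails-app
    runtime: ruby
    branch: main
    autoDeploy: true
    buildCommand: bundle install && bin/rails assets:precompile
    startCommand: bundle exec rails server
```

The practical consequence: the PR review plus CI gate is the last checkpoint before production, which is exactly why the testing foundation matters so much.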
Managing the Human-AI Collaboration
In this workflow, the human’s role is strongest in planning and code review phases, while the AI handles most of the coding, testing, and deploying tasks. This balance allows me to scale my productivity while maintaining control over software quality and direction.
After merging a PR, I run /clear in Claude Code to wipe the AI’s context, ensuring it starts fresh on the next issue without unnecessary context pollution. Each issue should be self-contained with all necessary information.
Additional Insights: Claude via GitHub Actions and Git Worktrees
- Claude via GitHub Actions: While intriguing, using Claude directly in GitHub Actions currently incurs extra API costs. It’s best suited for small PR fixes rather than large feature development.
- Git worktrees: This Git feature allows running multiple parallel AI agent sessions on different branches simultaneously. However, in practice, I found it cumbersome—permissions had to be repeatedly approved, and managing multiple sessions led to extra overhead. For now, a single Claude Code instance works best for my project.
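For completeness, this is what the worktree setup looks like in plain Git: each worktree is a separate checkout sharing one repository, so a second agent session can run in it on its own branch. The repo and branch names below are made up for the demo:

```shell
# Create a throwaway repo with a single commit
git init demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit"

# Add a second working directory on a new branch for a parallel session
git worktree add ../demo-feature -b feature/login-form

# Show both checkouts and the branch each one has checked out
git worktree list
```

Each worktree gets its own index and working files, which is what makes parallel agents possible—and also what creates the per-session permission overhead mentioned above.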
Final Thoughts
This AI-assisted workflow isn’t about replacing human developers but about augmenting the development process with smart tooling that handles repetitive tasks, enforces testing, and speeds up iteration. By combining disciplined software engineering practices with the power of AI coding agents, I’ve found a workflow that feels like unlocking new superpowers in productivity.
If you’re curious to try AI-assisted coding with Claude Code, start by:
- Writing clear, granular GitHub issues.
- Establishing a robust test suite and CI pipeline.
- Using Puppeteer for UI testing.
- Creating custom slash commands to orchestrate AI work.
- Balancing your involvement between planning, reviewing, and letting AI handle code creation and deployment.
For more tips and deeper insight, check out my video on Claude Code Pro Tips linked below.
Happy coding!
Further Resources
- Thomas Ptacek’s post on AI-assisted coding (Highly recommended read)
- GitHub Flow Workflow
- Claude Code Documentation
- Puppeteer Documentation
Did you find this helpful? Let me know your thoughts and experiences with AI coding workflows in the comments!