📝 Greg Baugues Blog

Bringing Claude Code to Life with Custom Sounds: A Fun Introduction to Hooks

When Anthropic announced hooks for Claude Code, it opened a whole new world of possibilities for customizing how Claude interacts with your workflow. Inspired by a playful comment on the Claude AI subreddit—“I’m going to use this to make Claude meow. I’ve always wanted a cat, but I’m allergic”—I decided to dive in and create a unique sound-based notification system for Claude’s various events. The result? Claude meowing, beeping, and even speaking as it runs commands in the background.

Why Use Hooks in Claude Code?

Hooks are a powerful addition that let you run custom commands triggered by different Claude events. These events include:

  • Pre-tool use: Before Claude uses a tool like bash, file editing, or fetching data from the web.
  • Post-tool use: After Claude completes a task.
  • Notification: When Claude is waiting for your approval.
  • Stop: When Claude finishes its task.
  • Pre-compact: Before autocompacting your session history.

By assigning sounds or commands to these events, you get real-time feedback about what Claude is doing, making it easier to understand and control its behavior.

Setting Up Hooks: Where to Start

To set up hooks, you edit your settings.json file within your project. This ensures your hooks are version-controlled and consistent across different work trees. Although you can also configure hooks in user-level settings, keeping them project-specific is a good starting point.

Here’s the gist of the setup:

  • Create a hooks directory inside your .claude folder.
  • Define a list of hooks in your settings.json, specifying the event type and the command to run.
  • Use a single Python script (hook_handler.py) to handle all hook events. This centralizes your logic and simplifies debugging.

Example snippet from settings.json:

json "hooks": [ { "event": "notification", "command": ["python3", "/path/to/hook_handler.py"] }, { "event": "stop", "command": ["python3", "/path/to/hook_handler.py"] }, { "event": "pre-tool-use", "command": ["python3", "/path/to/hook_handler.py"] } ]

The Magic Behind the Scenes: The Python Hook Handler

The hook_handler.py script receives JSON data from Claude via standard input whenever an event is triggered. This data includes session info, event name, tool name, and tool input, which you can log and analyze to determine what action to take.

I used Claude itself to generate and iteratively refine the Python script, making it as readable and maintainable as possible. The script’s responsibilities include:

  • Logging incoming event data for debugging.
  • Mapping specific events and commands to corresponding sounds or voice notifications.
  • Playing sounds using predefined audio files stored in categorized directories (beeps, voices, meows).
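
To make that concrete, here is a minimal sketch of what such a handler could look like. It assumes macOS’s afplay for audio playback and an illustrative sounds/ directory layout; the field names follow the hook payload described above, and everything else (paths, categories) is placeholder:

```python
#!/usr/bin/env python3
"""Minimal hook handler sketch: read the event JSON from stdin, log it, play a sound."""
import json
import random
import subprocess
import sys
from pathlib import Path

SOUNDS_DIR = Path(__file__).parent / "sounds"   # e.g. .claude/hooks/sounds/{beeps,meows,voices}
LOG_FILE = Path(__file__).parent / "hooks.log"

def play(category: str) -> None:
    """Pick a random file from a category directory and play it with afplay (macOS)."""
    files = list((SOUNDS_DIR / category).glob("*.mp3"))
    if files:
        subprocess.run(["afplay", str(random.choice(files))], check=False)

def main() -> None:
    event = json.load(sys.stdin)                          # Claude pipes event data via stdin
    with LOG_FILE.open("a") as log:                       # log everything for debugging
        log.write(json.dumps(event) + "\n")

    tool = event.get("tool_name", "")
    if tool == "Bash":
        play("beeps")
    elif tool in ("Edit", "Write"):
        play("meows")
    else:
        play("voices")

if __name__ == "__main__":
    main()
```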

Curating the Sounds: From Beeps to Meows

To keep things fun and engaging, I sourced sounds from Epidemic Sound, a platform I often use for my YouTube videos. They offer a wide range of sound effects, including beeps, cat meows, and even voice actors providing customizable voice clips.

Some tips for working with Epidemic Sound:

  • Sounds often include multiple effects in one file, so use their segment tool to extract just the part you want.
  • You can assign specific sounds to particular actions, for example:
      • A "committing" voice clip when Claude commits code.
      • Different beeps for bash commands, file edits, or pull request creation.
      • Sad meows or playful cat sounds for other events.

This mapping helped me instantly recognize what Claude was doing just by listening.

Lessons Learned and Practical Uses for Hooks

Building this sound notification system was more than just a fun experiment; it was a fantastic way to understand Claude’s inner workings and the power of hooks. Here are some insights and practical applications:

  • Understanding Claude Code behavior: Assigning sounds to events revealed how often Claude updates to-dos or runs bash commands.
  • Custom safeguards: You can create hooks that prevent dangerous commands like rm -rf from executing accidentally.
  • Automation enforcement: Hooks can ensure tests run before opening pull requests or run linters automatically.
  • Better notifications: Replace or supplement default notifications with customized alerts that better fit your workflow.

Getting Started Yourself

If you want to explore this yourself, check out the full code repository at hihigh.ai/hooks. The Python script and example sounds are all there for you to experiment with and customize.

I’d love to hear how you’re using hooks in Claude Code—whether for fun like me or to build more pragmatic workflows. Hooks unlock a new layer of control and creativity, and I hope this post inspires you to dive in!


Summary

  • Hooks allow running custom commands on Claude Code events.
  • Setting hooks in settings.json keeps them project-specific and version-controlled.
  • A Python script can handle multiple hook events for better maintainability.
  • Assigning sounds to events helps understand Claude’s behaviors.
  • Hooks can be used for both fun notifications and practical workflow controls.
  • Check out the full project and share your hook ideas!

Happy coding—and meowing—with Claude Code! 🐱🎶

📹 Video Information:

Title: I used HOOKS to make CLAUDE CODE Meow, Beep, and Talk
Duration: 14:28

How Custom Sounds Help You Understand Claude Code Hooks (and Why You Should Try It!)

When Anthropic announced hooks for Claude Code, one Reddit user joked, “I’m going to use this to make Claude meow. I’ve always wanted a cat, but I’m allergic.” That comment sparked an unexpectedly powerful way to get started with hooks: adding custom sounds to different Claude events. Not only is it fun and whimsical, but it’s also an educational deep-dive into how Claude Code operates under the hood.

Let’s walk through how this sound-based project works, why it’s useful, and how it can inspire more advanced, pragmatic uses of hooks.


Understanding the Power of Claude Code Hooks

Claude Code hooks let you assign custom commands to different events in the coding workflow. These events include things like:

  • Pre-tool use: Before Claude uses a tool (often Bash commands, editing files, reading files, web requests, etc.)
  • Post-tool use: After Claude finishes using a tool.
  • Notifications: When Claude is waiting for user input or approval.
  • Stop: When the current task is completed.
  • Pre-compact: Before Claude auto-compacts session history.

By assigning sounds to each event, you get immediate, sensory feedback as these events are triggered. This helps you see (or hear!) what’s happening in the background, making hooks less abstract and more tangible.


Setting Up Your Project: Where to Configure Hooks

To get started, you’ll edit your settings.json—specifically, the one in your project directory (not your user or local settings). This ensures your hook configuration is committed to your repository and applies across all work trees.

Within your project, create a hooks directory to store all the scripts and sound files. If you eventually want these hooks to work across all projects, you can migrate them to your user settings, but localizing them per project is best for experimentation.


Defining Hooks in settings.json

In your settings.json, hooks are defined as a list, where each hook specifies:

  • Type of event (e.g., pre-tool use, stop, notification)
  • Command to run (in this case, a Python script)

For simplicity and maintainability, it’s best to keep the JSON configuration minimal and put most of the logic inside your Python script. This allows for easier debugging and flexibility.


Building the Python Hook Handler

The Python script acts as the core logic center. Here’s how to approach it:

  1. Log Incoming Data: Whenever Claude triggers a hook, JSON data is piped into your script via standard input. This data contains session information, the event name, the tool being used, and any tool-specific input. Logging this is crucial for understanding what’s happening and for debugging.

  2. Map Events to Sounds: Create directories for different types of sounds (beeps, meows, voices, etc.). You can use sound effect services like Epidemic Sound to download fun beeps, cat meows, or even AI-generated voice snippets.

  3. Assign Sounds to Actions: Either assign random sounds or, more effectively, map specific sounds to specific events or Bash commands. For example, use a “meow” for file edits, a “beep” for notifications, or a British-accented “committing” for git actions.

  4. Optional Patterns: Fine-tune the sound mapping for more granular feedback. For example, distinguish between editing files, reading files, or running specific CLI commands by matching against the command name.
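
As a sketch of that finer-grained mapping, the handler can match the Bash command string from the tool input against simple patterns. Sound file names and patterns here are purely illustrative:

```python
# Illustrative pattern-to-sound mapping for Bash tool calls.
COMMAND_SOUNDS = {
    "git commit": "voices/committing.mp3",   # British-accented "committing"
    "gh pr create": "beeps/pull_request.mp3",
    "pytest": "beeps/tests.mp3",
}

def sound_for_bash(tool_input: dict) -> str:
    """Return the sound file for a Bash command, falling back to a cat sound."""
    command = tool_input.get("command", "")
    for pattern, sound in COMMAND_SOUNDS.items():
        if pattern in command:
            return sound
    return "meows/default.mp3"
```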


Why Start with Sounds?

Assigning sounds to hooks isn’t just playful—it’s surprisingly educational. You’ll quickly discover:

  • Which events are triggered most often (e.g., how many actions are actually Bash commands)
  • How Claude interacts with your files and tools
  • Opportunities for more advanced hook logic (like intercepting dangerous commands or ensuring tests run before a pull request)

Making abstract processes audible helps demystify Claude’s inner workings and gives you confidence to try more serious customizations.


Beyond Sounds: Unlocking the Full Potential of Hooks

Once you’re comfortable, hooks enable all sorts of productivity and safety improvements:

  • Preventing dangerous commands: Block risky Bash operations like rm -rf before they execute.
  • Running Linters: Automatically trigger code quality checks after edits.
  • Enforcing Test Runs: Ensure tests pass before allowing pull requests.
  • Custom Notifications: Replace unreliable system beeps with tailored sounds or even spoken messages.

Hooks give you deterministic control over Claude Code’s behavior—making your coding environment smarter, safer, and more responsive.
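
As one concrete example, a pre-tool-use hook can refuse a risky Bash command before it runs. The sketch below assumes Claude Code’s documented convention that a hook exiting with status code 2 blocks the tool call and feeds its stderr back to Claude; the pattern list is illustrative:

```python
#!/usr/bin/env python3
"""Sketch of a pre-tool-use hook that blocks destructive commands."""
import json
import sys

event = json.load(sys.stdin)
command = event.get("tool_input", {}).get("command", "")

DANGEROUS = ("rm -rf", "git push --force", "drop table")

if event.get("tool_name") == "Bash" and any(p in command.lower() for p in DANGEROUS):
    print(f"Blocked dangerous command: {command}", file=sys.stderr)
    sys.exit(2)   # exit code 2 blocks the tool call; stderr is reported back to Claude

sys.exit(0)       # allow everything else
```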


Ready to Try? Resources and Next Steps

You can find all the code and examples from this project at hihigh.ai/hooks.

Whether you want to make Claude meow, beep, or speak in a proper British accent, starting with custom sounds is a delightful way to understand hooks. Once you grasp the basics, you’ll be well-equipped to use hooks for more complex, pragmatic workflows.

What creative uses have you found for Claude Code hooks? Share your ideas and let’s build smarter tools together!

Unlocking AI Superpowers: A Workflow for AI-Assisted Coding with Claude Code and GitHub

Over the past few weeks, I have been developing a new web app using an AI-assisted coding workflow that leverages Claude Code and GitHub. This approach has truly unlocked new superpowers in my development process. In this post, I’ll share the workflow I use, why it works, and dive into the details of each phase: planning, creating, testing, and deploying code with AI assistance.


The High-Level AI Coding Workflow

At its core, my workflow follows a simple cycle:

  1. Plan: I create GitHub issues that define the work to be done. Claude Code uses detailed slash commands to process these issues, breaking down large tasks into small, atomic steps using scratch pads.
  2. Create: Claude Code (the AI assistant) writes the code based on the plan, commits the changes, and opens a pull request (PR).
  3. Test: Testing happens in two ways — running automated test suites and using Puppeteer to simulate browser interactions for UI changes.
  4. Deploy: After tests pass, the PR is reviewed and merged, triggering continuous integration (CI) workflows and deployment.

Once an issue is completed and merged, I clear the AI’s context window to start fresh on the next issue, keeping the process clean and efficient.


Why Use a Workflow Like This?

Writing code is just one part of the software development life cycle (SDLC), which traditionally follows the phases of plan, create, test, and deploy. AI coding assistants like Claude Code excel when integrated into a well-defined workflow that mirrors these phases.

My workflow is heavily inspired by GitHub Flow, a lightweight branching model designed for small teams—which, in this case, is one human developer plus one AI assistant. This approach ensures that AI-generated code is managed, reviewed, and deployed systematically, reducing errors and maintaining code quality.


Phase 1: Creating and Refining GitHub Issues

The process begins with well-crafted GitHub issues outlining precise, granular tasks. Initially, I used dictation tools and Claude to convert high-level ideas into a requirements document, then generated GitHub issues from there.

Key insight: The quality of issues directly impacts AI performance. Early on, I learned that vague or large issues led to suboptimal results. Breaking down work into very specific, atomic issues made it easier for Claude Code to produce accurate and reliable code. This phase requires human attention, acting much like an engineering manager—refining specs, prioritizing tasks, and clarifying requirements.


Phase 2: Setting Up Your Foundation

Before rapid development can begin, you need a solid foundation:

  • Test Suite & Continuous Integration: I set up automated tests and CI using GitHub Actions, ensuring that tests run with every commit Claude makes.
  • Puppeteer for UI Testing: Puppeteer allows Claude to interact with the app’s UI in a browser, simulating clicks and form submissions to verify front-end changes.
  • GitHub CLI: Installing the GitHub CLI enables Claude Code to interact programmatically with GitHub, handling issues, commits, and PRs.

This setup ensures that AI-generated code is continuously validated, reducing bugs and maintaining stability.


Phase 3: Planning with Custom Slash Commands

The heart of the workflow is a custom slash command I created for Claude Code to process issues. This command breaks down into four parts: plan, create code, test, and deploy.

  • The planning stage is the most critical and involved. Claude Code uses the GitHub CLI to fetch issue details and searches previous work and pull requests to avoid duplication.
  • I leverage “think harder” prompt techniques to encourage the AI to deeply analyze the task and break it into manageable subtasks documented in scratch pads.
  • This detailed planning guides the AI’s code generation and testing, improving accuracy and relevance.

Phase 4: Creating, Testing, and Deploying Code

Once the plan is ready, Claude Code writes the code and commits changes directly to a feature branch. Then:

  • The test suite runs automatically via CI.
  • Puppeteer performs UI tests if needed.
  • Claude opens a pull request for review.

Human Review & AI Collaboration

While Claude can handle commits and PR creation, human oversight remains essential. I read PRs, leave comments, and sometimes use another custom slash command to have Claude review the PR in the style of coding experts like Sandi Metz, which helps identify maintainability improvements and catch subtle issues.

There have been moments when I trusted Claude to handle everything and merged PRs after tests passed, but I remain cautious. Good tests are my safety net, catching regressions and ensuring that new features don’t break existing functionality.


Managing Context and Workflow Efficiency

After merging a PR, I use a /clear command to wipe Claude’s context window. This “cold start” approach means each issue must be self-contained, with all necessary information included. It keeps token usage efficient and reduces the risk of confusion from lingering context.


Exploring GitHub Actions and Parallel Work with Work Trees

Anthropic recently introduced Claude integration via GitHub Actions, letting you tag Claude directly within GitHub to handle small tasks like copy edits or minor tweaks. However, for large feature development, I prefer using the console version of Claude Code, which gives better control and cost efficiency.

Work Trees for Parallel AI Agents

Work trees allow running multiple instances of Claude on different branches in parallel—like multitabling in online poker. Each instance works independently in its own directory, enabling concurrent feature development.

In practice, I found this approach a bit cumbersome due to repeated permission approvals and extra management overhead. For my current project size, a single AI instance handling one issue at a time works best.


Final Thoughts: Finding Your Balance Between Human and AI Roles

This workflow highlights the complementary roles of human developers and AI assistants:

  • Humans: Spend significant effort in planning, refining issues, and reviewing code to maintain quality and direction.
  • AI: Handles code creation, testing, committing, and even PR reviews, accelerating development speed.

Finding the right balance depends on your project size, complexity, and trust in AI-generated code. With a strong foundation and clear processes, you can empower AI to take on the heavy lifting while you focus on high-level decisions and quality assurance.


Want to Learn More?

If you found this workflow interesting, you might also enjoy my video on Claude Code Pro Tips, where I share deeper insights and practical advice for working effectively with AI coding assistants.


Harnessing AI coding assistants like Claude Code within a structured workflow can transform how you build software—unlocking new efficiencies and superpowers you never thought possible. Give this approach a try, and see how AI can become your ultimate coding partner.

Mastering Claude Code: Pro Tips from a Developer’s Experience

Hey there! I’m Greg, a developer who, over the past few months, has made Claude Code my go-to tool for writing code. In this post, I’ll share some valuable pro tips to help you get the most out of Claude Code, inspired primarily by a fantastic post from Boris Cherny, the creator of Claude Code at Anthropic.

Whether you’re new to Claude Code or looking to deepen your expertise, these insights will speed up your workflow, improve your code quality, and help you manage your projects more efficiently.


1. CLI Basics & Command Line Arguments

Claude Code operates as a powerful command-line interface (CLI) tool. If you’re familiar with bash-based CLIs, you’ll feel right at home.

  • Pass command line arguments to run on startup.
  • Use -p to run Claude Code in headless mode.
  • Chain Claude Code with other CLI tools or pipe data into it.
  • Run multiple instances simultaneously — Claude Code can even launch sub-agents for complex tasks.

This flexibility makes Claude Code a versatile tool that fits neatly into your existing development workflows.
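
For instance, here are two illustrative invocations (the file names are placeholders; -p runs Claude Code in headless/print mode):

```bash
# Run a one-off prompt in headless mode and print the result
claude -p "Summarize the failing tests in this repo"

# Pipe data into Claude Code and capture its output
cat build.log | claude -p "Explain the likely root cause of this build failure" > triage.md
```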


2. Using Images & Screenshots Effectively

Images are a surprisingly powerful part of your interaction with Claude Code:

  • Drag and drop images directly into the terminal on macOS.
  • Use Shift + Command + Control + 4 to take a screenshot and paste it into Claude Code with Control + V (note: it’s Control, not Command).
  • Use images for mockups—design your UI, paste the mockup, then ask Claude Code to build the interface.
  • Use screenshots as a feedback loop: generate code, screenshot the output, then feed it back to Claude for iterative improvement.

For automation, try integrating Puppeteer to programmatically take screenshots of your app and save them locally—a huge time saver.


3. Automating with Puppeteer & MCP Servers

Claude Code can act as both an MCP (Model Context Protocol) server and client, enabling seamless integration with various tools and APIs.

  • Use Puppeteer MCP server locally to automate browsing and screenshots.
  • Connect Claude Code to databases like Postgres via MCP servers.
  • Use MCP servers that wrap APIs for real-time data fetching.
  • Some dev tools (e.g., Cloudflare) provide MCP servers with up-to-date documentation, which Claude Code can leverage to build against the latest docs.

This architecture opens up powerful ways to keep your code in sync with external data and live documentation.
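
As an illustration, registering the Puppeteer MCP server with Claude Code typically looks something like this; the exact package name and `claude mcp add` syntax may differ by version, so treat it as a sketch:

```bash
# Register the Puppeteer MCP server for the current project
claude mcp add puppeteer -- npx -y @modelcontextprotocol/server-puppeteer
```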


4. Fetching URLs & Leveraging Documentation

Claude Code can fetch and parse URLs, which unlocks real-world knowledge for your coding projects.

For example, Greg built a game for his daughter by feeding Claude Code the rules from unorules.com instead of relying on generic training data. This ensures accuracy and relevance in the generated logic.


5. claude.md: Your Project’s Instruction Manual

One of the most important tools to customize your Claude Code experience is the claude.md file.

  • It’s a prompt loaded with every request, containing project-specific instructions like bash commands, style guides, linting rules, test commands, and repo etiquette.
  • Initialize it automatically with /init in Claude Code, which scans your project and creates a baseline.
  • Add instructions on the fly using the hash sign (#).
  • Use global claude.md files in your home directory for cross-project defaults, and subdirectory-specific claude.md files for finer control.
  • Regularly refactor your claude.md to keep it concise and relevant—avoid redundant or irrelevant information.
  • Use Anthropic’s prompt optimizer tool to improve your claude.md files’ effectiveness.

This file acts like a "living guidebook" for Claude Code, tailoring its behavior to your exact needs.
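
For reference, a claude.md might contain entries along these lines (the contents are illustrative, not a required template):

```markdown
# Bash commands
- npm run build: build the project
- npm run test: run the test suite

# Code style
- Use ES modules (import/export), not CommonJS (require)

# Workflow
- Run the linter and typechecker after a series of edits
- Prefer running a single test file over the whole suite while iterating
```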


6. Slash Commands for Reusable Prompt Templates

Slash commands are prompt templates stored in the .claude/commands folder. They let you:

  • Create reusable commands for common tasks like refactoring, linting, or reviewing pull requests.
  • Pass command line arguments that get interpolated into the prompt template, making these commands dynamic and context-aware.

This feature saves time and standardizes workflows across your projects.
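
For instance, a hypothetical command file at .claude/commands/fix-issue.md might look like the snippet below, with $ARGUMENTS replaced by whatever you type after the command:

```markdown
Please analyze and fix GitHub issue #$ARGUMENTS.

1. Use `gh issue view $ARGUMENTS` to read the issue details.
2. Locate the relevant code and implement a fix.
3. Run the tests, then commit with a descriptive message.
```

Invoking the command with an issue number (for example, /fix-issue 123) interpolates that value into the template.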


7. UI Tips: Navigating Claude Code Smoothly

  • Use Tab to autocomplete files and directories—Claude Code performs better with specific file context.
  • Don’t hesitate to hit Escape to interrupt Claude when it goes off track.
  • Use the undo function to revert the last action and guide Claude back on course.

Interrupting Claude early saves time and frustration, keeping your sessions productive.


8. Version Control: Your Safety Net

One common failure mode is Claude Code becoming too ambitious and making breaking changes.

To avoid this:

  • Use version control religiously.
  • Ask Claude Code to commit after every major change, and let it write your commit messages—they’re often impressively clear and descriptive.
  • Revert often and don’t hesitate to clear conversation history to start fresh with more precise instructions.
  • Install the GitHub CLI for seamless integration, or use the GitHub MCP server as an alternative.
  • Let Claude Code help with PR creation and code reviews.

Version control keeps your projects stable and your workflow safe.


9. Managing Context & Conversation History

Claude Code manages context with an auto-compacting feature that trims conversation history to stay within token limits.

Tips for managing this:

  • Keep an eye on the auto-compact indicator.
  • Compact at natural breakpoints, such as after commits or task completions.
  • Consider clearing history entirely to work with a fresh context.
  • Use scratchpads or GitHub issues as external memory tools to supplement Claude’s context.
  • Monitor token usage carefully if paying per token.

Using these strategies will help maintain Claude Code’s responsiveness and accuracy.


10. Managing Costs Efficiently

Claude Code can be expensive, so cost management is crucial.

  • Upgrade to Claude Max plans ($100 or $200 tiers) to get bundled token usage and better value.
  • Use Claude Code’s OpenTelemetry support to track usage and costs across teams.
  • Integrate with tools like Datadog to build dashboards monitoring token consumption.
  • For detailed cost management, check out Martin AMP’s blog post linked below.

In Greg’s experience, moving to a $100 Claude Max plan made the cost manageable and worthwhile.


Final Thoughts

Claude Code is a powerful and flexible coding assistant when used effectively. From CLI tricks to advanced MCP server integrations, image-based workflows, and cost management, the tips above should help you harness the full potential of Claude Code.

If you want to dive deeper, make sure to read Boris Cherny’s original post and explore the additional resources linked below.

Happy coding!


Useful Links

  • Boris Cherny’s Claude Code Pro Tips [Link]
  • Anthropic Prompt Optimizer Tool [Link]
  • Martin AMP’s Cost Management Blog Post [Link]

Feel free to leave your questions or share your own Claude Code tips in the comments!

How to Build a Real-Time Web Research Bot Using Anthropic’s New Web Search API

In the rapidly evolving world of AI, staying updated with the latest information is crucial. Traditional AI models often rely on static training data, which can be months or even years old. That’s why Anthropic’s recent announcement of web search capabilities via the Anthropic API is a game changer. It enables developers to build AI-powered research bots that fetch real-time data from the web — without relying on external scraping tools or additional servers.

In this blog post, we’ll explore how to leverage Anthropic’s new web search API to build a real-time research assistant in Python. We’ll cover a demo example, key implementation details, useful options, and pricing considerations.


What’s New with Anthropic’s Web Search API?

Before this update, building a research bot that accessed up-to-date information meant integrating third-party tools or managing your own scraping infrastructure. Anthropic’s web search API simplifies this by letting the model query the web directly through the API, keeping everything streamlined in one place.

For example, imagine wanting to know about a breaking news event that happened just hours ago — such as the recent selection of the first American Pope (recorded May 8th, 2025). Since this information isn’t part of the AI’s training data, it needs to perform a live web search to generate an accurate and current report.


Building Your First Web Search Request: A “Hello World” Example

Getting started is straightforward. Here’s an overview of the basic steps using the Anthropic Python client:

  1. Set your Anthropic API Key as an environment variable. This allows the client to authenticate requests seamlessly.

  2. Instantiate the Anthropic client in Python.

  3. Send a message to the model with your question. For example, “Who is the new pope?”

  4. Add the tools parameter with web_search enabled. This tells the model to access live web data.

Here’s a snippet summarizing this:

```python
from anthropic import Anthropic

# The client reads the ANTHROPIC_API_KEY environment variable automatically
client = Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-latest",  # example model name; use any current model that supports web search
    max_tokens=1000,
    messages=[{"role": "user", "content": "Who is the new pope?"}],
    tools=[{"type": "web_search_20250305", "name": "web_search"}],  # enable the web search tool
)

# The response content is a list of blocks; print the text blocks
for block in response.content:
    if block.type == "text":
        print(block.text)
```

Without web search enabled, the model might respond with outdated information (e.g., “As of the last update, it was Pope Francis”). But with web search active, it fetches the latest details, complete with citations from recent news sources.


Understanding the Web Search Response

The response from the API when using web search is richer and more complex than standard completions. It includes:

  • Initial text from the model indicating it is performing a search.
  • Tool usage details showing search queries and pages found.
  • Encrypted content blocks representing scraped snippets (Anthropic encrypts these to avoid direct scraping exposure).
  • Summarized text with citations — a distilled answer referencing URLs, page titles, and quoted text snippets.

Parsing this response can be a bit challenging. The Python client lets you convert the response to a dictionary or JSON format for easier inspection.

For example, you can iterate over the response’s message blocks, extract the main text, and gather citations like URLs and titles. This lets you assemble a report with clickable sources, ideal for building research assistants or automated reporting tools.
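
Here’s a rough sketch of that parsing, continuing from the earlier example; exact attribute names can vary between client versions, so treat it as illustrative:

```python
report_lines = []
sources = []

for block in response.content:
    if block.type == "text":
        report_lines.append(block.text)
        # Text blocks produced from web search results carry citation metadata
        for citation in (getattr(block, "citations", None) or []):
            sources.append(f"- [{citation.title}]({citation.url})")

print("".join(report_lines))
print("\nSources:")
print("\n".join(sources))

# For raw inspection, dump the entire response to a plain dictionary
print(response.model_dump())
```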


Improving Performance with Streaming

Waiting 20 seconds for a full response might be too slow for some applications. Anthropic supports streaming responses through an asynchronous client.

Using the async client, you can receive partial results as they become available and display them in real-time, improving user experience in chatbots or interactive assistants.
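
A minimal sketch of that pattern with the async client (the model name is an example):

```python
import asyncio
from anthropic import AsyncAnthropic

client = AsyncAnthropic()  # reads ANTHROPIC_API_KEY from the environment

async def main() -> None:
    async with client.messages.stream(
        model="claude-3-7-sonnet-latest",
        max_tokens=1000,
        messages=[{"role": "user", "content": "Who is the new pope?"}],
        tools=[{"type": "web_search_20250305", "name": "web_search"}],
    ) as stream:
        async for text in stream.text_stream:  # print partial text as it arrives
            print(text, end="", flush=True)

asyncio.run(main())
```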


Customizing Search Domains: Allowed and Blocked Domains

Anthropic’s API offers parameters to restrict searches to certain domains (allowed_domains) or exclude others (blocked_domains). For example, if you only want information from Reuters, you can specify that in your request:

```python
tools=[{
    "type": "web_search_20250305",
    "name": "web_search",
    "allowed_domains": ["reuters.com"],
}]
```

However, note that some domains are off-limits due to scraping restrictions (e.g., BBC.co.uk, Reddit). Trying to search those will result in an error.

You can use either allowed_domains or blocked_domains in a single request, but not both simultaneously.


Pricing Overview: How Much Does It Cost?

Anthropic’s web search API pricing stands out as very competitive:

  • $10 per 1,000 searches plus token usage for the normal API calls.
  • Compared to OpenAI’s web search pricing of $30 to $50 per 1,000 calls, Anthropic’s is more affordable.

The pricing difference might be due to different search context sizes or optimizations, but it makes Anthropic a cost-effective choice for integrating live web data.


Wrapping Up

Anthropic’s new web search API opens exciting possibilities for developers building AI applications that require fresh, real-time data from the web. With simple integration, customizable domain filters, streaming support, and competitive pricing, it’s a compelling option for research bots, news aggregators, and knowledge assistants.

If you want to try this out yourself, check out the Anthropic Python client, set your API key, and start experimenting with live web queries today!


Useful Links


Author: Greg
Recorded May 8th, 2025

Feel free to leave comments or questions below if you want help building your own web research bot!

How to Build a Remote MCP Server with Python and FastMCP: A Step-by-Step Guide

In the rapidly evolving world of AI and large language models (LLMs), new tools and integrations continue to push the boundaries of what’s possible. One of the most exciting recent developments is Anthropic’s announcement of remote MCP (Model Context Protocol) server support within Claude, an AI assistant platform. This breakthrough means that users can now connect to MCP servers simply by providing a URL—no complex local setup or developer-level skills required.

In this blog post, we’ll explore what remote MCP servers are, why they matter, and how you can build your own using Python and the FastMCP framework. Whether you’re a developer or an AI enthusiast, this guide will give you the knowledge and tools to create powerful, accessible AI integrations.


What is an MCP Server?

An MCP server serves as a bridge between large language models and external tools or data sources. This enables LLMs like Claude to perform tasks or fetch real-time information beyond their static knowledge base. Traditionally, MCP servers were local setups requiring developer expertise to configure and maintain, limiting their accessibility.


Why Remote MCP Servers are a Game-Changer

Remote MCP servers allow users to connect to MCP servers hosted anywhere via a simple URL. This innovation dramatically lowers the barrier to entry, making it easier for less technical users to enhance their AI assistants with custom tools. For example, a remote MCP server can provide up-to-the-minute data like the current time, weather, or stock prices—capabilities that standard LLMs often lack due to knowledge cutoffs.

Claude’s new integration support means you can now:

  • Add MCP servers by entering a URL in Claude’s settings.
  • Automatically discover available tools and their parameters.
  • Seamlessly invoke those tools during conversations with Claude.

This marks a significant step toward more interactive, capable AI assistants.


Demo: Adding a Current Time Tool to Claude

To illustrate the power of remote MCP servers, here’s a quick example:

  1. Problem: Claude cannot provide the current time because its knowledge is frozen at a cutoff date.
  2. Solution: Create a remote MCP server that returns the current date and time.
  3. Integration: Add the MCP server URL to Claude’s settings under “Integrations.”
  4. Usage: Ask Claude, “What is the current time?” Claude recognizes it has access to the time tool, invokes it with the correct parameters (like time zone), and returns an accurate, up-to-date answer.

This simple enhancement vastly improves Claude’s utility for real-world tasks.


Building Your Own Remote MCP Server in Python with FastMCP

Step 1: Set Up Your Environment

Begin by visiting gofastmcp.com or the FastMCP GitHub repository for documentation and code examples.

Install FastMCP via pip:

```bash
pip install fastmcp
```

Step 2: Create the MCP Server

Here’s a basic MCP server script that provides the current date and time:

```python
from datetime import datetime

import pytz
from fastmcp import FastMCP

server = FastMCP(name="DateTime Server", instructions="Provides the current date and time.")

@server.tool(name="current_datetime", description="Returns the current date and time given a timezone.")
def current_datetime(time_zone: str = "UTC") -> str:
    try:
        tz = pytz.timezone(time_zone)
        now = datetime.now(tz)
        return now.strftime("%Y-%m-%d %H:%M:%S %Z")
    except Exception as e:
        return f"Error: {str(e)}"

if __name__ == "__main__":
    server.run()
```

Step 3: Run Locally and Test with MCP Inspector

FastMCP offers an MCP Inspector tool for debugging and testing your server:

```bash
fastmcp dev my_server.py
```

This GUI lets you invoke your tools and view responses directly, providing a deterministic way to debug interactions with your MCP server.

Step 4: Deploy as a Remote MCP Server

To make your MCP server accessible remotely, you need to use a transport protocol suitable for networking.

  • The default stdin/stdout transport works locally.
  • For remote access, use Server-Sent Events (SSE) transport or, soon, the more efficient streamable HTTP transport (currently in development).

Modify your server code to use SSE transport and deploy it on a cloud platform such as Render.com. Assign a custom domain (e.g., datetime.yourdomain.ai) pointing to your deployment.
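
In FastMCP, that switch is typically a change to the run call; here is a sketch, with the host and port as placeholders:

```python
if __name__ == "__main__":
    # Serve over SSE so remote clients like Claude can connect via a URL
    server.run(transport="sse", host="0.0.0.0", port=8000)
```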

Once deployed, add your server URL in Claude’s integrations, and it will be ready for use remotely.


The Future of MCP Servers

The adoption of remote MCP servers is poised to explode, as they become far easier to create and integrate with AI assistants like Claude. This will likely spur more companies to launch their own MCP servers, offering a diverse ecosystem of tools accessible via URLs.

For developers, this is an exciting time to dive into MCP and FastMCP development. Even those with limited coding experience can now build meaningful AI enhancements quickly.


Final Thoughts

  • MCP servers empower AI models to access real-time data and perform specialized tasks.
  • Remote MCP servers eliminate the technical hurdles of local setup.
  • FastMCP and Python provide a straightforward path to building your own MCP servers.
  • Claude’s new integrations make adding MCP servers as simple as entering a URL.
  • The future will see more widespread adoption and innovation in MCP technology.

If you want to stay ahead in AI tooling, start experimenting with FastMCP today. Build your own remote MCP server, connect it to Claude, and unlock new capabilities for your AI assistant.

Happy coding!


Resources


Have questions or want to share your MCP server projects? Drop a comment below or connect with me on Twitter!

Why Claude Code Outshines OpenAI’s Codex: A Developer’s Perspective

Hey there! I’m Greg, a developer who has spent hundreds of dollars experimenting with AI-powered coding assistants over the past few months. Lately, I’ve made Claude Code my go-to coding agent, especially when starting new projects or navigating large, older codebases. With the recent launch of OpenAI’s Codex, I was eager to give it a shot and pit it against Claude Code in a head-to-head comparison. Spoiler alert: Codex fell short in several key areas.

In this blog post, I’ll share my firsthand experience with both tools, highlight what OpenAI needs to improve in Codex, and explain why developer experience is crucial for AI coding assistants to truly shine.


First Impressions Matter: The Developer Experience

Right from the start, Codex’s developer experience felt frustrating. Although I have a Tier 5 OpenAI account—which is supposed to grant access to the latest GPT-4 models—Codex informed me that GPT-4 was unavailable. Instead of gracefully falling back to a supported model, the system simply failed when I tried to use GPT-4 Mini.

To make matters worse, the interface for switching models was confusing. I had to use a /help command to discover a /model command with a list of options ranging from GPT-3.5 to Babbage and even DALL·E (an image generation model that doesn’t belong here). Most of these options didn’t work with the product, so I was left guessing which model to pick. This was a baffling experience—why show options that don’t actually work? It felt like a basic user experience bug that should have been caught during testing.

For developers, the first interaction with a tool should be smooth and intuitive—no guesswork, no dead ends. Sadly, Codex made me jump through unnecessary hoops just to get started.


API Key Management: A Security and Usability Concern

Claude Code shines in how it manages API keys. It securely authenticates you via OAuth, then automatically stores your API key in a local config file. This seamless process means you can focus on coding without worrying about environment variables or security risks.

Codex, on the other hand, expects you to manually set your OpenAI API key as a global environment variable or in a .env file. This approach has several drawbacks:

  • Security Risk: Having a global API key in your environment exposes it to any local script or app, increasing the chances of accidental leaks.
  • Lack of Separation: You can’t easily dedicate a separate API key for Codex usage, which complicates cost tracking and project management.
  • Inconvenience: Managing environment variables across multiple projects can become tedious.

Claude Code’s approach is more secure, user-friendly, and better suited for developers juggling multiple projects.


Cost Management: Transparency and Control Matter

AI coding assistants can get expensive, and managing usage costs is critical. Claude Code offers helpful features to keep your spending in check:

  • /cost Command: View your session’s spend anytime.
  • /compact Command: Summarize and compress chat history to reduce token usage and lower costs.

Codex lacks these features entirely. There is no way to check how much you’ve spent during a session or to compact conversation history to reduce billing. This opacity can lead to unpleasant surprises on your bill and makes cost management stressful.


Project Context Awareness: Smarter by Design

One of Claude Code’s standout features is its ability to scan your project directory on startup, building an understanding of your codebase. It lets you save this context into a claude.md file, so it doesn’t have to reanalyze your project every time you launch the tool. You can even specify project-specific preferences and coding conventions.

Codex, by contrast, offers zero context-awareness upon startup. It simply opens a chat window with your chosen model and waits for input. This puts the burden on the developer to manually introduce project context, which is inefficient and time-consuming.

For a coding agent, understanding your existing codebase from the get-go is a game-changer that Codex currently misses.


User Interface: Polished vs. Minimal Viable

Claude Code’s command-line interface (CLI) is thoughtfully designed with clear separation between input and output areas, syntax highlighting, and even color schemes optimized for color-blind users. The UI feels intentional, refined, and comfortable for extended use.

Codex feels like a bare minimum implementation. Its output logs scroll continuously without clear visual breaks, it lacks syntax highlighting, and it provides only rudimentary feedback like elapsed wait time messages. This minimalism contributes to a frustrating user experience.


Stability and Reliability: Crashes Are a Dealbreaker

Claude Code has never crashed on me. Codex, unfortunately, has crashed multiple times, especially when switching models. Each crash means reconfiguring preferences and losing all previous session context—a major productivity killer.

Reliability is table stakes for developer tools, and Codex’s instability makes it feel unready for prime time.


Advanced Features: MCP Server Integration

Claude Code supports adding MCP (Model Context Protocol) servers, enabling advanced use cases like controlling a browser via Puppeteer to close the feedback loop by viewing changes in real-time. This kind of extensibility greatly expands what you can do with the tool.

Codex currently lacks support for MCP servers, limiting its potential for power users.


The Origin Story: Why Polished Tools Matter

During a recent Claude Code webinar, I learned that Claude Code began as an internal tool at Anthropic. It gained traction within the company, prompting the team to polish it extensively before releasing it publicly. This internal usage ensured a high-quality, battle-tested product.

In contrast, Codex feels like it was rushed to market with minimal internal adoption and testing. With just a couple of weeks of internal use and intentional polish, Codex could improve dramatically.


Final Thoughts: Potential vs. Reality

I have not even touched on the core coding ability or problem-solving skills of the models underlying Codex. It’s possible that, once the bugs and UX issues are ironed out, Codex could outperform Claude Code at a lower cost.

But right now, the frustrating user experience, instability, poor key management, and lack of cost transparency prevent me from fully engaging with Codex. A well-designed developer experience isn’t just a nice-to-have; it’s essential to unlocking the true power of AI coding assistants.


What OpenAI Needs to Do to Bring Codex Up to Par

  1. Graceful Model Fallback: Automatically switch to a supported model if the default is unavailable.
  2. Clear and Accurate Model List: Only show models that actually work with the product.
  3. Secure and Convenient API Key Management: Implement OAuth or a dedicated API key setup for the tool.
  4. Cost Transparency: Add commands or UI elements to track session spending and manage token usage.
  5. Project Context Awareness: Automatically scan and remember project details to save time and costs.
  6. Stable, Polished UI: Improve the CLI interface with clear input/output zones, syntax highlighting, and accessibility options.
  7. Reliability: Fix crash bugs to ensure smooth, uninterrupted workflows.
  8. Advanced Feature Support: Enable MCP servers or equivalent extensibility to boost functionality.

Conclusion

AI coding assistants hold incredible promise to revolutionize software development, but only if they respect developers’ time, security, and workflows. Claude Code exemplifies how thoughtful design and polish can make a tool truly empowering.

OpenAI’s Codex has potential, but it needs significant improvements in developer experience and stability before it can compete. I look forward to seeing how it evolves and hope these insights help guide its growth.


Thanks for reading! If you’ve tried either Claude Code or Codex, I’d love to hear about your experiences in the comments below. Happy coding!

Cursor vs. Claude Code: Which AI Coding Agent Reigns Supreme?

The emergence of AI-powered coding agents has been one of the most exciting developments for software developers recently. Just this past week, two major players—Cursor and Anthropic’s Claude Code—launched their coding agents simultaneously. Intrigued by which tool might better serve developers, I decided to put both to the test on a real-world Rails application running in production. Here’s a detailed breakdown of my experience, comparing their user experience, code quality, cost, autonomy, and integration with the software development lifecycle.


The Test Setup: A Real Rails App with Complex Needs

My project is a Rails app acting as an email "roaster" for GPTs—essentially bots that process and respond to emails with unique personalities. The codebase is moderately complex and had been untouched for nine months, making it perfect for testing AI assistance on:

  1. Cleaning up test warnings and updating gem dependencies.
  2. Replacing LangChain calls with direct OpenAI API usage.
  3. Adding support for Anthropic’s API.

Both agents used the same underlying model—Claude 3.7 Sonnet—to keep the comparison fair.


User Experience (UX): Terminal Simplicity vs. IDE Integration

Cursor:
Cursor’s agent is integrated into a fully featured IDE and has recently made the agent the primary way to interact with the code. While this offers powerful context and control, I found the interface occasionally clunky—multiple “accept” buttons, cramped terminal panes, and confusing prompts requiring manual clicks. The file editor pane often felt unnecessarily large given that I rarely needed to manually tweak files mid-action.

Claude Code:
Claude Code operates as a CLI tool right in the terminal. You run commands from your project root, and it prompts you with simple yes/no questions to confirm each action. This single-pane approach felt clean, intuitive, and perfectly suited for delegating control to the agent. The lack of a GUI was a non-issue given the agent’s autonomy.

Winner: Claude Code for its streamlined, efficient command-line interaction.


Code Quality and Capability: Documentation Search Matters

Both agents produced similar code given the same model, but Cursor’s ability to search the web for documentation gave it a notable edge. When adding Anthropic support, Claude Code struggled with API syntax and ultimately wrote its own HTTP implementation. Cursor, however, seamlessly referenced web docs to get the calls right, rescuing itself from dead ends.

Winner: Cursor, thanks to its web search integration.


Cost: Subscription vs. Metered Pricing

  • Claude Code: Approximately $8 for 90 minutes of work on these tasks. While reasonable, costs could add up quickly for frequent use.
  • Cursor: $20/month subscription includes 500 premium model requests; I used less than 10% of that for this exercise, roughly costing $2.

Winner: Cursor, offering more usage for less money and a simpler subscription pricing model.


Autonomy: Earning Trust with Incremental Permissions

Claude Code shines here with a granular permission model. Initially, it asks for approval on commands; after repeated approvals, it earns trust to perform actions autonomously. By the end of my session, it was acting independently with minimal prompts.

Cursor, in contrast, lacks this “earned trust” feature. It repeatedly asks for confirmation without a way to grant blanket permissions. Given the nature of coding agents, I believe this is a feature Cursor should adopt soon.

Winner: Claude Code for smarter incremental permissioning.


Integration with Software Development Lifecycle

I emphasize test-driven development (TDD) and version control (Git), so how each agent handled these was crucial.

  • Claude Code: Excellent at generating and running tests before coding features, ensuring quality. Its commit messages were detailed and professional—better than any I’ve written myself. Being a CLI tool, it felt natural coordinating commands and output.

  • Cursor: While it offers a nice Git UI within the IDE and can autogenerate commit messages, these were more generic and less informative. Its handling of test outputs in a small terminal pane felt awkward.

Winner: Claude Code, for superior test and version control workflow integration.


Final Verdict: Use Both, But Lean Towards Claude Code—for Now

Both agents completed all three complex tasks successfully—a testament to how far AI coding assistants have come. It’s remarkable to see agents not only write code but also tests and meaningful commit messages that improve project maintainability.

That said, this is not a binary choice. I recommend developers use both tools in tandem:

  • Use Cursor for day-to-day coding within your IDE, benefiting from its subscription model and web documentation search.
  • Use Claude Code for command-line driven tasks that require incremental permissions, superior test integration, and detailed commit management.

For now, I personally prefer Claude Code for its user experience, autonomy model, and lifecycle integration. But Cursor’s rapid iteration pace means it will likely close these gaps soon.


Takeaway for Developers

If you’re a software developer curious about AI coding agents:

  • Get the $20/month Cursor subscription to familiarize yourself with agent-assisted coding.
  • Experiment with Claude Code in your terminal to experience granular control and trust-building autonomy.
  • Use both to balance cost, control, and convenience.
  • Embrace AI coding agents as powerful collaborators that can help you break through stalled projects and increase productivity.

The future of software development is here—and these AI coding agents are just getting started.


Have you tried Cursor or Claude Code? Share your experiences and thoughts in the comments below!

📹 Video Information:

Title: Obsidian + MCP + SuperWhisper: Write FASTER with AI
Duration: 05:25

Short Summary:

In the video, Greg explains how combining Claude Desktop, MCP servers, and Obsidian streamlines workflows, particularly for writing tasks. He describes using Claude Desktop (which can act as an MCP client) to interact with his Obsidian notes via a file system MCP server. By dictating responses to interview questions using Super Whisper, having Claude organize and clean up the responses, and then making final edits in Obsidian, Greg significantly reduced the effort and time needed to complete a large writing project. He suggests that this integration, possibly enhanced with tools like Git for version control, is highly effective for managing and editing notes or interview responses.