Daily Learnings: Thu, Jun 19, 2025
Let us revere, let us worship, but erect and open-eyed, the highest, not the lowest; the future, not the past! — Charlotte Perkins Gilman
Experimenting with Claude Code
Some notes on experimenting more with Claude Code in a couple of my repos to speed up my development process.
Pricing Options for Claude Code
Claude Code is available via direct API billing as well as the monthly subscription plans.
You actually don’t have to sign up for a paid, monthly Claude plan. You can also plug in an API key and get billed purely based on usage. Depending on your usage, it might be better to go one way or the other, though.
The Claude Code Pro plan comes with some limits that you could run into, and the API billing option lets you step around those limits. But at a certain point, you might run over $100–$200/month in API usage charges. The Claude Code Max plan then becomes attractive.
For purposes of today’s experiments, I used a direct API connection to see what the overall cost of a day’s worth of work would look like. I’ll compare to the $20/mo & $100/mo plans (and their limits) later.
Notes on Pricing
Today’s work totals based on code sessions:
| Cost | Task |
|---|---|
| $1.12 | Plan and generate first spec file |
| $1.07 | Review first spec file and propose solutions for updated spec file, and build second spec file |
| ~$1.00 | Miscellaneous small Claude Code sessions for testing things and connecting to GitHub |
| $5.41 | Building an entire feature, sourced from a GitHub issue, with some feedback applied. Didn’t finish the feature, but got close. |
| $2.89 | Updating spec for better AI generation |
| $11.49 | Total |
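As a sanity check on the table above, and to project a rough monthly figure, here's a quick sketch. The 20-workday projection is my own assumption, not a measured number:

```python
# Per-session costs from the table above (the ~$1.00 entry is approximate).
session_costs = [1.12, 1.07, 1.00, 5.41, 2.89]

daily_total = sum(session_costs)
print(f"Daily total: ${daily_total:.2f}")  # matches the $11.49 total in the table

# Hypothetical projection: ~20 working days per month at this usage level.
workdays_per_month = 20
monthly_estimate = daily_total * workdays_per_month
print(f"Projected monthly API cost: ${monthly_estimate:.2f}")
```

At that rate, a month of similar days would land well over the $100–$200 range mentioned above, which is exactly where the flat-rate plans start to look attractive.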
Based on the above table, I’m really curious to see what sorts of limits I’d run into with the Claude Pro plan, instead of direct API billing.
Setting up Spec Files for the Plan > Spec > Code workflow
From my notes yesterday, I’ve started to test this Plan > Spec > Code idea with Claude Code: clone the repo I referenced, study its architecture, and then apply that approach to a SOLVD repo to generate specs.
Generating Spec Files
I started by having Claude assist with creating an initial spec file for the repo:
- Set up the SOLVD repo with the appropriate `CLAUDE.md`, `.claude/settings.json`, and `.claude/commands/prime.md` files, taking much from the original repo and applying it to our use cases
- Ran the `/prime` command (see yesterday’s post for more info on this custom slash command)
  - Note: In our repo we have a really large file (43,060 tokens), which is over the max allowed 25k tokens for a given interaction. Claude skipped this file but continued to prime with the context of the rest of the codebase.
  - I ended up removing that file and others from the `/prime` command since it really was only needed for certain tasks.
- Switch to plan mode
- Prompted the AI to review our code in the repo and suggest a new spec file outlining our approach. See below for the example prompt that I used
- Reviewed the plan and switched to execute mode
- Reviewed the spec file and iterated on a second spec file through additional prompting
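For reference, a custom slash command like `/prime` is just a markdown file under `.claude/commands/`. Our actual file isn't reproduced here; this is a hypothetical sketch of what such a command file can look like:

```markdown
<!-- .claude/commands/prime.md — hypothetical example, not our actual file -->
Read the following files to build context for this repository:

- README.md
- specs/automation-spec-1.md

Then summarize the project structure, key conventions, and how the
automations are organized, so you are ready for follow-up tasks.
```

Keeping very large files (like the 43,060-token one mentioned above) out of this list keeps priming cheap and avoids the per-read token limit.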
Prompt to generate a new spec file
Let's plan @specs/automation-spec-1.md. With your understanding of our codebase, I want to generate a spec file to use when developing more automations in this repository. Currently, we have no outlined spec file aside from what is included in our README.md file. Please ensure that this spec is thorough and covers all that you would need to know in order to follow a consistent pattern, using best practices, to develop additional automations in our repository. Note that the code in our codebase today isn't 100% consistent. However, I want this spec file to outline best practices from the cleanest and best-kept code. This spec file could then be used to prompt additional refactoring of older, legacy code as well. Think hard about this.
Additional Notes on the Flow
- After a couple of iterations of working plans and generating one spec file, I noticed Claude hung for quite some time, waiting to build the second spec file
- I don’t know if this is because I got throttled with tokens, or if Claude was just hanging…
- Actually, I just realized that it was just bad UI. It looked like after exiting plan mode, Claude was waiting on me to encourage it to continue forward?
Plan > Spec > GH Issue > Code Workflow
After getting a good spec file in place, I decided to try the following new, high-level workflow:
- Create a GH issue with the desired bugfix, feature, or enhancement in the related repo
- In VS Code, launch Claude Code
- Verify that it can see my GH issue by referring to it by number
  - E.g., “What can you tell me about issue #84?”
  - I did this here to avoid unnecessary token usage with the `/prime` command in case I couldn’t connect to GH at this point. It’s a good thing I did, as I wasn’t connected to my repo’s GH yet. See below for what I had to do to enable this.
- Using the `/prime` command, get Claude Code up to speed on the project context and the most recent spec file
  - The most recent spec file is now listed in the `/prime` slash command markdown file
- Switch to Plan Mode
- Ask Claude to review a specific issue in GH and propose a plan for building
- This assumes that I have Claude Code linked to the GH repo correctly
- Review the plan
- Move to implementation
Notes on Actually Testing This Workflow
- The prompt I used for the initial plan:
Let's make a plan to address the requested enhancements found in issue #84. Please review that issue in our github repo and propose a plan for enhancing the codebase accordingly. think hard about this, and make sure that your plan adheres to our specs found in @specs/development-spec-02.md.
- The first plan generated was pretty comprehensive! Though it required some tweaking. I asked Claude to review some hosted documentation online, and it then asked for permission to query the docs at the URL, which I gave it permission to do
- Good to know it will do this in plan mode
- After some tweaking to the initial plan, I liked the new plan, and had it move forward
- I love that Claude generates its own todo list based on the plan and then works against the todo list
- After the initial plan generation, tweaking the plan a bit, and having Claude code out the solution (without feedback), I was notified that I was at about 15% of the context left before auto-context-compaction would begin
- After I asked for one additional test file to be created, I was at about 1% of context remaining, but:
- The reason I asked Claude for the additional test file was because of some code I reviewed that seemed a little suspect. I asked Claude to write a test to validate the code, giving it specific parameters. In writing the test, it auto-ran the test, discovered the error that I was concerned about, fixed the errors, ran the test again, and then finalized.
- It would be nice if Claude Code, running in the embedded terminal in VS Code, could auto-open the edited files in the other editor group, similar to how Cline does.
- This repo is a Python project; it would be nice if Claude would recognize the Pylance issues and work to fix them
- The code generated for the overall workflow was good, though it did require some reviewing for performance issues, logic issues, and further enhancements.
- I identified an `n+1` query problem involving API calls in a loop
- I spent about 15–20 mins reviewing the code, making updates, and then asking Claude to make further updates
  - _I wonder if leveraging an MCP server like Zen might assist with this. I’ll experiment with it + Gemini 2.5 Pro tomorrow._
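For anyone unfamiliar, the `n+1` pattern I caught looked roughly like this. The sketch below is a simplified illustration with hypothetical `fetch_details` / `fetch_details_batch` calls, not the actual SOLVD code:

```python
# Hypothetical illustration of the n+1 problem, not the real code under review.

def fetch_details(item_id):
    """Stand-in for a remote API call; imagine ~100ms of network latency each."""
    return {"id": item_id, "detail": f"detail-{item_id}"}

def fetch_details_batch(item_ids):
    """Stand-in for a batch endpoint that returns many records in one call."""
    return [{"id": i, "detail": f"detail-{i}"} for i in item_ids]

item_ids = [1, 2, 3, 4, 5]

# The n+1 shape: one API call per item, so N round trips inside the loop.
slow = [fetch_details(i) for i in item_ids]

# The fix: one batched call (or a single query with an IN clause), one round trip.
fast = fetch_details_batch(item_ids)

assert slow == fast  # same data, far fewer network round trips
```

The generated code compiled and produced correct data either way; the cost only shows up as latency once the loop runs against a real network, which is why it's easy for a code generator (or a reviewer) to miss.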
Connecting Claude Code to GitHub
- Open Claude Code in your repo
- Run `/install-github-app`
- Follow all of the prompts to finalize setting it up, including merging the auto-created PR
Important Note: By default, this will auto-install a Claude-based code reviewer as well. You can remove this if you want by unchecking that option in the dialog.
Final Thoughts on Code Quality
- The initial generation of the spec file ended up being very verbose and introducing a lot of things that I didn’t actually want
- This was definitely my fault, as my prompt to generate the spec file included things like “propose best practices” and didn’t include any relevant examples
- The scaffolded code for the first pass at the entire feature, driven by the GitHub issue description, was honestly impressive. It got about 80% of the way there
- BUT the wheels came off a bit when I attempted to then prompt Claude to generate the remaining 20%
- The initial review and prompt resulted in a quick fix, which was a win
- After reviewing the code in the overall orchestrator file which contained most of the logic, the code likely worked (or almost did), but it wasn’t performant, and it didn’t adhere to standards and consistency
- I attempted to prompt Claude to fix these things, and that’s where it ended up making a larger “mess” (in my opinion) than it would have been if I had just dropped in and coded some things up
- My main takeaway: I’m optimistic about getting this tool even better for us
  - There are some tweaks to the spec file and experimentation with the `CLAUDE.md` files that I think could yield some really great fruit
  - I want to experiment with running the Zen MCP server to see if it can increase the quality and consistency of the code generated
- I’m going to experiment with the flat $20/mo plan to see what cost looks like on a day-to-day basis
- Moving forward, I’ll likely hop in to do some manual code cleanup faster, rather than attempting to prompt Claude into small adjustments here and there