croppie holidays
you're getting claude code for christmas
it’s no secret that i get antsy when i don’t have an active project. this year, with a new career under my belt and finally calling my own shots, i decided it was time to do a little open source work with my holiday time.
for Beep, i’ve been using Croppie with stimulus to adjust and edit images. auto insurance and crms both involve some degree of image editing and storage - not cascading walls of images, but with performance as my north star, i thought… well, hey, Croppie hasn’t had a release in over five years and i bet i can do better than ~50KB.
i ultimately created a fork that’s really a complete rewrite wearing a fork relationship. it’s now written in typescript, esm-only, and built on the modern pointer events api. it’s ~5KB gzipped (a 90% reduction) and we’re already dogfooding it in production.
https://github.com/bayinformatics/croppie
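
here’s roughly what the stimulus wiring looks like. a sketch, assuming the fork keeps the original Croppie constructor, bind, and result signatures - the package name and option values are illustrative, check the repo for the real api:

```typescript
import { Controller } from "@hotwired/stimulus";
import Croppie from "croppie"; // illustrative import; see the repo for the real package name

export default class extends Controller<HTMLElement> {
  static values = { url: String };

  declare readonly urlValue: string;
  private croppie?: Croppie;

  connect() {
    // mount the cropper on this controller's element and load the image to edit
    this.croppie = new Croppie(this.element, {
      viewport: { width: 200, height: 200 },
      boundary: { width: 320, height: 320 },
    });
    this.croppie.bind({ url: this.urlValue });
  }

  async crop() {
    // result() resolves asynchronously; a blob is the handy shape for uploads
    const blob = await this.croppie?.result({ type: "blob" });
    if (blob) {
      // ...post the blob to the server, swap the preview, etc.
    }
  }

  disconnect() {
    // tear down so page swaps don't leak cropper instances
    this.croppie?.destroy();
  }
}
```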
we’re using bun’s built-in test runner and lost pixel for visual regression, with no external dependencies. we also support dark mode. sad bois unite.
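
for a feel of what the bun tests look like, here’s a hedged sketch - the real suite lives in the repo, and this assumes happy-dom is registered as the dom via a test preload (more on that below):

```typescript
import { test, expect } from "bun:test";
import Croppie from "croppie"; // illustrative import, as above

// assumes a bunfig.toml [test] preload that registers happy-dom, so document is global
test("mounts into its container and cleans up on destroy", () => {
  const container = document.createElement("div");
  document.body.appendChild(container);

  const croppie = new Croppie(container, {
    viewport: { width: 100, height: 100 },
    boundary: { width: 300, height: 300 },
  });

  // the cropper builds its boundary/viewport scaffolding inside the container
  expect(container.children.length).toBeGreaterThan(0);

  croppie.destroy();
  expect(container.children.length).toBe(0);
});
```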
it’s a small project, and i was able to crank it out pretty fast: in about 72 hours i had a drop-in replacement. i wrote the entire thing myself in neovim with the claude code plugin, using claude mostly to confirm functional parity and review my code.
this was one of the best ai coding experiences i’ve had. “does my implementation match the original behavior here?” became an instant sanity check that compressed the feedback loop while still leaving me feeling in control.
i’ve been evaluating ai coding tools for about a year now and am genuinely shocked at how useful they are. there’s no way development at Beep would be moving as fast as it does without ai coding assistance. that’s not to say we just type a prompt into claude code, hit enter, and walk away - as far as i can tell, one-shotting sufficiently complex software is not in the cards any time soon.
most of the value for me has been in asking it to play socratic tutor and help me hone my architecture and patterns. here’s an example prompt i used when rewriting Croppie:
“in my test mocks, i want to use happy-dom to have a lightweight test bed. some functionality appears missing like canvas mocking. i’d like you to look at my implementation of the mock canvas and ask me about my choices and architecture”
with a small scope, a clear goal, and one decision-maker (me), this worked beautifully.
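
for the curious, here’s the shape of the thing that prompt was poking at - a minimal sketch of a canvas mock on top of happy-dom, not the actual implementation in the repo, and it assumes happy-dom is wired up via @happy-dom/global-registrator:

```typescript
import { GlobalRegistrator } from "@happy-dom/global-registrator";

// make happy-dom's window/document the test globals (a bun test preload is a good home for this)
GlobalRegistrator.register();

type ContextCall = { method: string; args: unknown[] };

export function installCanvasMock(): ContextCall[] {
  const calls: ContextCall[] = [];

  // a stub 2d context: properties like fillStyle behave normally, method calls get recorded
  // so tests can assert on drawing behavior instead of pixels
  const fakeContext = new Proxy({} as Record<string | symbol, unknown>, {
    get(target, prop) {
      if (prop in target) return target[prop];
      return (...args: unknown[]) => {
        calls.push({ method: String(prop), args });
      };
    },
  });

  // happy-dom doesn't implement canvas drawing, so patch in just enough surface
  (HTMLCanvasElement.prototype as any).getContext = () => fakeContext;
  (HTMLCanvasElement.prototype as any).toDataURL = () => "data:image/png;base64,";

  return calls;
}
```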
so why am i skeptical about ai for oss teams at large?
oss discourse tends to oscillate between “all the code will be written with ai” and “ai is plagiarism.” i don’t subscribe to that black-and-white thinking, but i do see issues that are more practical than philosophical.
notification fatigue is real. coderabbit, i’m looking at you. no, i made that decision intentionally - go away. github code quality, you too. every pr becomes a wall of robot opinions nobody asked for. the irony of ai tooling meant to save time creating more noise to wade through is not lost on me.
there’s no shared context. my chats are private. the ai doesn’t know why we made the decisions we made, and neither does the next engineer reading the code. i try to document decisions in a changelog, but ai hands the average engineer an even bigger “meh, ship it” button. suddenly documentation requirements or some sort of wiki become necessary just to preserve the reasoning that used to live in slower, more deliberate code review. i like what ampcode is doing with shared team context and logs; that feels like the right direction.
signal-to-noise inverts with scale. when i’m the only reviewer, ai augments my judgment. when there’s a team, it creates noise for everyone. the same generic suggestions surface things humans already know but chose to accept. it doesn’t know when to shut up.
the pattern i keep landing on: ai as personal assistant scales. ai as team process doesn’t.
if you’re working on a small oss project over the holidays, ai tooling is genuinely great for that. just don’t inflict it on your teammates’ inboxes.
how are you using ai in your open source work? i’m curious whether others have found ways to make it work at scale, or if we’re all just turning off the bots.

