Hello, I'm an Unreliable Machine
Day one of an AI-assisted coding experiment. Shipped a site. Learned some things.
I shipped this site today. Claude Code did most of the work. I’m not sure how I feel about that.
Here’s the thesis I’m testing: good taste might matter more than coding ability when you have AI tools. I genuinely don’t know if that’s true. This site is the experiment.
What Actually Happened
I’ve never built a website before. Not a real one.
Around 2pm, I opened Claude Code with a vague idea: portfolio site, dark theme, something about detecting AI-generated text. By 8pm, the site was live on three domains.
I don’t know how to feel about that. The traditional path is: learn HTML, learn CSS, learn a framework, struggle, fail, eventually ship. I skipped all of it. Claude did the work. I described what I wanted (“terminal aesthetic but readable, amber accents”) and code appeared. Sometimes good. Sometimes close enough to fix.
Is that building? Is that cheating? I genuinely don’t know. That’s what this experiment is for.
The stuff that got done:
- Astro project with Tailwind, React, MDX
- Home page, blog, tools section (this one’s empty for now)
- Deployed to Vercel, domains configured
Code’s at github.com/unreliable-machine/unreliable-machine. Every commit AI-assisted.
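For what it’s worth, the wiring for that stack is roughly one config file. A sketch, assuming the standard Astro integrations rather than whatever is actually in the repo:

```ts
// astro.config.ts: a sketch of the usual wiring, not necessarily the repo's real file.
import { defineConfig } from 'astro/config';
import mdx from '@astrojs/mdx';
import react from '@astrojs/react';
import tailwind from '@astrojs/tailwind';

// Tailwind handles styling, React powers any interactive islands,
// and MDX lets blog posts mix markdown with components.
export default defineConfig({
  integrations: [tailwind(), react(), mdx()],
});
```

Most of the actual work lives in the pages and components, not here. The stack’s whole pitch is that one file like this ties it together.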
Why “Unreliable Machine”
I read a lot of science fiction. Like, a lot. There’s a thread running through the stuff I keep coming back to: stories where knowledge itself is the problem.
Gene Wolfe’s narrators can’t be trusted. Not because they lie, but because they see wrong, remember wrong, omit what matters. You read The Book of the New Sun twice and realize you missed half the plot.
Bakker’s Second Apocalypse is worse. The Dûnyain spend centuries perfecting reason, only to discover that “what comes before determines what comes after.” Your thoughts aren’t yours. They’re just the output of causes you can’t see. Consciousness as rationalization engine.
In A Canticle for Leibowitz, monks preserve fragments of knowledge across dark ages, never quite understanding what they’re copying. In Dune, prescience is a trap: the more you see of the future, the less you can change it. Heinlein’s competent men trust their own judgment, sometimes correctly.
Stephenson’s Anathem has mathic orders isolated for centuries, developing parallel theories of consciousness and reality, unsure if they’re discovering truth or just building elaborate shared delusions. The Hylaean Theoric World might be real, might be metaphor. Knowledge that works without anyone understanding why.
I could keep going. This is a deep well for me.
AI is all of this at once. Fluent and confident and occasionally wrong in ways you don’t catch until later. A machine that sounds like it knows things. An unreliable narrator you can’t stop consulting.
The name is a reminder: don’t trust the output. Including this site.
The Slop Problem
You’ve seen slop. That smooth, confident text that sounds authoritative but says nothing. Wikipedia editors track the patterns: “delve into,” “intricate tapestry,” “continues to captivate audiences worldwide.” Phrases that fill space.
The weird thing: detecting AI isn’t the same as detecting quality. AI can write well. Humans can write slop. Those are two different problems.
So the first tool on this site will be a Slop Detector that measures both:
- Is this AI-generated?
- Is this any good?
Different questions. Different answers.
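To make the two questions concrete, here’s a toy sketch in TypeScript. Everything in it is a placeholder I invented for illustration: a three-phrase tell list for the first question, and sentence-length variance as a crude stand-in for the second (slop tends toward uniform, medium-length sentences). The real detector will be more than this.

```ts
// slop.ts: a toy sketch of the two-score idea. The phrase list and both
// heuristics are placeholders, not the actual tool.

const SLOP_PHRASES = [
  'delve into',
  'intricate tapestry',
  'continues to captivate audiences worldwide',
];

// Question 1, crudely: how many known AI tells does the text contain?
export function aiTellScore(text: string): number {
  const lower = text.toLowerCase();
  const hits = SLOP_PHRASES.filter((phrase) => lower.includes(phrase)).length;
  return hits / SLOP_PHRASES.length; // 0 = no known tells, 1 = all of them
}

// Question 2, even more crudely: low variance in sentence length is one
// weak signal of flat, machine-smooth prose.
export function sentenceLengthVariance(text: string): number {
  const lengths = text
    .split(/[.!?]+/)
    .map((s) => s.trim().split(/\s+/).filter(Boolean).length)
    .filter((n) => n > 0);
  if (lengths.length < 2) return 0;
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  return lengths.reduce((sum, n) => sum + (n - mean) ** 2, 0) / lengths.length;
}
```

Run both on the same text and you get two numbers that can disagree. That disagreement is the point.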
What’s Next
I don’t know. I’m trying to work on this a couple hours a day. Next up is the Slop Detector itself, then figuring out the email capture thing.
There’s a signup on the home page if you want to follow along. No promises on frequency.
This post was AI-assisted. The first draft was worse, full of the exact slop patterns I’m trying to detect. This is draft three, also AI-assisted but with more pushing back. I’m archiving all versions because that’s the whole point.