Unit Tests for Non-Developers: How to Add Tests to Your Cloudflare Pages Project with Claude Code

A unit test is a small piece of code that checks whether another piece of code works correctly. Claude Code can generate unit tests for an existing project by reading source files, identifying pure functions (functions with inputs and outputs but no side effects), and writing Vitest test cases for each. In one real project, Claude Code wrote 52 tests in 30 minutes covering validation, parsing, and edge cases. The key architectural step: extract business logic from API handlers into separate utility files that are easy to test.

Your site works. Paying customers are using it. You haven’t broken anything in a while. So why would you add tests now?

That’s where we were with the Code for Creatives platform: roughly 2,300 lines of business logic across 10 API handlers and a 1,035-line auth script, serving an active paying cohort. Zero tests. And we added 52 of them in about 30 minutes using Claude Code.

Here’s what we learned, and how you can do the same thing.

What is a unit test?

A unit test is a small piece of code that checks whether another small piece of code works correctly.

The “unit” is whatever chunk of logic you’re testing. It could be a function that validates an email address, a function that parses a date range, or a function that splits a full name into first and last. One test checks one thing.

A basic test looks like this:

test('validates a real email address', () => {
  expect(validateEmail('user@example.com')).toBe(true);
});

test('rejects a fake email address', () => {
  expect(validateEmail('not-an-email')).toBe(false);
});

The expect(...).toBe(...) part is called an assertion. You’re asserting that the output should equal some value. If it does, the test passes. If it doesn’t, the test fails and tells you exactly what went wrong.
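For those tests to pass, validateEmail has to exist somewhere in your project. A minimal version might look like this (it uses the same regex as the extraction example later in this post):

```javascript
// A minimal validateEmail: returns true if the string looks like
// an email address (something@something.something, no whitespace).
function validateEmail(email) {
  const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return emailRegex.test(email);
}
```

In a real project this would live in a utility file and be exported, so the test file can import it.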

A test runner is the tool that finds all your test files and runs them. Common ones are Vitest and Jest. Vitest is newer, faster, and works better with modern JavaScript projects, including Cloudflare Workers and Pages functions. Both give you output like:

✓ validates a real email address (2ms)
✓ rejects a fake email address (1ms)

Or, when something breaks:

✗ validates a real email address
  Expected: true
  Received: false

Why bother if your site already works?

Three reasons.

First, things break when you change them. Right now your code works because nothing has changed lately. The moment you add a feature, update a dependency, or refactor something, you’re flying blind. Tests catch regressions that you wouldn’t notice until a user reports them.

Second, tests make you understand your own code. When you sit down to write a test for validateEmail, you have to answer: what does this function actually do? What should it accept? What should it reject? Edge cases? That clarity is useful whether or not the tests ever catch a bug.

Third, tests document behavior. A test suite is a spec written in code. Six months from now, when you or someone else is trying to understand what parseTimeWindow is supposed to do, the tests answer that question directly and verifiably.
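As a concrete illustration: the function name parseTimeWindow comes from this post, but the implementation below is an invented sketch, not the real one.

```javascript
// Hypothetical sketch of parseTimeWindow, invented for illustration.
// Turns a string like "9-17" into { start, end } hours,
// and returns null for anything malformed or backwards.
function parseTimeWindow(input) {
  const match = /^(\d{1,2})-(\d{1,2})$/.exec(String(input).trim());
  if (!match) return null;
  const start = Number(match[1]);
  const end = Number(match[2]);
  if (start >= end || end > 24) return null;
  return { start, end };
}
```

A test like expect(parseTimeWindow('17-9')).toBe(null) then records, permanently and verifiably, that a backwards range is rejected rather than silently swapped.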

The trick: test the logic, not the infrastructure

Here’s where most people get stuck.

A Cloudflare Pages function or API handler does a lot of things at once. It reads from a request object, checks headers, calls the database, formats a response. Testing the whole handler means you’d need to simulate all of that. You’d need a fake request, a fake database, fake headers. It gets complicated fast.

The cleaner approach: pull the logic out and test that.

Most handlers contain functions that don’t actually need any of the infrastructure. A function that validates an email address doesn’t care about the request object. A function that parses a time window doesn’t need the database. These are called pure functions: given the same input, they always return the same output, with no side effects.

Before refactoring:

export async function onRequestPost(context) {
  const { email } = await context.request.json();

  // validation logic buried inside the handler
  const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  if (!emailRegex.test(email)) {
    return new Response('Invalid email', { status: 400 });
  }
  // ...rest of handler
}

After extracting the pure function:

// functions/utils/validation.js
export function validateEmail(email) {
  const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return emailRegex.test(email);
}

// functions/api/register.js
import { validateEmail } from '../utils/validation.js';

export async function onRequestPost(context) {
  const { email } = await context.request.json();
  if (!validateEmail(email)) {
    return new Response('Invalid email', { status: 400 });
  }
  // ...rest of handler
}

Now validateEmail is trivial to test. The handler is also slightly cleaner as a side effect.

This is the only architectural move you need. Find the logic sitting inside your handlers, pull it out into separate utility files, and test those files.

The real example: 52 tests in 30 minutes

Here’s what this looked like on the C4C platform. The dashboard itself was built in one session. How to Build a Student Progress Dashboard covers the full build.

The codebase had 10 API handlers (registration, Stripe webhooks, email sequences, auth checks) and a large auth script that ran in the browser. The handlers had no tests. The auth script had no tests.

We gave Claude Code this prompt:

Look at functions/utils/validation.js and functions/utils/parsers.js.
For each exported function, write a Vitest unit test.
Cover the happy path, the error cases, and any edge cases you can spot.
Save the tests to functions/utils/__tests__/validation.test.js
and functions/utils/__tests__/parsers.test.js

Claude Code read the files, identified every exported function, inferred the expected behavior from the code, and wrote tests for all of them. We reviewed the output, fixed two cases where it had guessed wrong about an edge case, and ran the suite.

52 tests. 297ms total.

The auth script needed a different approach. It was a browser-targeted file with DOM manipulation baked directly into the logic. You can test browser code, but you’d need a tool like jsdom to simulate the browser environment, and the setup cost wasn’t worth it for a first pass.

Instead, we mirrored the pure logic into a separate file: same functions, no DOM calls, testable context. The original file stayed untouched. The mirror file is what the tests run against.
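The names below are invented to illustrate the idea — a hedged sketch, not the actual auth script:

```javascript
// Browser original (sketch): the decision is tangled up with
// localStorage and the DOM, so testing it needs a fake browser.
//
//   function renderSessionState() {
//     const token = localStorage.getItem('token');
//     const expires = Number(localStorage.getItem('expiresAt'));
//     document.querySelector('#status').textContent =
//       token && Date.now() < expires ? 'Signed in' : 'Signed out';
//   }

// Mirror file (sketch): the same decision as a pure function.
// Every browser read becomes a plain parameter.
function sessionLabel(token, nowMs, expiresAtMs) {
  if (!token) return 'Signed out';
  return nowMs < expiresAtMs ? 'Signed in' : 'Signed out';
}
```

The tests run against sessionLabel; the browser file keeps its DOM calls and stays untouched. The cost is that the two copies can drift, which is worth flagging in a comment at the top of both files.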

The .gitignore gotcha: silently ignored files

This one is subtle and cost us time.

The project’s .gitignore used a whitelist pattern: instead of listing what to ignore, it listed what to include. That pattern looks like this:

# Ignore everything
*

# Except these
!src/
!src/**
!functions/
!functions/**
!package.json
!wrangler.toml

The wildcard * at the top means “ignore everything.” The ! lines carve out exceptions. Any file that doesn’t match an exception is silently ignored by git.

When we created functions/utils/__tests__/ and ran git add, nothing happened. No error. No warning. The files just didn’t show up in git status. Git was following the rule exactly: the __tests__ directory wasn’t in the whitelist, so it didn’t exist as far as git was concerned.

If you’re in a project with a whitelist .gitignore and new files aren’t showing up after git add, check the .gitignore file. You probably need to add an exception line:

!functions/utils/__tests__/
!functions/utils/__tests__/**

Or, if you want to allow any __tests__ directory anywhere in the project:

!**/__tests__/
!**/__tests__/**

Run git status after adding the exception; the files should appear. If you're not sure which rule is swallowing a file, git check-ignore -v path/to/file prints the exact .gitignore line responsible.

How to get Claude Code to write your tests

Here’s a prompt you can adapt for your own project. Paste it into Claude Code after pointing it at the right files:

I want to add unit tests to this project using Vitest.

1. Look at [your file or directory here].
2. Identify all the pure functions: functions that take inputs
   and return outputs without reading from a database,
   touching the DOM, or calling external APIs.
3. For each one, write a test that covers:
   - A normal, expected input
   - An edge case (empty string, null, zero, a very long value)
   - An invalid input that should return false or throw
4. Save the tests to __tests__/[same-filename].test.js
5. Add Vitest to the project if it's not already there and
   show me how to run the tests.

Claude Code will do the heavy lifting. You’ll want to read the output and check whether the expected values actually match what the function should do. Claude Code can misread an edge case or infer behavior incorrectly from ambiguous code. A quick pass through the generated tests before running them saves debugging time later.

If you hit the .gitignore problem described above, that’s usually what’s happening. Tell Claude Code: “My new test files aren’t showing up in git status. Can you check the .gitignore?” It will find the pattern and fix it.

Summary

You don’t need a test suite to ship a project. But once something is running in production and you want to keep improving it without breaking it, tests give you a safety net that’s worth having.

The approach that works:

  1. Find the business logic buried inside your handlers.
  2. Pull it out into pure utility functions.
  3. Point Claude Code at those files and ask for Vitest tests.
  4. Check the generated tests, run them, fix the .gitignore if files go missing.

One session. 30 minutes. You’ll have a test suite that runs in under a second and tells you immediately when something breaks.

Common Questions

What is a unit test?

A unit test is code that checks whether a specific function works correctly. It calls the function with known input and asserts the output matches expectations. Tests catch regressions (things that break when you change code) and document how functions are supposed to behave.

How do I get Claude Code to write tests for my project?

Point Claude Code at your source files and ask it to identify pure functions and write Vitest tests covering the happy path, edge cases, and invalid inputs. Review the generated tests before running them, since Claude may infer behavior incorrectly from ambiguous code.

What is a pure function and why does it matter for testing?

A pure function takes inputs and returns outputs without side effects (no database calls, no DOM manipulation, no API requests). Pure functions are trivially testable because you don’t need to simulate external systems. Extract logic from API handlers into pure utility functions.

Why aren’t my test files showing up in git?

If your project uses a whitelist-style .gitignore (starts with * and explicitly whitelists files), new directories like __tests__/ are silently ignored. Add !**/__tests__/ and !**/__tests__/** to your .gitignore to include test directories.


A note from Alex: hi i’m alex - i run code for creatives. i’m a writer so i feel that it is important to say - i had claude write this piece based on my ideas and ramblings, voice notes, and teachings. the concepts were mine but the words themselves aren’t. i want to say that because it’s important for me to distinguish, as a writer, what is written ‘by me’ and what’s not. maybe that idea will seem insane and antiquated in a year, i’m not sure, but for now it helps me feel okay about putting stuff out there like this that a) i know is helpful and b) is not MY voice but exists within the umbrella of my business and work. If you have any thoughts or musings on this, i’d genuinely love to hear them - it’s an open question, all of this stuff, and my guess is as good as yours.

Ready to build this yourself?

Join the next cohort of Code for Creatives