How I Approach Technical Debt in Small Projects
Technical debt is inevitable. Every codebase accumulates it.
The question isn't whether you'll have technical debt – it's which debt you pay down immediately and which you let accumulate.
I've built dozens of small projects over the years. Side projects, client MVPs, internal tools, proof-of-concepts. And I've learned that treating technical debt the same way in a small project as you would in a million-line enterprise codebase is a recipe for never shipping.
Small projects need a different approach. One that's pragmatic about trade-offs and ruthless about prioritization.
Here's what I've learned about managing technical debt when you're building something small and need to ship fast.
The Core Principle: Impact vs Effort
Every piece of technical debt has two dimensions that matter: the impact if you ignore it, and the effort to fix it.
High impact, low effort? Fix it immediately. Low impact, high effort? Ignore it completely. The interesting cases are the middle ground – high impact but high effort, or low impact but trivial to fix.
My rule is simple. If technical debt will cause a production incident, data loss, security vulnerability, or make future development significantly harder, I fix it before shipping. Everything else is negotiable.
This sounds obvious, but it's not how most developers think. We've been trained to obsess over code quality. Clean code, SOLID principles, design patterns, test coverage. These are all good things in the right context. But in a small project with tight deadlines, they can be distractions from shipping.
What I Ignore Completely
Let me be controversial. Here's the technical debt I deliberately create and feel zero guilt about in small projects.
I don't write tests for everything. In fact, I often ship with minimal tests or no tests at all. This makes developers uncomfortable. We're taught that untested code is unprofessional. But writing comprehensive tests for a feature that might get thrown away next week is waste.
I write tests for code that's likely to break and expensive to debug. Complex business logic with edge cases. Financial calculations. Authentication systems. But simple CRUD operations with straightforward validations? I'll manually test it and ship.
Once the feature proves valuable and sticky, I'll add tests. But not before. I've written beautiful test suites for features that got deleted two weeks later. That's time I'll never get back.
I don't obsess over the DRY principle early. Duplication is often better than the wrong abstraction. If I'm writing similar code in two places, I leave it duplicated until I understand the pattern fully.
Premature abstraction is worse than duplication. Abstracting too early locks you into a design before you understand the problem. Then when requirements change, the abstraction fights you.
I'll happily copy-paste code between two controllers if they do similar things. After the third time I copy-paste, I'll consider abstracting. But not before. Three is the magic number where patterns become clear.
I don't normalize database schemas aggressively. Sometimes I store JSON in a column instead of creating proper join tables. Sometimes I denormalize data that could be calculated on the fly. Sometimes I use varchar when an enum would be "proper."
These aren't permanent decisions. They're intentional shortcuts. When the project grows and the data model stabilizes, I'll refactor. But early in a project when requirements change daily, a flexible schema beats a perfect one.
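The JSON-column shortcut can be as small as serializing a hash. A plain-Ruby sketch of the idea (the deployment-log structure here is made up for illustration):

```ruby
require "json"

# Hypothetical deployment log stored as one JSON blob instead of
# normalized step/status tables. Flexible while the schema is in flux.
log = {
  "deploy_id" => 42,
  "status"    => "success",
  "steps" => [
    { "name" => "pull",    "duration_ms" => 320 },
    { "name" => "restart", "duration_ms" => 1100 }
  ]
}

# This string is what goes into a single text/json column.
blob = JSON.generate(log)

# Reading it back is one parse away; no joins required.
restored = JSON.parse(blob)
puts restored["steps"].length  # => 2
```

The trade-off is explicit: queries against individual steps are harder, but the schema survives daily requirement changes without migrations.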
I don't set up comprehensive CI/CD pipelines immediately. For small projects, I'll manually deploy from my laptop. No automated testing. No staging environment. No blue-green deployments. Just git push and pray.
This is heresy in modern development. But setting up proper CI/CD takes hours or days. For a project that might get abandoned in a month, that's wasted effort. I'll add it when the project proves it needs it.
I don't worry about horizontal scaling from day one. Most small projects will never need it. I build for a single server. No load balancers. No read replicas. No sharding. Just a simple deployment that can handle thousands of users.
If the project grows to need scaling, that's a good problem. But optimizing for scale before you have users is premature optimization. Start simple, scale when needed.
I don't spend time on performance optimization unless there's an actual problem. Developers love to optimize. We'll spend hours making something 10% faster when it already runs in fifty milliseconds.
Unless users are complaining or metrics show a problem, I ship the straightforward solution. If it's fast enough, it's fast enough. I can always optimize later if needed.
What I Fix Immediately
Now here's where I'm strict. Some technical debt is never acceptable, even in small projects rushing to ship.
Security vulnerabilities get fixed before shipping. No exceptions. SQL injection possibilities. XSS vulnerabilities. Broken authentication. Missing authorization checks. Exposed secrets in code.
These aren't trade-offs. They're non-negotiable. A small project with a security vulnerability is still a liability. The project might be small, but the damage from a breach isn't.
I've seen developers skip proper authentication because "it's just an MVP." Then the MVP goes to production, gets traction, and suddenly there's real user data protected by authentication that was meant to be temporary. Don't do this.
Data loss scenarios get fixed immediately. Code that could corrupt data, lose user input, or delete things permanently without confirmation gets hardened before shipping.
I'm paranoid about data integrity. Users can forgive bugs. They can forgive slow performance. They won't forgive losing their data.
Any code path that modifies or deletes data gets extra scrutiny. Are there confirmations? Are there backups? Is there an undo mechanism? Can this fail in a way that leaves data in an inconsistent state?
These questions get answered before shipping, not after the first data loss incident.
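One cheap way to answer the "is there an undo?" question is to soft-delete instead of destroying records. A minimal plain-Ruby sketch of the pattern (the Deployment struct is hypothetical; in Rails this is usually a deleted_at column):

```ruby
# Hypothetical record with a deleted_at flag instead of hard deletion.
Deployment = Struct.new(:id, :name, :deleted_at)

def soft_delete(record)
  record.deleted_at = Time.now  # mark the row, don't destroy it
  record
end

def restore(record)
  record.deleted_at = nil       # the undo mechanism is trivial
  record
end

def active?(record)
  record.deleted_at.nil?
end

d = Deployment.new(1, "api-server", nil)
soft_delete(d)
puts active?(d)   # prints false: hidden from users, still recoverable
restore(d)
puts active?(d)   # prints true: undo cost nothing
```

Query paths then filter on the flag, and a real delete becomes a deliberate, separate cleanup job rather than a one-click disaster.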
Database indexes on frequently queried columns get added immediately. This one surprises people. Indexes feel like optimization, something you add later. But missing indexes are the number one cause of performance problems I've seen in production.
Adding an index later requires a migration and potentially downtime. Adding it from the start costs nothing and prevents pain later. If a column is in a where clause or used for sorting, it gets an index.
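In Rails that habit is a line or two in the original migration. A sketch, assuming a hypothetical deployments table (table and column names are illustrative):

```ruby
class CreateDeployments < ActiveRecord::Migration[7.0]
  def change
    create_table :deployments do |t|
      t.references :user, null: false  # t.references adds its index automatically
      t.string :status
      t.timestamps
    end

    # Columns that appear in where clauses or order clauses get indexes now,
    # not after the first slow-query incident.
    add_index :deployments, :status
    add_index :deployments, :created_at
  end
end
```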
N+1 query problems get fixed before they reach production. This is another one that feels like premature optimization but isn't. N+1 queries don't just slow things down – they can completely kill a server under load.
I've seen production servers crash because someone shipped a controller action that loads users in a loop. It worked fine with ten test users. It exploded with ten thousand real users.
Rails makes this easy to catch. If I see ActiveRecord queries in a loop, I add eager loading. It takes thirty seconds and prevents production fires.
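The difference is easy to see even without Rails. A toy in-memory "database" standing in for ActiveRecord, with a counter showing why N+1 hurts (all names here are illustrative):

```ruby
# Toy stand-in for a database: each lookup method call counts as one "query".
QUERY_COUNT = Hash.new(0)

POSTS = { 1 => ["a"], 2 => ["b", "c"], 3 => [] }

def posts_for(user_id)
  QUERY_COUNT[:queries] += 1          # one query per user: the N+1 shape
  POSTS.fetch(user_id, [])
end

def posts_for_all(user_ids)
  QUERY_COUNT[:queries] += 1          # one batched query for everyone
  user_ids.to_h { |id| [id, POSTS.fetch(id, [])] }
end

user_ids = [1, 2, 3]

# N+1 style: a query inside the loop.
user_ids.each { |id| posts_for(id) }
n_plus_one = QUERY_COUNT[:queries]    # 3 queries for 3 users; 10,000 for 10,000

QUERY_COUNT[:queries] = 0

# Eager-loading style, like User.includes(:posts): fetch once, then read memory.
preloaded = posts_for_all(user_ids)
user_ids.each { |id| preloaded[id] }
puts QUERY_COUNT[:queries]            # => 1
```

With ten test users both versions feel instant; only the second survives ten thousand real ones.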
Input validation and sanitization happen everywhere, immediately. Never trust user input. Never trust data from external APIs. Never assume data is in the format you expect.
Sanitize on input. Validate types, lengths, formats. Reject invalid data explicitly. This prevents weird bugs down the line and security issues.
I've debugged too many production issues caused by unexpected input to skip this step. A user enters a negative number where you expected positive. An API returns null where you expected a string. These things happen. Handle them.
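A defensive check for exactly that negative-number case might look like this (the field and its 1-999 range are made up for illustration):

```ruby
# Reject-early validation: nil, wrong format, and out-of-range values all
# fail explicitly instead of propagating weird state downstream.
def validate_quantity(raw)
  return [:error, "missing"]       if raw.nil?
  str = raw.to_s.strip
  return [:error, "not a number"]  unless str.match?(/\A\d+\z/)
  value = Integer(str)
  return [:error, "must be 1-999"] unless (1..999).cover?(value)
  [:ok, value]
end

p validate_quantity("42")    # => [:ok, 42]
p validate_quantity("-5")    # => [:error, "not a number"]
p validate_quantity(nil)     # => [:error, "missing"]
```

The caller branches on the result tuple, so invalid input is handled in one place instead of surfacing as a confusing bug three layers deeper.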
Error handling for external dependencies gets added upfront. APIs go down. Databases become unavailable. File systems fill up. External services rate limit you.
Your code needs to handle these failures gracefully. What happens when the payment API times out? When the email service returns an error? When the image upload fails?
I don't necessarily handle every edge case perfectly, but I at least acknowledge these can fail and don't let raw exceptions bubble up to users as 500 errors.
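The shape I reach for is a thin wrapper that turns raised exceptions into explicit results. A sketch, where the block stands in for any external API client call:

```ruby
require "timeout"

# Wrap an external call so failures become data, not unhandled exceptions.
def with_fallback(fallback:, seconds: 2)
  result = Timeout.timeout(seconds) { yield }
  [:ok, result]
rescue Timeout::Error
  [:degraded, fallback]             # slow dependency: serve the fallback
rescue StandardError => e
  warn(e.message)                   # broken dependency: log it, keep running
  [:degraded, fallback]
end

# A healthy call succeeds...
status, value = with_fallback(fallback: []) { ["deploy-1", "deploy-2"] }
p status   # => :ok

# ...and a raising call degrades gracefully instead of crashing the request.
status, value = with_fallback(fallback: []) { raise "API down" }
p status   # => :degraded
p value    # => []
```

Callers then decide what :degraded means in context: show stale data, queue a retry, or tell the user the feature is temporarily unavailable.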
Migrations that modify data in production get written carefully and tested thoroughly. I never wing a migration that updates existing records or changes column types.
These get tested on a production copy. They get run manually with careful observation. They include rollback plans. Data migrations gone wrong are incredibly painful to fix.
The Gray Area: Situational Decisions
Some technical debt is contextual. Whether I fix it depends on the specific project and constraints.
Code organization and architecture are one example. For a hundred-line script, I'll write procedural spaghetti and feel fine about it. For a thousand-line application, I'll add some structure – maybe a few classes. For ten thousand lines, I need real architecture with patterns and organization.
The question is always: will this code need to be maintained and extended? If yes, invest in organization. If it's throwaway or highly experimental, don't bother.
Test coverage is another gray area. I mentioned I don't test everything, but how much is enough? It depends on consequences.
Code that handles money gets thorough tests. Code that could leak user data gets tests. Complex algorithms get tests. Simple display logic? Manual testing is fine.
The question is: what's the blast radius if this breaks? High blast radius means write tests. Low blast radius means ship and fix if it breaks.
Documentation falls into this category too. Internal tools for a team of three don't need extensive documentation. A library other developers will use needs good docs. A client project needs enough documentation that they can maintain it.
I write docs when the audience needs them, not by default.
Performance optimization is contextual. If the feature is user-facing and on the critical path, I'll optimize. If it's a background job that runs once an hour, I'll ship the straightforward solution even if it's inefficient.
The question is: does performance directly impact user experience? If yes, optimize. If no, optimize only if it becomes a problem.
How This Plays Out in Practice
Let me give you a real example from a recent project. I built an internal tool for managing server deployments. Small Rails app, maybe fifteen hundred lines of code, used by a team of five developers.
Here's what I shipped with knowingly imperfect:
No automated tests. The app is basically CRUD with some shell command execution. I manually tested the happy paths and shipped. The team uses it daily and reports bugs if something breaks. In six months of use, we've had maybe three bugs. Writing a test suite would've taken longer than fixing those bugs.
Simple authentication with Devise defaults. No two-factor auth, no advanced password requirements, no OAuth. Just email and password. It's an internal tool behind a firewall. The security risk is low. Adding complexity wasn't worth it.
No API versioning. The app has an API that a few scripts call. I didn't version it or add formal documentation. The consumers are all internal. If we need to change the API, we just update the consumers. The overhead of proper API versioning wasn't justified.
Denormalized data in some places. Deployment logs are stored as JSON blobs instead of properly normalized tables. This makes queries less flexible but simplifies the schema. For this use case, it's fine.
Manual deployments. I push to main, SSH into the server, pull, restart. No CI/CD. No automated testing. For a tool used by five people with deployments maybe twice a month, automation would be overkill.
Here's what I made sure was solid before shipping:
Proper authorization checks. Each user can only see and manage their own deployments. Authorization is checked on every action. This had to be right because one person's deployment commands shouldn't affect another's.
Input validation on shell commands. The app executes shell commands based on user input. Every input is strictly validated and sanitized. No room for command injection. This is a security boundary that had to be perfect.
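Allowlisting is the only safe pattern here. A hedged sketch of the approach (the action names and service-name format are hypothetical, not the tool's actual rules):

```ruby
# Only exact, known actions run; service names must match a strict format.
ALLOWED_ACTIONS = %w[restart status].freeze
SERVICE_NAME = /\A[a-z][a-z0-9\-]{0,63}\z/  # no shell metacharacters can match

def build_command(action, service)
  return nil unless ALLOWED_ACTIONS.include?(action)
  return nil unless service.is_a?(String) && service.match?(SERVICE_NAME)
  # Return an argv array for system/Process.spawn: the shell never parses
  # user input, so command injection is structurally impossible.
  ["systemctl", action, service]
end

p build_command("restart", "api-server")    # => ["systemctl", "restart", "api-server"]
p build_command("restart", "x; rm -rf /")   # => nil (metacharacters rejected)
p build_command("shutdown", "api-server")   # => nil (unknown action rejected)
```

Passing the array form to system or Process.spawn, rather than interpolating into a command string, is what closes the injection hole for good.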
Database indexes on key columns. The app queries deployments by user, by date, by status. These columns all have indexes. Queries are fast even with thousands of deployment records.
Error handling around shell command execution. Commands can fail. Processes can time out. The app handles these gracefully with appropriate error messages and doesn't leave deployments in weird states.
Audit logging. Every deployment action is logged with timestamp, user, and result. If something goes wrong, we can trace what happened. This was non-negotiable for an ops tool.
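Audit logging can be this small and still earn its keep: one structured JSON line per action, appended to a log. A sketch (the field set is illustrative, not the tool's actual schema):

```ruby
require "json"
require "time"
require "stringio"

# Append-only JSON-lines audit trail: one line per action, machine-greppable.
def audit(io, user:, action:, result:)
  io.puts(JSON.generate(
    "at"     => Time.now.utc.iso8601,
    "user"   => user,
    "action" => action,
    "result" => result
  ))
end

# StringIO stands in for a real log file or stdout here.
log = StringIO.new
audit(log, user: "alice", action: "deploy api-server", result: "success")

entry = JSON.parse(log.string)
puts entry["user"]    # => alice
puts entry["action"]  # => deploy api-server
```

When something goes wrong, grepping these lines answers "who did what, when, and did it work" without any extra tooling.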
The result is a tool that shipped in a week instead of a month, works reliably, and has required minimal maintenance. The technical debt I intentionally took on hasn't caused problems. The areas I made sure were solid have prevented problems.
Knowing When to Pay Down Debt
Technical debt isn't meant to live forever. The whole point is borrowing time now to pay later. But when is later?
I pay down technical debt when it starts causing pain. When I spend more time working around it than it would take to fix it. When bugs cluster around it. When every new feature requires fighting the debt.
The test suite is a good example. At some point, manually testing every deployment before shipping becomes tedious. That's when I add automated tests. Not on day one, but when the pain of not having them exceeds the effort to write them.
Code organization is similar. When files get too large to navigate easily, I refactor. When I'm copy-pasting too much code, I abstract. When new developers struggle to understand the codebase, I document.
The debt itself tells you when to pay it down. If it's not causing problems, leave it. If it's slowing you down, fix it.
The Discipline Required
This approach requires discipline. Not the discipline to maintain perfect code quality – that's easy for developers. The hard discipline is knowing what to ignore.
We're trained to write clean code. To follow best practices. To do things "the right way." Intentionally writing imperfect code feels wrong. But in small projects with tight constraints, perfection is the enemy of shipping.
The discipline is in making conscious trade-offs. Not writing sloppy code because you're lazy, but deliberately choosing where to cut corners and where to be strict.
It's also discipline in paying down debt when it matters. Technical debt isn't free. You're borrowing future time. At some point, you pay interest. The key is paying it down before the interest compounds out of control.
What This Isn't
This isn't an excuse for sloppy code. There's a difference between strategic technical debt and just not caring.
Strategic debt is intentional. You know you're taking a shortcut. You know why. You know what the consequences are. You've decided the trade-off is worth it.
Sloppy code is accidental. You wrote it quickly without thinking. You don't know what corners you cut. You haven't considered consequences. You're surprised when it breaks.
Strategic debt is documented. Even if it's just a comment saying "TODO: this needs proper error handling when we have time." You've acknowledged the debt exists.
Sloppy code is invisible debt. No one knows it's there until it explodes.
Strategic debt has a plan to pay it down. Maybe not a timeline, but awareness that this is temporary. Sloppy code assumes someone else will deal with it someday.
The difference matters. One is professional pragmatism. The other is unprofessional negligence.
The Result
Small projects with this approach ship faster and still remain maintainable. They're not perfect. They have warts and shortcuts. But they work. They solve the problem they're meant to solve. And they can be improved incrementally as needs grow.
Contrast this with the perfectionist approach. Writing comprehensive tests before any code ships. Agonizing over architecture for a feature that might change next week. Building elaborate systems for scale you don't have. The result is projects that take months to ship, if they ship at all.
Perfect is the enemy of good. In small projects, good enough is usually good enough. Ship something that works. Learn from real usage. Improve based on actual needs, not hypothetical futures.
That's how you manage technical debt in small projects. Not by avoiding it completely – that's impossible. But by being strategic about which debt you take on and disciplined about when you pay it down.
Ship fast. Ship working software. Fix what matters. Ignore what doesn't. That's the pragmatic approach that actually delivers value.