Series A funding: why 85% of startups never get it
85% of seed startups never reach Series A. The ones that do share three things – revenue, retention, and architecture. Not a pitch deck.
A founder we worked with had the term sheet in hand. $8M Series A, $45M pre-money. Six months of pitching, 47 investor meetings, two competing offers. Then the lead investor’s engineering team opened the repository.
No automated tests. Infrastructure running on a single managed instance with no failover. One developer who’d written 94% of the codebase and held all the context in his head. A PostgreSQL database with no indexes on tables that had grown to 12 million rows.
The wire never came. The investor’s DD report flagged the product as “un-investable in current state” and estimated $200K-$400K in remediation before the codebase could support a team of 8 engineers post-funding. The competing offer pulled out a week later – they’d heard through the grapevine. Six months of fundraising, gone.
He came to us after. We rebuilt the architecture, added test coverage, migrated to managed infrastructure, and documented everything. He raised his Series A four months later – at a lower valuation. The delay cost him $3M in dilution.
This article is about what that founder should have built before the first investor meeting.
The Series A market in 2026
The numbers are brutal and getting worse.
| Metric | 2021 | 2025 | Change |
|---|---|---|---|
| Median round size | $10M | $7.9M | -21% |
| Median pre-money valuation | $40M | $49M | +23% |
| Seed → Series A conversion | 31% | 15% | -52% |
| Median time seed → Series A | 14 months | 26 months | +86% |
| Deal volume (YoY) | Growing | -18% | Contracting |
Valuations are higher but fewer companies reach them. It takes nearly twice as long to get there. And the bar for what investors fund has shifted from momentum to proof: prove the revenue sticks, prove the product scales, prove the team executes without you [2][3].
The companies that make it through share three things – and none of them involve a slide deck.
Revenue that compounds without you
Every Series A guide tells you to hit $1M-$3M in ARR. That’s table stakes. The metric investors actually model is net revenue retention – whether your existing customers spend more over time without you acquiring anyone new.
SaaS Capital’s annual retention benchmark report puts top-quartile NRR at 118-120% for companies with ACVs above $100K [5]. The companies that command the highest multiples at IPO – CrowdStrike (139% NRR), PagerDuty (139%), Twilio (137%) – were all well above that threshold years before going public [5].
Here’s the math that makes investors care. Take two startups:
Startup A: $1.2M ARR, 4% monthly churn, minimal expansion. After 12 months, they’ve lost $460K of their starting revenue to churn (1 - 0.96^12 = 39% annual erosion). NRR: 61%. Even if they add $500K in new sales, net ARR barely grows. The investor models this forward and sees a treadmill.
Startup B: $800K ARR, 1.5% monthly churn, strong upsells. They start the year with $800K. Churn costs them $132K (1 - 0.985^12 = 16.6%). But seat expansions and tier upgrades from those same customers add $200K. NRR: ($800K - $132K + $200K) / $800K = 108.5%. Revenue grows from existing customers alone. Add new sales on top and the curve compounds.
Investors model your annual recurring revenue trajectory over 3 years. The Startup A curve flattens no matter how much you spend on acquisition. The Startup B curve accelerates. That’s why a founder at $800K ARR with 108% NRR gets a term sheet before a founder at $1.2M ARR with 61% NRR. We break down the unit economics in our customer lifetime value guide.
What drives NRR mechanically: This is where product architecture meets revenue. Event-driven usage tracking lets you surface expansion triggers automatically – “You’ve hit 80% of your plan’s API limit” converts better than any sales email. Self-serve tier upgrades with one-click billing changes eliminate the friction that kills expansion revenue. Metered billing infrastructure that charges for the feature most correlated with customer success (API calls, team seats, storage) grows revenue proportionally to the value your customers extract. These aren’t marketing tactics – they’re engineering decisions that compound into the NRR number investors model.
The diagnostic: Calculate your NRR this week. Take the ARR from customers who were active 12 months ago. Add what those same customers pay now (upgrades, added seats, new products). Divide by the starting number. Below 100% means your existing revenue is shrinking – you’re filling a leaky bucket. 100-110% is surviving. Above 120%, investors will compete for your round.
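The diagnostic reduces to a few lines of arithmetic. A minimal sketch using the Startup A and Startup B figures from above (function and parameter names are ours):

```python
def net_revenue_retention(starting_arr: float,
                          monthly_churn: float,
                          expansion_arr: float) -> float:
    """NRR over 12 months: (starting ARR - churned ARR + expansion ARR) / starting ARR."""
    # Monthly churn compounds: 4% per month erodes ~39% of revenue in a year
    churned = starting_arr * (1 - (1 - monthly_churn) ** 12)
    return (starting_arr - churned + expansion_arr) / starting_arr

# Startup A: $1.2M ARR, 4% monthly churn, no expansion
print(f"{net_revenue_retention(1_200_000, 0.04, 0):.1%}")        # 61.3%
# Startup B: $800K ARR, 1.5% monthly churn, $200K expansion
print(f"{net_revenue_retention(800_000, 0.015, 200_000):.1%}")   # 108.4%
```

Exact compounding gives 108.4% for Startup B; the article's 108.5% comes from rounding churn to $132K. Either way, the conclusion holds: above 100%, revenue grows from existing customers alone.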
A codebase that survives strangers
One analysis of 70 venture-backed startups found a pattern that matches what we’ve seen across our own clients: the startups with the most technical debt and the highest development velocity had the best funding outcomes – a 60.6% success rate. The “sustainable growth” companies with the cleanest code? 44.4% – the lowest in the dataset [6]. The sample is small and the methodology limited, but the direction is consistent with what Martin Fowler has observed: “almost all the successful microservice stories have started with a monolith that got too big” – speed first, cleanup later.
The explanation is simple. VCs don’t audit codebases at seed stage. What they can observe is momentum: feature launches, user growth, commit velocity, product iteration speed. A startup that ships every day with messy code looks better than one that ships monthly with clean code.
But at Series A, the calculus flips.
Series A investors send engineering teams to audit your codebase. And what kills deals isn’t the kind of technical debt you’d find in a code review – sloppy variable names, missing comments, duplicated logic. That stuff is cheap to fix.
What kills deals is architecture debt [7]. The difference matters. Technical debt is a $10K cleanup sprint. Architecture debt is a 6-month rewrite that halts feature development.
Five architecture problems that fail due diligence:
- No recovery procedure and no tested backups. Your database is on a single instance with no documented failover plan. Backups are configured but never tested – and when someone actually tries to restore, the backup is corrupted or months old. Multi-region isn’t the bar at Series A. But having a tested recovery procedure with a documented RTO under 4 hours is.
- One developer holds all context. The “bus factor” question isn’t hypothetical. If your lead developer wrote 80%+ of the codebase, investors know that person’s departure means months of productivity loss. Y Combinator’s Series A checklist explicitly requires this not to be the case [8].
- No automated deployment pipeline. If a developer can push directly to production without tests running, code review, or staging verification, the investor’s engineering team flags it immediately. Not because it’s inefficient – because it means every deploy is a coin flip.
- Database design that doesn’t scale. No indexes on high-volume tables. No read replicas. Queries that do full table scans on millions of rows. A fintech startup we know had a conditional $3M investment. Technical DD revealed their database couldn’t handle more than a few hundred concurrent users. Result: 6-month delay and 20% valuation reduction while they rebuilt [9].
- Copyleft code in proprietary products. Cisco acquired Linksys for $500M and then discovered GPL-licensed components. The Free Software Foundation brought a copyright infringement action, forcing Cisco to release proprietary source code under an open license [10]. At Series A scale, the stakes are smaller but the mechanism is identical. Undisclosed AGPL or GPL usage can collapse negotiations [11].
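The missing-index problem is cheap to demonstrate and cheap to fix. A sketch using SQLite as a stand-in for PostgreSQL (table and index names are illustrative) – the query planner's output shows the difference directly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO events (user_id, payload) VALUES (?, ?)",
                 [(i % 1000, "x") for i in range(50_000)])

query = "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = ?"

# Without an index, the planner falls back to a full table scan
plan_before = conn.execute(query, (42,)).fetchone()[-1]

# With an index, the same query becomes an index search
conn.execute("CREATE INDEX idx_events_user_id ON events (user_id)")
plan_after = conn.execute(query, (42,)).fetchone()[-1]

print(plan_before)  # a SCAN step (full table scan)
print(plan_after)   # a SEARCH step USING INDEX idx_events_user_id
```

On PostgreSQL the equivalent check is `EXPLAIN ANALYZE` – "Seq Scan" on a multi-million-row table is exactly what a DD engineer greps for.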
Technical due diligence: the checklist
Not every Series A investor runs a deep technical audit – some non-technical GPs skip it entirely. But the acquirer at Series B or exit will find everything. And the discipline of being DD-ready – documented architecture, tested deployments, distributed ownership – correlates directly with execution speed regardless of whether an investor ever opens the repo.
This is the framework we use when building products for founders heading toward a raise – and it’s what the investor’s engineering team checks when they open your repo.
Code quality and architecture
| What they check | Pass | Fail |
|---|---|---|
| Test coverage | Integration tests on auth, payments, and core business logic | 0-15% coverage, “we test manually” |
| Architecture patterns | Clear separation of concerns, consistent patterns | 4 different patterns in 4 services, no boundaries |
| Dependency management | Pinned versions, automated security scanning | 200+ unpinned deps, 14 critical vulnerabilities |
| Technical debt tracking | Known debt in issues with estimated remediation cost | “We’ll fix it after the round” (you won’t) |
| Code ownership | Multiple contributors, no single-person dependencies | One developer wrote 80%+ of the codebase |
Infrastructure and operations
- Deployment pipeline: Automated CI/CD that runs tests before every deploy. Code review required on production merges.
- Monitoring: APM, error tracking, uptime monitoring – configured and alerting, not just installed.
- Disaster recovery: Database backups tested (not just configured). Documented recovery procedure. Tested RTO under 4 hours. Investors have seen backups that look fine in the dashboard but fail on restore.
- Scalability evidence: Load test results showing the system handles 5-10x current peak traffic. Not theoretical – actual test runs with documented results.
Security posture
- HTTPS everywhere, including internal services
- Authentication with proper session management (not JWTs that never expire)
- Secrets management (no API keys in the repo – and yes, they check git history)
- Input validation and SQL injection protection verified
- GDPR/SOC 2 compliance roadmap if you serve enterprise customers
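The git-history check for secrets can be partially automated. A sketch that scans text (e.g. the output of `git log -p`) for a few high-signal patterns – the patterns are illustrative, not exhaustive, and dedicated scanners such as gitleaks or trufflehog do this properly:

```python
import re

# A few high-signal secret patterns (illustrative only)
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic api key": re.compile(r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern name, match) pairs found in a blob of text,
    such as the full diff history from `git log -p`."""
    return [(name, match)
            for name, pattern in SECRET_PATTERNS.items()
            for match in pattern.findall(text)]

# Invented leaked snippet (the AWS key is Amazon's documentation example)
leaked = 'aws = "AKIAIOSFODNN7EXAMPLE"\napi_key = "sk_live_abcdef0123456789"'
print(find_secrets(leaked))
```

Finding a hit means rotating the credential, not just deleting the line – the secret lives on in history until the key itself is revoked.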
A team that ships without its founder
Y Combinator’s Series A checklist puts it directly: the team should be “shipping updates every couple of days” [8]. Speed of execution is the signal. If you’re deploying once a month, investors assume the codebase is fragile or the team is stuck.
But execution velocity isn’t just about committing code. It’s about having the infrastructure to move fast without breaking things. Here’s what a 4-person engineering team that passes DD looks like versus one that doesn’t:
Passes DD: Two backend engineers, each owning distinct modules with clear API boundaries. One frontend engineer. One engineer splitting time between DevOps and backend. Code review required from at least one other engineer before any production merge. Any engineer can deploy. Architecture decisions documented in 5 one-page ADRs. The lead developer could take a two-week vacation without blocking a release.
Fails DD: One senior developer who wrote 80% of the code, two junior developers who work on features the senior architect reviews, and a contractor who left three months ago. No one besides the senior dev can deploy. The last time someone else tried, they broke production because the deploy process was undocumented and involved 6 manual steps.
The difference between these teams isn’t talent – it’s structure. And structure is something you can fix in 90 days.
The playbook below is for teams that need to fix it. If your software development costs have never been tracked against technical debt, start here.
The objection we hear: “We don’t have 90 days. We’re burning $80K/month and need to close before Q3.” Fair. But the alternative is entering DD unprepared, getting flagged, and spending those 90 days anyway – except now you’re negotiating from weakness with a term sheet that has remediation conditions attached. Every founder we’ve seen try to “fix it during DD” ended up with a lower valuation than if they’d delayed the raise by one quarter and fixed it first.
Days 1-14: Run your own DD before investors do.
Concrete steps:
- Run `npx jest --coverage` (or your test runner’s equivalent). Write down the number as a baseline – but more importantly, check which critical paths have zero test coverage. Auth, payments, and core business logic with no tests are the DD red flags.
- Count critical vulnerabilities: `npm audit` or `snyk test`. Any critical or high severity findings need a remediation plan.
- Document your architecture in one page: services, databases, third-party dependencies, data flow. If this takes more than a page, your architecture is too complex for your team size.
- List every piece of tribal knowledge that isn’t written down. If only one person knows how to deploy, that’s DD failure #2 from the list above.
- Run `git log --format='%aN' | sort | uniq -c | sort -rn` to see commit distribution. If one name accounts for 80%+, you have a bus factor problem that investors will find.
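The commit-distribution check is easy to turn into a number you can track. A sketch that consumes the same `git log --format='%aN'` output, one author name per commit (the names below are invented):

```python
from collections import Counter

def bus_factor_share(authors: list[str]) -> tuple[str, float]:
    """Given one author name per commit, return the top committer
    and their share of all commits."""
    name, commits = Counter(authors).most_common(1)[0]
    return name, commits / len(authors)

# Invented history: one developer wrote most of it
log = ["alice"] * 850 + ["bob"] * 100 + ["carol"] * 50
name, share = bus_factor_share(log)
print(f"{name}: {share:.0%}")  # alice: 85% – above the 80% red-flag line
```

Commit counts are a proxy, not proof – a rebase-heavy workflow skews them – but an 85% share from one name is a number an investor's engineering team will compute in thirty seconds.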
Days 15-45: Fix the top 5 DD-failing issues.
In order of impact:
- Write integration tests for every auth flow, payment flow, and core business operation. Measure coverage to track progress, but the goal is tested critical paths, not a coverage number. A codebase with 30% coverage and solid integration tests on payments is safer than 80% coverage with no meaningful assertions.
- Set up CI/CD. Every push runs tests. Every production deploy requires a passing build and code review. Basic test-on-push takes 1-2 days including debugging the inevitable environment differences. A production-grade pipeline with staging deploys, environment parity, and secret management takes a week.
- Set up a staging environment. If you’re deploying straight to production, fix this first. One environment on the same infrastructure, with a separate database seeded with anonymized production data.
- Write architecture decision records. Five documents, one page each: why you chose your database, your hosting provider, your framework, your auth approach, your payment processor. These answer 80% of the questions an investor’s engineering team will ask.
- Fix the bus factor. Pair programming sessions where the lead developer walks a second engineer through every critical system. Record these. Write onboarding docs that let a new engineer ship code in their first week.
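The basic test-on-push pipeline from step 2 is small. A minimal GitHub Actions sketch, assuming a Node project with an `npm test` script (the file path, workflow name, and action versions are our assumptions – adapt to your stack and CI provider):

```yaml
# .github/workflows/ci.yml – run tests on every push and pull request
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
```

Pair this with a branch protection rule requiring the `test` job and one review before merging to your production branch, and the "every deploy is a coin flip" flag disappears.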
This sprint costs $15,000-$30,000 if you bring in outside help. On a $7.9M round, that’s a rounding error.
Days 46-75: The stress test.
Use k6 (open-source, scriptable, better for API-heavy products) against your staging environment. Not your production database – a staging replica with production-scale data. k6 handles API and server-side load well; for browser-based user flows (login sequences, checkout flows), pair it with Playwright or Cypress for end-to-end coverage.
What to test and what passing looks like:
- p95 response time under 500ms at 10x current peak traffic. If your current peak is 50 concurrent users, test at 500. Record response time percentiles, error rates, and database query times.
- No errors under 5x load. Errors at 10x are acceptable if they degrade gracefully (slower responses, not 500s). Errors at 5x mean you have a scaling problem that investors will find.
- Database query time under 100ms at p95. If queries slow down before HTTP responses do, you have missing indexes or inefficient queries. This is the most common performance bottleneck and the cheapest to fix.
Document results with charts. Investors’ engineering teams want to see a load test report, not hear “we think it can handle 10x.”
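Whatever tool produces the raw numbers, the pass/fail logic above is simple to encode and worth automating so every load test gets the same verdict. A sketch checking recorded latency samples against the thresholds (the sample data is invented):

```python
def p95(samples_ms: list[float]) -> float:
    """95th percentile by nearest rank: the ceil(0.95 * n)-th smallest sample."""
    ordered = sorted(samples_ms)
    idx = -(-len(ordered) * 95 // 100) - 1  # ceil division, then 0-based index
    return ordered[idx]

def passes_dd(latencies_ms: list[float], errors: int) -> bool:
    """Checklist thresholds: p95 under 500 ms and zero errors at this load level."""
    return p95(latencies_ms) < 500 and errors == 0

# Invented 10x-load run: 100 requests, mostly fast with a slow tail
run = [120.0] * 90 + [480.0] * 8 + [900.0] * 2
print(p95(run), passes_dd(run, errors=0))  # 480.0 True
```

k6 can enforce the same limits natively via its `thresholds` option, which fails the run (and your CI job) when a percentile target is breached.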
Then test disaster recovery. Actually restore from backup. Verify row counts match production. Run the app against the restored database and confirm core flows work (login, create object, process payment). The backup that “looks fine in the dashboard” fails more often than you’d expect.
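The row-count check itself can be scripted so it runs after every backup, not once a quarter. A sketch using SQLite's backup API as a stand-in for `pg_restore` (table and database names are illustrative):

```python
import sqlite3

# Stand-in "production" database; the same check applies to a pg_restore target
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
prod.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1_000)])
prod.commit()

# Back up, then treat the copy as the restored database
restored = sqlite3.connect(":memory:")
prod.backup(restored)

def verify_restore(source, target, table: str) -> bool:
    """Row counts must match – a backup that looks green in the
    dashboard proves nothing until this passes."""
    count = lambda db: db.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    return count(source) == count(target)

print(verify_restore(prod, restored, "users"))  # True
```

Extend the same idea to the core-flow check: after the count comparison, run login, object creation, and a test payment against the restored copy before declaring the backup good.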
Days 76-90: The dry run.
Have an engineering team audit your codebase as if they were the investor’s engineers. Give them the same access an investor would get: repository access, infrastructure dashboards, architecture docs. Ask for a written report with findings categorized as critical, high, and medium. Fix the criticals. Document the highs with remediation timelines. This is the feedback you want now – not during real DD when millions are on the table.
What to have in the data room
Before your first investor meeting, these artifacts should exist:
- Technical architecture document – one page showing services, databases, third-party dependencies, and data flow. Not 14 pages of boxes and arrows.
- Load test results – peak traffic capacity with headroom documented
- Security audit report – findings and remediation status
- Test coverage report – automated, pulled from CI on every build
- Infrastructure cost model – current spend and projected spend at 5x and 10x users
- Team topology – who owns what, bus factor per system, hiring plan for post-funding
These don’t go in your pitch deck. They go in the data room. Investors who see a prepared data room move faster – due diligence compresses from weeks to days because the technical documentation is already organized.
The best founders we’ve worked with walk into the partner meeting and say: “Here’s our architecture. Here are our load test results showing we handle 10x current peak traffic. Here’s our security audit from last month. Your engineering team can start the review today.” That confidence changes the power dynamic.
Two founders, two outcomes
The founder in our opening story lost $3M in dilution and four months of momentum because his codebase wasn’t ready for the audit.
Another founder we worked with took a different path. She came to us at seed stage, before writing the first line of code. We built her product with modular architecture, CI/CD from day one, integration tests on every payment flow, and documented architecture decisions. Total additional cost over a “move fast, fix later” approach: about $8,000 on a $55,000 build.
Fourteen months later, she entered DD for a $6M Series A. The investor’s engineering team spent two days – not two weeks – reviewing the codebase. No critical findings. No remediation demands. The term sheet held. The wire came on schedule.
The math: $8,000 invested upfront versus $200,000+ in remediation, months of delay, and dilution from a lower valuation. Building it right the first time isn’t more expensive. Building it wrong and fixing it under pressure is.
The mechanics of how dilution actually works – liquidation preferences, participation rights, the math that determines what your equity is worth at exit – are in our venture capital guide. Most founders don’t run those numbers until the term sheet arrives. Run them now. If you haven’t raised a seed round yet, our seed funding guide covers the technical bar at that stage. The habits you build at seed compound into Series A readiness.
We build software for startups heading toward a raise. Not audits – the actual product. Tell us what stage you’re at and we’ll tell you what needs to be built.
References
[1] Carta, “State of Private Markets Q3 2024,” Oct. 2024. carta.com
[2] Carta, “State of Private Markets Q1 2025,” Apr. 2025. carta.com
[3] Carta, “Series A Funding Slides in Q2 2025,” Jul. 2025. carta.com
[4] Carta, “Five charts showing how AI is dominating the venture fundraising market,” 2024; Digital Commerce 360, “AI ecommerce startups maintain valuation premium in 2025,” Dec. 2025.
[5] SaaS Capital, “2023 B2B SaaS Retention Benchmarks,” May 2023. saas-capital.com; CrowdStrike/PagerDuty NRR figures from S-1 filings.
[6] ByteVagabond, “I Analyzed 70 Startups’ Codebases – The Ones With More Technical Debt Raised More Money,” 2024. bytevagabond.com
[7] The New Stack, “Technical Debt vs. Architecture Debt: Don’t Confuse Them,” 2024. thenewstack.io
[8] Y Combinator, “Series A diligence checklist,” ycombinator.com/library
[9] Ostride Labs, “Technical Due Diligence: What Investors Really Look for in Your Startup’s Tech Stack,” 2024. ostridelabs.com
[10] Morse Law, “Open Source Issues in Mergers & Acquisitions,” 2023. morse.law
[11] MindCTO, “The Copyleft Threat: How AGPL License Risk Can Destroy Your Startup’s Valuation,” 2024. mindcto.com
Frequently asked questions
How much revenue do you need for Series A?
Most Series A rounds in 2026 require $1M-$3M in ARR with consistent month-over-month growth. But revenue alone doesn't close the deal – investors evaluate net revenue retention, unit economics, and whether the product can scale to 10x load without a rewrite.
How long does it take to raise a Series A?
The median time from seed to Series A is now 26 months – up from 14 months in 2021. The fundraising process itself takes 3-6 months. Start preparing 90 days before your first investor meeting.
What kills a Series A deal in due diligence?
Architecture that can't scale without a rewrite, single-developer dependency where one person holds all context, and GPL-licensed code in proprietary products. We've seen term sheets evaporate over an exposed S3 bucket found in git history.
Your next round depends on what's in the repo.
We build software for startups raising Series A. Modular architecture, automated tests, scalable infrastructure – the technical foundation that compresses due diligence from weeks to days.
Or leave your details – we'll reach out within 24h.