The temptation when facing a legacy codebase is to scrap it and start fresh. Don't. Full rewrites are among the most common causes of failed software projects. Here's how to modernise incrementally, reducing risk while delivering value continuously.
Why Full Rewrites Fail
The legacy system has years of bug fixes, edge case handling, and domain knowledge baked in. A rewrite throws all of this away. The original developers have moved on, and nobody fully understands why certain things work the way they do. During the rewrite, someone will inevitably say "let's also redesign the workflow while we're at it" and suddenly you're not just rebuilding the technology — you're redesigning the business process too.
Meanwhile, the old system still needs maintaining, so you're paying for two systems at once. Bug fixes and feature requests don't stop just because you've started a rewrite. Your team is now split between keeping the old system running and building the new one, and neither gets enough attention.
The timeline always expands. Full rewrites are typically estimated at 6 to 12 months and actually take 18 to 36 months. The complexity that made the old system messy doesn't disappear in the new one — it resurfaces in different ways as the team rediscovers edge cases, integration requirements, and business rules that weren't documented anywhere except in the old code. By the time the rewrite is ready, the requirements have drifted and the new system doesn't match what users actually need.
I've seen this pattern repeat across three councils and multiple private sector organisations. The projects that succeeded were always the ones that modernised incrementally. The ones that attempted big-bang rewrites either failed completely, were dramatically descoped, or delivered years late and over budget.
The Strangler Fig Pattern
Named after tropical fig trees that gradually envelop their host, this pattern replaces legacy components one at a time. New functionality is built in the modern stack. Old functionality is migrated piece by piece. A routing layer directs traffic to either the old or new system depending on which component has been migrated. Eventually, the old system has nothing left and can be switched off.
The beauty of this approach is that it's reversible at every step. If a migrated component has issues, you can route traffic back to the old version while you fix it. There's never a point where the entire system is in an uncertain state. Users continue working throughout the process, and each migration step delivers measurable improvement.
The pattern works regardless of your starting technology. I've used it to migrate ASP.NET Web Forms to Razor Pages, VB.NET desktop apps to web applications, Classic ASP to ASP.NET Core, and monolithic .NET Framework services to smaller .NET 8 microservices. The implementation details vary, but the principle is the same: replace incrementally, validate at each step, and never break the production system.
Where to Start
Don't start with the most complex component. Start with something small, well-understood, and low-risk. A settings page. A simple report. A lookup API. This lets you establish the new architecture, set up CI/CD, configure the hosting environment, and prove the approach works — all without risking anything critical.
The first component you migrate is actually the hardest, not because of the code itself, but because you're establishing the entire new infrastructure at the same time. You need to set up the new project structure, configure build pipelines, establish deployment processes, and get the routing layer working. Once this foundation is in place, subsequent migrations are significantly faster because the infrastructure already exists.
After the first component, prioritise by business value and risk. Components that change frequently are good candidates because you'll get the most benefit from modern tooling. Components with security vulnerabilities that are hard to patch in the old framework should be prioritised for safety. Components that are causing performance problems are often dramatically improved by modern .NET's performance characteristics.
Avoid the temptation to migrate components that nobody uses or rarely changes. The point of incremental modernisation is to deliver value continuously. Migrating a report that runs once a quarter delivers less value than migrating the API that handles every user interaction.
The API Boundary
Put an API layer between the old and new systems. The new frontend calls the API. The API talks to both old and new backends depending on which component has been migrated. This gives you a clean separation point and lets you migrate at whatever pace works for your team.
The API boundary also provides an integration point for testing. You can write integration tests against the API that verify the same inputs produce the same outputs regardless of whether they're handled by the old or new system. This automated verification gives you confidence that each migration step preserves existing behaviour.
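As a concrete illustration, here is a minimal xUnit sketch of that kind of parity test. The base address, the paths, and the X-Backend-Override header that pins a request to one backend are all hypothetical; the real mechanism depends on how your routing layer exposes an override.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class ParityTests
{
    private readonly HttpClient _client = new HttpClient
    {
        // Placeholder base address for the API boundary under test.
        BaseAddress = new Uri("https://api.example.local")
    };

    [Theory]
    [InlineData("/customers/42")]
    [InlineData("/orders?status=open")]
    public async Task Legacy_and_new_backends_return_the_same_payload(string path)
    {
        var legacy = await GetWithBackend(path, "legacy");
        var modern = await GetWithBackend(path, "new");

        // Same input, same output, regardless of which system handled it.
        Assert.Equal(legacy, modern);
    }

    private async Task<string> GetWithBackend(string path, string backend)
    {
        // Hypothetical override header that forces the routing layer to use
        // a specific backend for this request.
        using var request = new HttpRequestMessage(HttpMethod.Get, path);
        request.Headers.Add("X-Backend-Override", backend);
        var response = await _client.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```

Run these against a test environment before and after each flag flip, and you have an objective check that behaviour hasn't drifted.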
In practice, I typically implement the API boundary as a thin ASP.NET Core Web API project that sits in front of both systems. For migrated components, it routes requests to the new code directly. For components that haven't been migrated yet, it proxies requests to the old system. The routing logic can be as simple as a feature flag per endpoint — flip the flag when the new component is ready, flip it back if there's a problem.
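Here's a rough sketch of that routing layer using ASP.NET Core minimal APIs. The flag name, the legacy base URL, and the endpoint are illustrative, and it assumes the standard web template with implicit usings; a production version would also forward headers, query strings, and other verbs.

```csharp
var builder = WebApplication.CreateBuilder(args);

// Named client used to proxy not-yet-migrated requests to the old system.
// The base address is a placeholder.
builder.Services.AddHttpClient("legacy", client =>
    client.BaseAddress = new Uri("https://legacy.internal.example"));

var app = builder.Build();

app.MapGet("/customers/{id:int}", async (int id, IConfiguration config,
    IHttpClientFactory clients) =>
{
    // Per-endpoint feature flag read from configuration (e.g. appsettings.json).
    if (config.GetValue<bool>("Migration:CustomersEndpoint"))
    {
        // Migrated: served by the new code directly.
        return Results.Ok(new { Id = id, Source = "new" });
    }

    // Not migrated yet: proxy the request to the legacy system unchanged.
    var legacy = clients.CreateClient("legacy");
    var response = await legacy.GetAsync($"/customers/{id}");
    var body = await response.Content.ReadAsStringAsync();
    return Results.Content(body, "application/json",
        statusCode: (int)response.StatusCode);
});

app.Run();
```

Flipping Migration:CustomersEndpoint back to false is the rollback; no redeployment required.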
This pattern also enables gradual rollout. Instead of switching all traffic to the new component at once, you can route a percentage of requests to the new system and monitor for errors before increasing the proportion. This is particularly valuable for high-traffic components where the risk of a subtle bug affecting many users is highest.
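One way to implement that rollout, sketched below, is to bucket users deterministically so each user consistently lands on the same backend while the percentage ramps up. The helper and its parameters are my own illustration rather than a specific library.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class Rollout
{
    // Returns true when this request should be served by the new component.
    // A stable hash (rather than string.GetHashCode, which varies per process)
    // keeps a given user on the same backend across requests and restarts.
    public static bool UseNewBackend(string userId, int percentage)
    {
        var hash = SHA256.HashData(Encoding.UTF8.GetBytes(userId));
        var bucket = (int)(BitConverter.ToUInt32(hash, 0) % 100);   // 0..99
        return bucket < percentage;
    }
}
```

Start the percentage small, say 5 or 10, watch the error rate, and ramp up from there.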
Dealing With the Database
The database is usually the hardest part of any modernisation project. Legacy systems often have tightly coupled stored procedures, triggers, views, and direct table access from multiple applications. Changing the database schema risks breaking systems you didn't even know depended on it.
My approach is to keep the existing database initially and build new services that read from it. This means the old and new systems share the same data without needing data migration or synchronisation. Gradually introduce new tables or schemas for migrated components that need different data structures. Use database views to provide backwards compatibility — the old system continues reading from what it thinks are tables, but they're actually views on the new schema.
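For the "new services read the existing database" step, the sketch below shows EF Core mapped onto a legacy table as-is. The table and column names are invented to illustrate the usual mismatch between legacy naming and what you'd choose today.

```csharp
using Microsoft.EntityFrameworkCore;

public class LegacyDbContext : DbContext
{
    public LegacyDbContext(DbContextOptions<LegacyDbContext> options) : base(options) { }

    public DbSet<Customer> Customers => Set<Customer>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Map onto the existing legacy table and columns exactly as they are,
        // so the new service shares data with the old system without migration.
        modelBuilder.Entity<Customer>(entity =>
        {
            entity.ToTable("tblCustomer");                      // legacy naming
            entity.HasKey(c => c.Id);
            entity.Property(c => c.Id).HasColumnName("CustomerID");
            entity.Property(c => c.Name).HasColumnName("CustName");
        });
    }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}
```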
For stored procedures, I migrate them to application code as part of the modernisation. Stored procedures are difficult to test, difficult to version control, and create tight coupling between the application and the database. Moving business logic into the application layer (where it belongs) makes the system more testable, more portable, and easier to understand.
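As an illustration, here is roughly what that migration looks like: a hypothetical usp_GetOverdueInvoices procedure becomes a plain service method. It assumes the LegacyDbContext from the previous sketch also exposes a DbSet<Invoice>.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Invoice
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public decimal Amount { get; set; }
    public DateTime DueDate { get; set; }
    public bool Paid { get; set; }
}

public class InvoiceService
{
    private readonly LegacyDbContext _db;   // context from the earlier sketch,
                                            // assumed to expose DbSet<Invoice> Invoices

    public InvoiceService(LegacyDbContext db) => _db = db;

    // The business rule previously buried in SQL now lives in version-controlled,
    // unit-testable application code.
    public Task<List<Invoice>> GetOverdueInvoicesAsync(DateTime asOf) =>
        _db.Invoices
            .Where(i => !i.Paid && i.DueDate < asOf)
            .OrderBy(i => i.DueDate)
            .ToListAsync();
}
```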
Eventually, once the old system is fully replaced, you can consolidate the database — removing the backwards-compatible views, dropping unused tables, and cleaning up the schema. But this is the last step, not the first. Premature database migration is one of the most common reasons modernisation projects fail, because it forces a big-bang cutover at the data layer even if the application layer is being migrated incrementally.
Measuring Progress
Track what percentage of traffic goes through the new system versus the old. This gives you a clear, objective measure of migration progress that stakeholders can understand. Set realistic milestones — 25, 50, 75 percent — and celebrate each one.
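The simplest way I know to capture that number is a single counter tagged by backend, emitted from the routing layer. The sketch below uses System.Diagnostics.Metrics with illustrative names; whatever dashboard you already use can chart the ratio of "new" to "legacy".

```csharp
using System.Collections.Generic;
using System.Diagnostics.Metrics;

public static class MigrationMetrics
{
    private static readonly Meter MigrationMeter = new("LegacyMigration");

    private static readonly Counter<long> Requests =
        MigrationMeter.CreateCounter<long>("migration.requests");

    // Call from the routing layer with "new" or "legacy"; the ratio of the
    // two tags over time is your migration progress figure.
    public static void RecordRequest(string backend) =>
        Requests.Add(1, new KeyValuePair<string, object?>("backend", backend));
}
```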
Also track the more subjective measures: how long it takes to implement a new feature in the migrated components versus the legacy components, how many production incidents originate from each system, and how developer satisfaction changes over time. These metrics help justify continued investment in modernisation when stakeholders question whether the effort is worthwhile.
I maintain a migration tracker document for every project — a simple spreadsheet listing every component, its current status (legacy, in-progress, migrated), the estimated effort to migrate, and any dependencies or blockers. This gives the project manager visibility into what's been done, what's in flight, and what's still ahead. It also helps with planning: if a component has dependencies on two others that haven't been migrated yet, you know to schedule those first.
When to Stop
Not every component needs migrating. If a legacy component is stable, rarely changes, and doesn't cause operational problems, the cost of migrating it may exceed the benefit. It's perfectly acceptable to have a modern system that still proxies a handful of requests to a legacy backend for rarely used functionality. The goal is to eliminate risk and improve developer productivity, not to achieve architectural purity.
I've seen teams spend months migrating admin screens that three people use once a month. That time would have been better spent improving the high-traffic components that affect every user. Be ruthless about prioritisation and don't let perfectionism drive the migration scope.
Need Help?
I specialise in incremental legacy modernisation for .NET applications. If you're facing a legacy system that's holding your team back and want to explore modernisation options, get in touch for a free 30-minute discovery call. I'll give you an honest assessment of what's involved and whether the strangler fig approach is right for your situation.