Most organisations with a legacy application face the same uncomfortable moment: the product still works, users still rely on it, and it cost a significant amount to build. But it now looks and feels like it belongs to a different decade. At the same time, competitors are shipping polished interfaces, new hires are frustrated on day one, and someone in the boardroom is asking whether it's time to start again.
According to the Lünendonk Study 2024, 43% of companies rank application landscape modernisation as a top priority, and 63% place seamless UX and user centricity at the heart of that effort. What the study also shows is that most organisations are not starting with a full rebuild. They are finding ways to improve the experience within what they already have.
The right first question is not "do we need to rebuild?" It is "what can we improve safely and meaningfully within the current system?"
Three things this article will help you understand:
What UX/UI improvements are realistically possible within a legacy application, and what the constraints actually are
How to run an audit that produces a prioritised roadmap rather than a list of complaints
How to manage rollout without alienating the experienced users your product depends on
The first thing to establish is that "legacy" does not mean "unmovable." A lot of what feels fixed in an older application is actually a product of accumulated assumptions rather than genuine technical constraints. Nobody challenged the navigation structure because it was always that way. The visual hierarchy was never revisited because the original build team moved on. The onboarding flow is confusing because features were bolted on over time without anyone stepping back to look at the whole.
A structured audit separates the things that are genuinely constrained by the architecture from the things that are simply unchallenged convention.
ICF's work modernising a large federal legacy system is a useful reference point. Through user research and interface audits, their team identified dozens of improvement opportunities without discarding the core application. They updated the front-end technology, introduced a 17-component design system for consistency, and significantly improved usability, all without a ground-up rebuild.
A practical way to frame the scope of what's possible is to sort every issue into one of three categories:
Preserve: what is already working well and what users rely on and have internalised, such as core data models, established workflows, and familiar navigation patterns. Touching these without good reason creates disruption without benefit.
Improve: where most of the practical work lives. Elements that create friction but can be fixed within your current constraints, including visual clarity, information hierarchy, form design, error messaging, accessibility, and consistency.
Replace: reserved for what is genuinely blocked by the architecture, such as interactions that depend on deprecated technology or features that simply cannot render on modern devices. These require deeper work and a different level of commitment.
Most legacy applications have far more in the "improve" column than their owners expect. The audit exists to prove that, not assume it.
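As a rough illustration, the triage above can be captured in a simple structure. The finding names below are hypothetical, not drawn from any real audit:

```python
# Illustrative triage of audit findings into preserve / improve / replace.
# The findings themselves are invented examples.
findings = {
    "Core data model and saved workflows": "preserve",
    "Inconsistent form validation messages": "improve",
    "Drag-and-drop built on a deprecated plugin": "replace",
}

def in_category(findings, category):
    """List the findings filed under one triage category."""
    return [name for name, cat in findings.items() if cat == category]

print(in_category(findings, "improve"))
```

Keeping the triage in one shared artefact, rather than in slide decks, makes it easy to verify after the audit whether the "improve" column really is the largest.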
The most common mistake at this stage is starting with design concepts. It feels productive, but it skips the discovery work that determines whether those concepts are solving the right problems. A good UX audit begins with understanding, not with Figma.
Here is how we approach it:
1. User and stakeholder interviews
Start by talking to the people who use the system every day, not just the people who commissioned it. You need to understand which tasks are critical, which workflows have workarounds baked in, and where users are losing time or making errors. ICF conducted 12 semi-structured interviews before touching a single interface, and that research shaped every subsequent decision. The goal is to build a clear picture of what users actually need from the system, not what the original spec assumed they would need.
2. Three-angle audit
Assess the product from three directions simultaneously: user experience (what is confusing, slow, or error-prone), technical feasibility (what the current stack can and cannot support), and business impact (which friction points are costing the most in support tickets, training time, or lost efficiency). Without all three angles, you end up with a wishlist rather than a roadmap.
3. Prioritisation by frequency and frustration
Not all problems are equal. A practical way to prioritise is to score each issue by how often users encounter it multiplied by how much friction it creates. High-frequency, high-frustration issues sit at the top of the backlog. These are the changes that deliver the fastest visible improvement and build internal confidence in the programme.
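The frequency-times-frustration scoring can be sketched in a few lines. The issues and their 1-to-5 scores below are illustrative assumptions, not data from a real audit:

```python
# Minimal sketch of frequency x frustration prioritisation.
# Issue names and scores (1-5 scales) are illustrative only.
issues = [
    {"name": "Ambiguous error messages", "frequency": 5, "frustration": 4},
    {"name": "Buried export feature", "frequency": 2, "frustration": 3},
    {"name": "Slow search results", "frequency": 4, "frustration": 3},
]

# Score each issue: how often it is hit, multiplied by how much it hurts.
for issue in issues:
    issue["priority"] = issue["frequency"] * issue["frustration"]

# Highest-priority issues go to the top of the backlog.
backlog = sorted(issues, key=lambda i: i["priority"], reverse=True)
for issue in backlog:
    print(f'{issue["priority"]:>3}  {issue["name"]}')
```

The exact scale matters less than applying it consistently: the point is to force a ranking so the backlog leads with high-frequency, high-frustration work.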
4. A structured output, not a slide deck of observations
The audit should produce three things: a friction map showing where the experience breaks down, a prioritised opportunity backlog, and a phased roadmap with clear dependencies. Anything less than that is analysis without direction.
The earliest wins from a phased UX modernisation programme are almost always operational rather than cosmetic. Before users notice the product looks better, they notice it works better. That distinction matters when you are building the internal case for continued investment.
Operational outcomes to expect in early phases:
Fewer support tickets as clearer interfaces reduce user error and confusion
Faster onboarding for new users as workflows become more intuitive
Reduced training time, often significantly: research across enterprise applications suggests intuitive interfaces can cut training requirements by as much as 75%
Lower error rates on high-stakes tasks where the interface previously created ambiguity
Improved adoption of features that existed but were too hard to find or use
The perception shift follows. A product that works more smoothly feels more modern, even before any visual refresh. For organisations going through a rebrand or trying to reposition in a competitive market, this matters: the interface is part of how the product is judged.
Measuring the impact of a UX improvement requires looking at the right signals. Metrics worth tracking from day one:
Time on task is the most direct indicator of whether key workflows are actually getting faster.
Error rate tells you whether the interface is introducing avoidable mistakes into users' work.
Support ticket volume connects UX quality to a real operational cost: if the interface is confusing, your support team pays for it.
Feature adoption rate reveals whether the improvements you shipped are being used at all, which is the most honest test of whether a change landed.
User confidence, captured qualitatively, picks up the sentiment shifts that the numbers on their own will miss.
Atlassian's Jira redesign, for example, produced a 34% improvement in team productivity and a 21% reduction in onboarding time. Intuit's investment in QuickBooks Enterprise UX resulted in 37% fewer support tickets and 24% higher feature adoption. Neither required a complete rebuild.
This is where many modernisation programmes quietly fail. The design work is solid, the improvements are real, but the rollout is handled as a release rather than a change programme. Users who have spent years mastering the old interface feel blindsided. Workarounds they depended on disappear. Productivity drops temporarily and resistance hardens.
The data here is sobering. Around 60 to 70% of change initiatives fail, most often due to employee resistance and lack of management support. Separately, 70% of software implementations fail due to poor user adoption, with 45% of employees reporting that new software arrives without adequate training, and 63% saying they stop using new technology if they cannot see its relevance or get help when they need it.
The good news is that these failures are almost entirely avoidable with the right approach to rollout.
Your most proficient users hold knowledge that no specification document contains. They know the edge cases, the workarounds, the workflows that look simple but are not. Involving them early, in research, in prototype testing, and in pilot groups, turns potential resistors into advocates. It also surfaces dependencies you would otherwise discover at the worst possible moment.
Not every part of the application carries the same risk. Start with lower-stakes areas where improvements are visible and the cost of a misstep is low. Build confidence before tackling the workflows that experienced users have internalised most deeply.
Users are far more receptive to change when they understand what is not changing. Be explicit: this workflow is staying the same, this navigation pattern is staying the same, this is what is different and why it helps you. Vague announcements about "improvements" create anxiety. Specific communication creates trust.
If you are not tracking adoption after rollout, you will not know there is a problem until it has already become one. Set baseline metrics before you ship, monitor them after, and create a feedback channel that users can actually reach.
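A baseline-versus-current comparison can be automated in a few lines. The metric names, numbers, and 5% tolerance below are illustrative assumptions, not recommended targets:

```python
# Hedged sketch: flagging post-rollout metrics that have worsened
# relative to a pre-ship baseline. All values are illustrative.
# For these three metrics, lower is better.
baseline = {"time_on_task_s": 94.0, "error_rate": 0.12, "weekly_tickets": 41}
current = {"time_on_task_s": 71.0, "error_rate": 0.15, "weekly_tickets": 35}

def regressions(baseline, current, tolerance=0.05):
    """Return metrics that have worsened by more than `tolerance` (here 5%)."""
    flagged = []
    for metric, before in baseline.items():
        after = current[metric]
        if after > before * (1 + tolerance):
            flagged.append(metric)
    return flagged

print(regressions(baseline, current))
```

Running a check like this on a schedule turns "monitor adoption" from an intention into an alert, so a regression surfaces in days rather than at the next quarterly review.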
The instinct to rebuild is understandable. When a product feels dated, a clean slate is an appealing idea. But a full rebuild is expensive, slow, and carries more risk than most organisations realise mid-project. And in many cases, it is not what the product actually needs.
What most legacy applications need is a clear-eyed audit: one that separates genuine architectural constraints from accumulated assumptions, identifies the highest-impact improvements within current limits, and produces a phased roadmap that teams can actually execute without destabilising the product or its users.
An audit-first approach does something else, too. It builds the evidence base for any larger investment decisions that follow. If deeper technical changes are eventually needed, you will make that case far more convincingly having already demonstrated what phased improvement can achieve.
Working with a legacy product that needs a better experience?
Vigo runs structured UX audits that turn user research and technical constraints into a prioritised roadmap for improvement. Get in touch with the Vigo team today to talk through where to start.