The Operator’s Note · ERP

The post-go-live cliff: why month four is when ERP implementations actually fail.

Vendors throw a party at go-live. Operators know the party is six weeks too early.

The metric the vendor cares about is whether you went live on the date in the contract. The metric you should care about is whether the system is actually being used the way you bought it to be used. Those two metrics start to diverge the day after go-live, and by month four the gap is wide enough to swallow the whole implementation.

What changes at go-live

Go-live is a hand-off. The implementation team rotates off, mostly. The internal sponsor goes back to their day job. The daily cadence that kept the project moving stops. The bug log gets renamed “enhancement requests,” which is a quiet way of saying “we’re no longer fixing things on the implementation budget.”

And the people who actually have to use the system every day are now using it without the safety net that existed during testing.

None of this is wrong. Implementations have to end. Teams have to move on. The problem is that nobody owns what happens between week 6 and month 6, and almost everything that determines whether the implementation actually succeeded happens in that window.

The four failure patterns

Post-go-live failures show up in predictable shapes. The names are mine. The patterns are the same everywhere.

1. Adoption rot. Shadow systems re-emerge. Excel never died, it was just hiding during testing. Users go back to the spreadsheets they used before, run the new system as a system of record only, and quietly maintain the parallel universe where the real work happens. By month four you have an ERP and a hundred spreadsheets, and the spreadsheets are winning.

2. Workaround proliferation. Every gap gets a manual step. Those manual steps multiply. Each one is justified individually (“we’re just doing this until the next release”), and collectively they become the de facto process. Six months later, somebody tries to document “how we close the books” and the documentation is forty-three steps long, half of which exist outside the system.

3. Training debt. People learned the demo, not the system. The training in week one of go-live taught them the happy path. The edge cases that show up in real work were never covered, because they hadn’t been seen yet. Cross-training never happened, because everyone was busy keeping their own area running. Now one person knows the close process, one person knows the year-end roll, one person knows how to fix a stuck batch. Each of them is going on leave eventually.

4. Reporting drift. The new BI dashboards show worse numbers than the old ones. The data model is right. The data is dirty. Somebody migrated the historical data without re-validating the categorizations. Somebody changed a transaction type and didn’t update the report definition. Nobody trusts the reports, so nobody uses them, so nobody’s motivated to fix them, so they get worse.

How to detect the cliff early

You can’t prevent these patterns by hoping. You catch them by looking at the right things on a schedule, while the implementation team is still around to fix what you find.

  • Adoption metrics, weekly, for six months. Logins per user. Transactions per day. Time-in-app per role. Compare to your assumptions at selection. If the warehouse team was supposed to be doing 80 percent of their work in the new system and they’re at 35 percent, you have an adoption problem to solve, not an enhancement to schedule. (A rough sketch of this check follows the list.)
  • Workaround log. Every manual step recorded. Why does it exist? Is it temporary or permanent? If permanent, what configuration change closes the gap? Reviewed monthly. Without this log, workarounds are invisible until they’re structural.
  • Power-user check-ins, monthly. Not the steering committee. The actual users who run the close, ship the orders, post the journals. What’s harder than it should be? What did you give up on? What are you doing outside the system that you used to do inside the old one?
  • Re-validate the close at months 1, 3, and 6. The close process you tested in UAT is not the close process you’re running in production. New transaction types, new accounts, new partners, new edge cases. Re-validate. Find the drift before it’s embedded.
  • Independent reporting reconciliation. Pick three reports that matter. Reconcile them against the source of truth, monthly, for six months. If the numbers don’t match, fix the report or fix the data. Don’t let “the new system shows different numbers” become a quiet way of saying “we don’t use the new system for that.” (A sketch of this reconciliation also follows the list.)
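
The adoption check does not need a BI project. Here is a rough sketch of what it can look like, assuming your ERP (or its reporting tool) can export a user-activity log to CSV; the file name, column names, and every number in it are placeholders for whatever your own selection assumptions said.

    # A rough weekly adoption check. Assumes an activity-log export to CSV with
    # columns user, role, timestamp, and transaction_type, with ISO timestamps.
    # The file name, columns, and the figures in ASSUMED_DAILY_TXNS are
    # placeholders; substitute what your selection business case actually assumed.
    import csv
    from collections import defaultdict
    from datetime import datetime

    ACTIVITY_LOG = "erp_activity_log.csv"               # hypothetical export
    ASSUMED_DAILY_TXNS = {"warehouse": 400, "ap": 150}  # volume assumed at selection
    ADOPTION_FLOOR = 0.80                               # flag anything under 80% of plan

    login_days = defaultdict(set)                       # user -> days with a login
    txns = defaultdict(lambda: defaultdict(int))        # role -> day -> transaction count

    with open(ACTIVITY_LOG, newline="") as f:
        for row in csv.DictReader(f):
            day = datetime.fromisoformat(row["timestamp"]).date()
            if row["transaction_type"] == "login":
                login_days[row["user"]].add(day)
            else:
                txns[row["role"]][day] += 1

    for role, per_day in txns.items():
        actual = sum(per_day.values()) / len(per_day)   # average transactions per day
        assumed = ASSUMED_DAILY_TXNS.get(role)
        if assumed:
            ratio = actual / assumed
            flag = "OK" if ratio >= ADOPTION_FLOOR else "ADOPTION GAP"
            print(f"{role}: {actual:.0f}/day vs {assumed} assumed ({ratio:.0%}) {flag}")

    # Least-active users first: the people to ask questions of, not to blame.
    for user, days in sorted(login_days.items(), key=lambda kv: len(kv[1])):
        print(f"{user}: logged in on {len(days)} days")

Plain CSV and nothing fancier on purpose: if the weekly check needs a project to run, it won’t get run weekly.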
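
The reporting reconciliation can be just as mechanical. A minimal sketch, assuming you can export the dashboard’s figures and the same figures straight from the system of record as two CSVs; again, file names, columns, and the tolerance are placeholders.

    # A rough monthly reconciliation for one report. Assumes two CSV exports:
    # the figures the new dashboard shows and the same figures pulled from the
    # system of record, both with columns account, period, amount.
    import csv
    from collections import defaultdict

    TOLERANCE = 1.00  # ignore rounding noise below this, in your reporting currency

    def load_totals(path):
        totals = defaultdict(float)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[(row["account"], row["period"])] += float(row["amount"])
        return totals

    report = load_totals("report_export.csv")      # what the dashboard says
    source = load_totals("source_of_truth.csv")    # what the ledger says

    clean = True
    for key in sorted(set(report) | set(source)):
        diff = report.get(key, 0.0) - source.get(key, 0.0)
        if abs(diff) > TOLERANCE:
            clean = False
            account, period = key
            print(f"{account} {period}: report is off by {diff:+,.2f}")

    if clean:
        print("Report reconciles to the source of truth this month.")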

Who pays for the cliff

Not the implementation budget. The implementation budget closed at go-live. Operations pays.

Operations pays in the form of overtime to keep the spreadsheets going, in the form of process drift, in the form of people who quit because the new system was supposed to make their jobs better and didn’t. The cost is real. It just doesn’t show up as a line item in the project P&L.

What to plan for in selection

The post-go-live cliff is best handled by not falling off it. That means treating the months after go-live as a real phase of the project, with budget and ownership, before you sign the implementation contract.

  • Adoption support contract baked into the implementation SOW. 90-day or 180-day. Not as “additional services available,” but as a defined deliverable with named people on the partner side.
  • Training materials in your language, not the vendor’s. Process documentation written by your team during the project, signed off before go-live, owned by your power users. The vendor’s training is generic. Yours has to be specific.
  • A retained budget of 10 to 15 percent of implementation cost. Reserved for post-go-live work. Not for “phase two enhancements.” For finishing the implementation that everyone pretended was finished at go-live.
  • Process owner accountability shifted before go-live, not after. The accounting manager owns the close in the new system starting at UAT, not at month two when the implementation team has already left. They have to live with what gets configured, so they have to drive what gets configured.

Many of these failures trace back to evaluation. See “The ERP demo is theater” for what to demand at the selection stage.

ERP implementations don’t fail at go-live. They fail at month four, when nobody’s looking and everybody assumes someone else owns the system now.

Working through this?

If you went live recently and the system isn’t behaving the way the demos suggested, I do a free 30-minute call to talk through what’s happening.

Don at DWK Solutions

Get in touch · Subscribe to The Operator’s Note