Thousands of project managers are in a mad rush to push projects live. Why are PMs more willing to go over budget than to miss a deadline?
For business systems, the project go-live date really does matter - and project managers seem more willing to sacrifice budget limits than to blow past a scheduled deadline. There are sound business reasons for this: bonuses, commissions and stock valuations depend on the quarter's revenues and earnings. So you don't want to make any changes that would mess up this quarter's results, but you do want to implement changes as early as possible next quarter.
Now for the problem: you've got a few days to go, and the time pressure to do the cutover will only increase. And with that, the collective IQ of the project team will only decrease. Because of the 4th of July weekend, everyone will be lobbying for a flash-cutover to the new system so everyone can make their quarterly bonuses.
In the final push to release, you’ll hear all kinds of helpful suggestions to expedite things:
➢ Skimp on unit tests, focus on integration testing.
➢ Do integration testing and user-acceptance testing in parallel.
➢ Skip the configuration control overhead.
➢ Do debugging and make fixes directly in production.
➢ Skip some of the harder test cases, particularly for integrations.
➢ Do functional testing only for the top three use-cases.
➢ Shift the user-acceptance testing to an offshore team.
➢ Worry about data quality after go-live.
➢ Bring in a tiger team of additional staff.
➢ Get the consultants to work double shifts.
➢ Forget about doing business process validation and code reviews.
➢ Don’t turn on all the integrations until later; do nightly batch data updates to keep things somewhat synchronized.
➢ Spin up a shadow copy of the new system and have a few low-impact users on it, so you can claim go-live even though the heavy-lifting users are still on the old system. Meanwhile, developers will work only on the real version of the new system where there are no users. (You can call this the “Potemkin Village” strategy - you’ll also need some low-cost offshore resources to do the triple-entry of data that will be required.)
➢ Don’t do short weekly, task-based, effective training. Instead, do it as one day-long session.
➢ Skip the luxury of hand-holding, developing power-user mavens, or go-live-week support. These things will just happen on their own, and our developers will need to get on to the next project immediately (because that one’s late, too).
In other words, you'll hear all kinds of half-baked, rookie ideas coming from people who are otherwise professionals. This is the kind of stuff you'd immediately reject a job applicant for suggesting, but when the ideas come from a review committee full of bosses... well, you sometimes have to bite your tongue (and sometimes almost bite it clear off).
But the industry doesn’t seem to be getting any better at this, particularly when it comes to projects with lots of dependencies, or with complexities that could have been avoided. I’m particularly irked about the kind of magical thinking that just begs for train wrecks: “well, while we’re at it, let’s add this risky artificial complexity to that out-of-control project…”
The answer is in the question
How do you avoid this kind of project schedule crisis? Make the project less tightly coupled. We did it with software (asynchronous web services) - why aren't we doing it with the project itself?
Try these ideas on for size:
➢ Make bonuses and other incentives contingent on achieving IT goals, but make sure they are not tied to specific go-live dates.
➢ To the extent possible, base all integrations on a real messaging system that logs all messages (and keeps the logs for at least a couple of weeks) to improve debugging, testability, reconciliation and recovery efforts.
➢ Just assume that every integration will have more dirty data and muddled semantics than you can imagine. So no matter how realistic the schedule is for migration, normalization and cleansing... add 25 percent to it. Really.
➢ Do the continuous integration thing: All code has unit tests, complete system builds nightly with test suites that grow along with the code, bring external integrations and some real data into the system as early as possible in the project.
➢ Do the user-centered design thing: get users working with the UI and evaluating the correctness of any results before you’ve reached 25 percent of the project spend… and keep them involved until the end.
➢ Do the staging / sandbox thing early: spring for a “full sandbox” or staging copy from every vendor that will sell you one. Use them from the mid-point of the project, at the latest. Really. Connect the sandboxes to each other so you have as realistic a development and testing environment as early as possible.
➢ Do the reality-therapy thing: your executive sponsors (you do have executive sponsors, don’t you?) should be giving demos of their part of the system to their teams at the end of each sprint. This is good for user adoption but it’s even better to make sure expectations about schedule and features don’t get out of hand.
➢ Do the prioritization thing: the way to avoid the end-of-project crash is to reduce deliverables along the way. This is really difficult, but it is the rare project indeed that has more budget and schedule than it needs. So lighten the load at every turn - enforce a feature diet.
➢ Do the game-theory thing: otherwise known as “have a backup plan,” this involves a serious discussion at least 6 weeks prior to go-live about how the business would handle a “no-go” decision.
➢ Un-do the kill-the-messenger thing: at every significant milestone review, give explicit permission to team members to reveal negative information. Through your words, actions and facial expressions, encourage people to be realistic about schedules and risks. Never punish the delivery boy for bad news, and don't reward "yes men" and used-car-salesman behavior.
➢ And the grand-daddy of them all - un-do the big-bang thing: use phased deployments, with each functional increment building upon a stable base from the previous cycle. Bring in historical data incrementally (start with last year’s data, then add data from two years ago, etc.) and be willing to have low-priority data available only offline for a while.
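The message-logging idea above can be sketched in a few lines. This is a minimal, in-memory illustration only - the `LoggedBus` class and its methods are hypothetical, and a real deployment would use a broker such as Kafka or RabbitMQ with its own retention settings. The point it demonstrates is the practice itself: log every message *before* delivery, keep the log for a couple of weeks, and expose a replay view for reconciliation.

```python
import time
import uuid

class LoggedBus:
    """Toy message bus that records every message it carries, so
    integrations can be debugged, replayed and reconciled after the
    fact. (Hypothetical sketch; a real system would use a broker.)"""

    def __init__(self, retention_seconds=14 * 24 * 3600):  # keep ~2 weeks
        self.retention = retention_seconds
        self.log = []            # retained messages, oldest first
        self.subscribers = {}    # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        msg = {"id": str(uuid.uuid4()), "topic": topic,
               "ts": time.time(), "payload": payload}
        self.log.append(msg)     # log first, then deliver
        self._expire()
        for cb in self.subscribers.get(topic, []):
            cb(msg)
        return msg["id"]

    def replay(self, topic):
        """Every retained message on a topic - the reconciliation view."""
        return [m for m in self.log if m["topic"] == topic]

    def _expire(self):
        cutoff = time.time() - self.retention
        self.log = [m for m in self.log if m["ts"] >= cutoff]
```

When an integration misbehaves mid-cutover, `replay("orders")` gives you the exact sequence of what crossed the wire - which is the difference between a ten-minute diagnosis and an all-nighter.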
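"All code has unit tests" from the continuous-integration bullet is cheap to honor even for migration glue code. Below is an illustrative example (the `normalize_customer` function and its test data are made up): a small piece of cutover logic with a unit test that a nightly build can run via `python -m unittest`.

```python
import unittest

def normalize_customer(record):
    """Hypothetical cutover logic: standardize a customer record
    arriving from an integration feed before it is loaded."""
    return {
        "name": record["name"].strip().title(),
        "email": record["email"].strip().lower(),
    }

class TestNormalizeCustomer(unittest.TestCase):
    def test_trims_whitespace_and_fixes_casing(self):
        rec = {"name": "  ada LOVELACE ", "email": " Ada@Example.COM "}
        out = normalize_customer(rec)
        self.assertEqual(out["name"], "Ada Lovelace")
        self.assertEqual(out["email"], "ada@example.com")
```

The test suite grows with the code, so when someone proposes "skimp on unit tests" in the final week, the tests already exist and cost nothing to keep running.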
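The incremental historical-data idea in the last bullet can be made concrete with a trivial phasing helper. This is a sketch under assumptions - the `migration_phases` function and record shape are invented - but it shows the principle: phase 0 is the most recent year and loads at go-live, each older year gets a later phase, and the oldest phases can stay offline for a while.

```python
def migration_phases(records, go_live_year):
    """Group historical records into incremental load phases:
    phase 0 = the most recent year (loaded at go-live),
    phase 1 = the year before, and so on. (Hypothetical helper;
    assumes each record carries a 'year' field.)"""
    phases = {}
    for rec in records:
        phase = go_live_year - rec["year"]
        phases.setdefault(phase, []).append(rec)
    return dict(sorted(phases.items()))
```

Slicing the migration this way also shrinks the dirty-data problem: you cleanse one year at a time instead of the whole archive at once.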