# The Myth of the Perfect Estimate

*By Bradley A. Malone, PMP*

*This is the ninth in an ongoing series of organizational project management articles by InfoComm University™ senior instructor Brad Malone. Read the eighth installment, "Using a Work Breakdown Structure."*

Is it better for a project estimate to be precise or accurate? It’s my experience, over hundreds of audiovisual integration projects and hundreds more in numerous industries, that most people believe a precise estimate is more accurate — and that most of the time, “precise” and “accurate” are considered synonymous.

So let’s look at the science of estimation and dispel some myths and confusion. First, let’s define some terms: *precision* typically means a degree of reproducibility or exactness, whereas *accuracy* is a degree of probability or veracity. Most people think that a precise (exact) estimate must also be the most accurate (or predictable). But nothing could be further from the truth, especially when it comes to AV (or other) projects. That’s because projects are future-based and always have some level of uniqueness, whether it be the customer, equipment, location, project team or other subcontractors. And it’s this uniqueness — as well as the uncertain prospect of project risks — that does not allow for precision, no matter how much we desire it. The less certainty we have about a project, the more our estimate must include a measure of variance. Precision, therefore, is truly unattainable.

But if we’re looking to create an accurate estimate, we need to analyze its three characteristics: assumptions, methodology and presentation.

**Breaking Down Assumptions**

In an earlier article, I explained assumptions in detail. In a nutshell, assumptions are conditions that project stakeholders (client, sales, project manager, implementation team, etc.) believe are true now or will be true at some future point in time. These conditions form the basis of an estimate and often become driving factors in its determination.

How do assumptions factor into an estimate? Let’s break down a typical commute to work. Recently, I was with a client and asked her how long it took her to get to work in the morning and how long she’d been driving that same route.

“Seven minutes for 10 years,” she said.

“Every day?” I asked. “Yes, every day,” she said.

“Really? No variance at all?” I asked.

“Well, anywhere between 5 and 9 minutes, with an average of 7 minutes,” she said.

Then I asked her another question, assuming she drove the same route every day: “Do those 5 to 9 minutes cover every commute?” Her answer: “Same route every day, but it takes 5 to 9 minutes only 95 percent of the time; 5 percent of the time it can be as high as 15 minutes.”

So let’s take a closer look at this scenario. Each individual commute is a recurring operation, and going to work every day for a decade gives us knowledge and certainty about the drive’s characteristics — 10 years of driving the same route results in more than 2,000 past data points, which can be used to predict the future with some level of certainty. She initially gave me an exact number, which in reality was an average she potentially never hit exactly. Looking at variance, 95% of the time she was between 5 and 9 minutes, so she averaged 7 minutes, ±29% (at 95% probability). Taking into account that sometimes it took her 15 minutes, you have a variance of -29% to +114% at 100% probability.
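Those variance bands are simple arithmetic on the commuter's own numbers: the deviation of each bound from the 7-minute average. A quick sketch in Python:

```python
# Variance bands from the commute example: average 7 minutes,
# 5 to 9 minutes covers 95% of drives, 15 minutes is the worst case.
average = 7.0
low, high = 5.0, 9.0   # bounds of the 95% band
worst = 15.0           # upper bound of the 100% band

def pct(minutes):
    """Deviation of a drive time from the average, as a percentage."""
    return (minutes - average) / average * 100

print(f"95% band:  {pct(low):+.0f}% to {pct(high):+.0f}%")
print(f"100% band: {pct(low):+.0f}% to {pct(worst):+.0f}%")
```

Note that the bands are asymmetric at 100% probability: the drive can run far longer than average, but only a little shorter.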

What’s amazing to me is that when we actually measure operations (things we’ve done the same way thousands of times) we find that they often have much more variance in them than our project estimates do, even though our projects always contain uniqueness and uncertainty.

What conditions impacted her drive? She identified three factors that impacted her commute before she ever left her house: day of the week, time of day, and weather. And there were three factors that could impact her drive along the way: two different school bus stops and one potentially busy intersection. Think of the three in-drive factors as project milestones.

In order to make a more precise (less variant) estimate, we would make assumptions based on those six determinant factors. We could say, for example, that she can drive to work in 6 to 7 minutes on a sunny Tuesday, when she leaves between 7:15 and 7:30 a.m., and doesn’t encounter school buses or traffic at the intersection. We’ve made six assumptions that help us narrow the estimate and make it more precise. Are these assumptions true? And what happens when they’re not? We’ll address these questions in a moment.

**Methodologies for Estimating**

For now, let’s look at another characteristic of estimates, namely methodology. There are three basic methodologies used for estimating, with some variants. The first is a parametric estimate, where we know two or three parameters of a project and we extrapolate a range from those parameters. These parameters can be anything from the number of classrooms to be installed (including projectors, Smart boards and controls), to square footage (if building a house), to a number of stories (if building a skyscraper). The range of a parametric estimate is typically -25% to +75%, but can be higher (software development projects can be as high as -50% to +400% at their inception). The key is to determine the most important and meaningful parameters and then collect data that correlates to those parameters. This is also the quickest form of estimation, but it can be risky if the parameters are not selected or measured correctly.
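As a minimal sketch of the parametric approach, take classroom count as the chosen parameter; the $8,500 unit cost below is a hypothetical figure, and the multipliers encode the typical -25%/+75% band cited above:

```python
# Parametric estimate: extrapolate a range from one measured parameter
# (classroom count) times a historical unit cost. The $8,500 unit cost
# is hypothetical; the multipliers encode the -25%/+75% band.
def parametric_estimate(classrooms, cost_per_classroom=8500,
                        low_factor=0.75, high_factor=1.75):
    base = classrooms * cost_per_classroom
    return base * low_factor, base, base * high_factor

low, base, high = parametric_estimate(50)
print(f"Base ${base:,.0f}; range ${low:,.0f} to ${high:,.0f}")
```

Presenting all three numbers, rather than the single base figure, keeps the estimate honest about its variance from the start.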

The second methodology is the analogy, or top-down, methodology. In this method the estimating team looks at previous, similar projects at the work breakdown structure (WBS) level 2 or 3 and estimates costs based upon the likelihood that the current project will correlate to those of the past. The key to this methodology is to understand the analogies, but you should also have an effort- and cost-tracking system that aligns with the WBS. Using a common site-survey form will help develop meaningful analogies across projects. Many times, I’ve seen companies use the analogy methodology with no relevant historical data, in which case they’re really just guessing. The range of an analogy estimate is typically -10% to +25%, but often can be wider.
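Here is a minimal sketch of the analogy approach, assuming an effort- and cost-tracking system that stores actuals by WBS level-2 element (all past-project figures below are invented for illustration):

```python
# Analogy (top-down) estimate: average the actuals of similar past
# projects at WBS level 2, then apply the typical -10%/+25% band.
past_actuals = {
    # WBS level-2 element -> actual cost on three similar past projects
    "design":        [12_000, 15_000, 13_500],
    "installation":  [64_000, 71_000, 68_000],
    "commissioning": [ 9_000, 11_000, 10_500],
}

base = sum(sum(costs) / len(costs) for costs in past_actuals.values())
low, high = base * 0.90, base * 1.25
print(f"Analogy estimate: ${base:,.0f} (range ${low:,.0f} to ${high:,.0f})")
```

Without the historical rows in that table, there is nothing to average, which is the point made above: analogy estimating with no relevant data is just guessing.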

The third methodology is the bottom-up estimate, where *actual* data from the current project is used to extrapolate a forecast for the remaining work. It’s often employed in conjunction with the analogy methodology. If we determined that 50 classrooms would take a day apiece (two techs, 16 labor hours, plus or minus two hours), and we found that the first four classrooms took 17 hours apiece, we would use those numbers to move forward. The range using the bottom-up methodology is typically -5% to +10%. When it comes to forecasting within the bottom-up system, however, some companies will stick to a precise estimate of 16 hours per classroom. When the first four classrooms take 17 hours, they then forecast — often erroneously — that the rest of the classrooms will take less time, based on the learning curve, and that crews can speed up enough to make up for the overage of the first four.
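Using the article's classroom numbers, the trend-based bottom-up forecast looks like this (as opposed to the wishful catch-up forecast):

```python
# Bottom-up forecast: re-plan the remaining work at the observed rate.
# 50 classrooms estimated at 16 labor hours each; the first 4 actually
# took 17 hours each.
rooms_total, rooms_done = 50, 4
estimated_per_room = 16
actual_hours = rooms_done * 17             # hours spent so far

observed_rate = actual_hours / rooms_done  # 17 hours per room
forecast = actual_hours + (rooms_total - rooms_done) * observed_rate

print(f"Original estimate:    {rooms_total * estimated_per_room} hours")
print(f"Trend-based forecast: {forecast:.0f} hours")
```

The trend forecast comes out 50 hours over the original 800-hour estimate; acknowledging that overage after four rooms, rather than hoping the remaining 46 absorb it, is what lets the team manage it.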

I’ve found that established trends usually beat out wishful thinking. It’s also worth noting that the commute-to-work scenario had a wider variance range than the typical analogy and bottom-up ranges. That just means valid and numerous assumptions are critical when using estimation methodologies in situations where the estimate needs smaller variation.

I’ve seen other methodologies for estimating jobs. There’s the SWAG method (scientific or silly wild-a** guess, depending on who’s doing the guessing); the checkbook method (how much does the client have in its checkbook?); the approval authority method (how much can the client approve?); and the expectation method (how much did the client *think* it was going to cost?). These methods may help you win jobs, but they can also help bankrupt the company.

**Presenting the Estimate**

Now let’s talk about presenting the estimate. A valid estimate must be conveyed as a range, or as a point with confidence factors. The drive-to-work estimate could be presented as between 5 and 9 minutes with 95% probability; as 7 minutes with a variance of ±29% (at 95% probability); or as 7 minutes with a variance of -29% to +114% (at 100% probability). A single, precise number does not make a valid estimate.

Given the three components of an estimate, what do we do once a project starts? Let’s look again at the commute example. We made three key assumptions for conditions that existed prior to the drive (time of departure, day of week, and weather), and three based on conditions during the drive (no school buses at two stops and smooth sailing through the intersection) in order to arrive at an estimate of 6 to 7 minutes. We would expect the commuter to update the conditions as she knew them, which would impact the forecasted estimate. Maybe she doesn’t get out of her house until after 7:30 a.m., at which point she knows 6 minutes is no longer possible. Then maybe she encounters a school bus at the second stop, which puts her drive estimate at closer to 8 minutes. She makes it through the intersection as planned and arrives in 7 minutes 40 seconds.

Measuring the actual occurrence must be precise, in order to learn from it, but initial estimates must be in a range. I often see the opposite: The estimate is precise but tracking actuals is vague. Or the loop between the actuals and the people estimating is never closed.

I also often find that even though assumptions are written into a scope of work, they’re rarely tracked or used to make changes to the estimate or forecast the result. In the driving example, three conditions could be verified at the beginning of the commute and three others could be turned into discrete milestones — places in the project where we have important knowledge with which to update our estimate.

We must always reward our people for telling the truth when it comes to actuals and variances, otherwise we’ll never know whether our current estimates are valid. We’ll also never learn how to estimate better, and we’ll be blinded (or blind-sided) by over-confidence.

Valid estimation does not have to be onerous, but it does have to be thoughtful and follow some prescribed guidelines. What were the key factors, conditions and assumptions that impact the estimate? What methodology will be used, and can we substantiate its use? How can we present our estimate in a meaningful way that shows the potential for variance? If it’s a fixed-price bid, how much risk are we willing to take on, and where in our estimate range will we establish our price? And finally, how do we track actual occurrences in order to continually revise our assumptions and validate our estimating information?

Precise estimates will never be correct, but the more you know, the closer you can get to a valid, realistic, and therefore perfect estimate.

*Bradley A. Malone, PMP, is an InfoComm University™ senior instructor and president of Twin Star Consulting, an organizational excellence and program management consulting company serving multiple industries. He holds the Project Management Professional (PMP®) designation from the Project Management Institute (PMI) and is one of PMI’s highest-rated instructors. Please share your thoughts with him at **brad@twinstarconsulting.com**.*