Velocity, Estimates, and Cost

I recently read a blog post by Gojko Adzic (see here) about velocity and how its measurement gets used. It’s a good post — it warns against relying on velocity as an indicator of success. In short, he argues that while low velocity can point to problems, a high-enough velocity doesn’t imply long-term success.

I agree. I also think velocity can be used to revisit estimated completion dates and costs. Much as developers don’t like those notions (I know I don’t).

One of the inescapable issues of software development, at least for me (if you have found a way to escape it, please share it), is the need for upper management to always have an “end date” for the work in sight. For better or worse, estimated completion dates are used to inform high-level budgets, resource allocation, and so forth.

Naturally, regardless of what estimation methods developers use, “the business” wants hours. The idea is to convert those hours to dollars.

But estimating in hours sucks because it’s not reliable (let’s be honest) and it doesn’t really yield a velocity. So some people end up with an arbitrary conversion of hours to “story points” (maybe 1 point is 1 day which is 6 hours or something like that). Which makes the word “point” mean “X hours” and nothing more.

Well, that’s a horrible thing to do. For one, what becomes of velocity? If all you have to measure rate of delivery with is hours, then isn’t that just how many hours of work a team does? Why even measure that number? It will stay constant from iteration to iteration unless you add or remove team members.

If velocity were to actually reflect the pace at which a team is delivering, then it could be used as a predictor. But I submit that then it can’t be tied to hours!

I accept that some things cannot be [easily] changed. Budget approvals have to happen very early in a project’s lifecycle and so you need a relatively high cost and time estimate because the earlier in the lifecycle you are, the more unknowns you have to accept. And no matter what you use to estimate that early in the game, your margin of error will be very large.

But the cool thing about having a properly measured velocity is that revisiting those estimates becomes a trivial exercise. And there’s some value to that, since the sooner it is known that work is taking longer than initially promised, the more time there is to make appropriate adjustments. Conversely, the sooner it is known that work is taking less time than initially expected, the sooner any benefits of this realization can be leveraged.

How to do it? Well, like I already said, hours suck, so stop using hours altogether. They are too precise, and they mean different things for different people in terms of effort. Points of complexity are less precise (particularly if you go with the usual sequence of 1, 2, 3, 5, 8) and more universal. Whether a 2-point deliverable takes me one day or takes another developer half a day, it’s the same amount of complexity delivered, and that’s cool, because now you can look at delivering more or fewer points per unit of time (proper velocity!).

I am serious about not using hours. If someone asks you “how many hours is a point?” you say “NO.” If someone asks you “how long will this take?” you say “NO.” Points are a swag at relative complexity. That’s it. There is no conversion and there are no hours. Deliverables have points as metadata and those points are used by the team to derive velocity — points delivered per iteration. And from velocity, a few other things can be derived.

One is an approximate amount of work for the team to pull into the next iteration. Historical data — the velocity and its trend over the last few iterations — can give the team an indicator of about how much work it can reasonably commit to right now.

The other thing is more relevant to this blog entry, and that is a prognosis for the completion date assuming the current backlog. The backlog is continuously groomed, with work broken down for easier estimation and work added or removed as business needs dictate. A team can calculate their velocity, weighted by recent trends, and use that to get a rough count of how many iterations would be needed to clear out the current backlog.

How do you arrive at a weighted velocity? However you want, as long as it makes sense. I’ve used a formula I basically made up on the spot and it has worked “okay.” Given n is the latest completed iteration and n-1 is the previous iteration:

Weighted Velocity = V(n) * 0.5 + V(n-1) * 0.3 + V(n-2) * 0.2

You can fiddle with the weighting factors; I don’t know how good they are. I just want to try and capture a trend if there is one and also help diminish the effect of any outliers (e.g. everyone having the flu during some iteration).
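That weighting is simple enough to capture in a few lines of Python. This is just a sketch of the formula above; the sample velocities are made up for illustration:

```python
def weighted_velocity(velocities, weights=(0.5, 0.3, 0.2)):
    """Weighted velocity over the last few iterations.

    `velocities` is ordered most recent first: [V(n), V(n-1), V(n-2)].
    The default weights are the ones from the formula above.
    """
    return sum(v * w for v, w in zip(velocities, weights))

# e.g. the last three iterations delivered 22, 18, and 20 points:
print(round(weighted_velocity([22, 18, 20]), 1))  # roughly 20.4
```

Passing the weights in as a parameter makes it easy to fiddle with them, per the caveat above.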

Anyway, so you get your velocity, you look at how many points of work currently remain in your product backlog, and you do simple division and round up. With a velocity of 20 and 175 points remaining, you have 9 more iterations to go. If each iteration is 2 weeks (80 working hours) and your team costs you about $400 per hour altogether, then you are looking at $400 * 80 * 9, which yields $288,000, so you round to $300,000 to preserve the appropriate number of significant digits.

Bam. 18 weeks and about $300k. And that’s as of right now. Look at this at the end of every iteration and adjust. You still should not think of these numbers as any sort of promise, but at least the numbers themselves will be better and better informed, and hopefully closer and closer to accurate as work goes on.
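The whole back-of-the-envelope calculation can be scripted as a sanity check. The numbers here are the ones from the example; swap in your own rate and iteration length:

```python
import math

velocity = 20              # weighted points per iteration
backlog_points = 175       # points remaining in the product backlog
weeks_per_iteration = 2
hours_per_iteration = 80   # 2 weeks * 40 working hours
team_rate_per_hour = 400   # whole-team cost in dollars

# Round up: a partial iteration still costs a full iteration.
iterations_left = math.ceil(backlog_points / velocity)
weeks_left = iterations_left * weeks_per_iteration
cost_left = team_rate_per_hour * hours_per_iteration * iterations_left

print(iterations_left, weeks_left, cost_left)  # 9 18 288000
```

Re-run it with the fresh weighted velocity at the end of every iteration and the prognosis updates itself.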

Is there a whole lot of value in doing this exercise with your project? Well, if you don’t have a need to predict completion date and remaining cost, then of course don’t waste your time on this stuff at all. But if you do have that need, and many organizations still do, then I think this is a reasonable approach.

Though, if you have ideas about how it can be changed for the better, I am very interested in hearing them.
