As I talk with companies around the world, it's clear that a significant number of them are still mired in the productivity, efficiency, and optimization mud. It's easy to spot them because they are often maniacal about measuring velocity—team velocity, velocity across teams, velocity rolled up to an organizational level, or even velocity per developer (yuck). Velocity is thereby killing agility. It's the ultimate example of applying a reasonable tool for the wrong reasons:

  • Velocity is increasingly being used as a productivity measure (not the capacity calibration measure that it was intended to be) that focuses too much attention on the volume of story points delivered.
  • Focusing on volume detracts from the quality of the customer experience delivered and from investing enough in the delivery engine (technical quality).
  • Giving the product owner/manager complete priority control makes the problem worse—we have gone from customer focus to customer control that further skews the balance of investing in new features versus the delivery engine.
  • Particularly for those parts of the business for which high responsiveness (a deployment cycle time of days or weeks) is critical, investment in the delivery engine is as critical as investing in new features.
  • Management needs to allocate resources between features and engine work and then create a product ownership team consisting of the product owner and technical leader to do feature prioritization.
  • Value (value points) and cycle time metrics will help balance the detrimental effects of velocity measures.

Overemphasis on velocity causes problems because of its wide use as a productivity measure. The proper use of velocity is as a calibration tool, a way to help do capacity-based planning, as Kent Beck describes in Extreme Programming: Embrace Change. Productivity measures in general make little sense in knowledge work—but that's fodder for another blog. Velocity is also a seductive measure because it's easy to calculate. Even though story points per iteration are calculated on the basis of releasable features, velocity at its core is about effort.
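To make the distinction concrete, here is a minimal sketch (in Python) of velocity used as intended: a calibration input for capacity-based planning rather than a score to maximize. The function name and the velocity numbers are hypothetical, purely for illustration.

    # Velocity as a capacity-calibration tool, not a productivity score.
    # All numbers below are made up for illustration.

    def planning_capacity(recent_velocities, window=3):
        """Estimate next iteration's capacity as a rolling average of
        recent iterations (Kent Beck's "yesterday's weather")."""
        recent = recent_velocities[-window:]
        return sum(recent) / len(recent)

    # Story points completed in the last five iterations.
    velocities = [22, 25, 21, 24, 23]

    # Proper use: a planning input for the next iteration's commitment.
    print(f"Plan around {planning_capacity(velocities):.0f} points next iteration")

    # Misuse: treating the same number as a target to push ever higher,
    # which rewards volume of story points rather than customer value.

Used this way, a dip from 24 to 21 points is just noise in the calibration, not a performance problem to "fix."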

While Agile teams try to focus on delivering high-value features, they get side-tracked by reporting on productivity. Time after time I hear comments from managers or product owners like, "Your velocity fell from 24 last iteration to 21 this time, what's wrong? Let's get it back up, or go even higher." In this scenario velocity has moved from a useful calibration tool (what is our capacity for the next iteration?) to a performance (productivity) measurement tool. This means that two things are short-changed: the quality of the customer experience (quantity of features over customer experience) and improving the delivery engine (technical quality).

Agility is the ability to both create and respond to change in order to prosper in a turbulent business environment. It means we have to build and sustain a continuous value delivery engine, not one that bangs out many features in the first release and then rapidly declines in ability thereafter. The ultimate expression of agility from a software perspective is continuous delivery and deployment. Our goal should not be productivity, but to "design and deploy a great customer experience quickly—again and again over time." In order to respond to business, technology, and competitor turbulence in the market, we have to focus on delivering a superior customer experience, building new features, and improving the delivery engine that allows us to deliver quickly each and every release cycle.

Compounding the problem, the Agile movement has focused on high levels of customer involvement—basically a good thing—but we've gone too far. A large number of Agilists lament that they can't get organizations to focus on technical practices—but why should that be a surprise when we encourage product managers/owners to make all the priority decisions and then measure performance using velocity? We have overcompensated for the lack of customer focus in traditional methodologies—by giving control of prioritization to the product owner/manager. The tendency has been to lump all work under the heading of "customer-facing" stories and assume we can convince product owners to agree to all the technical engine work that needs to be done. This is an over-correction from traditional projects in which months of technical work preceded any customer-facing work (which was an even worse problem).

Product managers/owners understand prioritizing customer experience work, but they are generally neither incented to prioritize, nor equipped to evaluate, the technical engine work that needs to be done. We need to create a product ownership team consisting of the product owner and a technical leader (which may include a quality assurance lead). Product owners speak for the customers of today, while technical and quality leaders speak for the customers of tomorrow.

Imagine our software delivery engine as a high-powered jet engine in a fighter aircraft. What happens if we fail to perform adequate maintenance on that engine? It gets gunked up. What if we don't periodically rebuild parts of the engine? In software delivery we gum up the engine by ignoring technical debt, delaying refactorings, disregarding automated testing, under-investing in continuous integration tools and processes, and accepting long deployment cycles. These things are often dismissed as "technical niceties" rather than treated as what keeps the engine running at optimum capacity.

In his book The Principles of Product Development Flow, Don Reinertsen presents the formula: Cycle time / Value-added time = 1 / (1 − Utilization). Therefore, when teams optimize their process for velocity, they push utilization toward 100%, which increases the amount of waste in the delivery process and lengthens cycle time. Conversely, when teams optimize for cycle time, utilization (and thus velocity) goes down. There has to be a balance.
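The nonlinearity is the point, and a few lines of arithmetic make it vivid. This is a generic illustration of the formula above; the utilization levels are arbitrary examples, not figures from the book.

    # Cycle time / value-added time = 1 / (1 - utilization).
    # Illustrative utilization levels; not data from Reinertsen's book.
    for utilization in (0.60, 0.80, 0.90, 0.95, 0.99):
        multiplier = 1 / (1 - utilization)
        print(f"utilization {utilization:.0%}: cycle time = "
              f"{multiplier:.1f}x value-added time")

"Keeping everyone busy" is precisely how cycle time blows up: moving from 80% to 95% utilization quadruples the multiplier, from 5x to 20x.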

Business has moved from a world of periodic change to one of continuous change. In parallel, software development is evolving from project work (deliver software in a big batch and then maintain it) to product-oriented work (deliver software in continuous releases). In order to drive behavior in the right direction we need to incorporate measures like value, delivery cycle time, and technical quality into our performance criteria. We need to calculate feature "value" points as well as feature "effort" points (story points). Cycle time can be an excellent measure because it depends on the software delivery engine working well, and it's a measure business customers understand. When we ask, "Do we want new features or quality (or technical debt reduction, etc.)?" it's easy for product owners/customers to say, "Of course, new features." Instead we need to ask, "How do we balance new feature delivery with cycle time reduction?" [Note: Cost of Delay, per Don Reinertsen's book, can also be a useful metric.]
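Here is a hedged sketch of what scoring a backlog on both dimensions might look like. The backlog items, value points, and cost-of-delay figures are invented for illustration; CD3 (Cost of Delay divided by duration) is the prioritization heuristic Reinertsen describes.

    # Scoring features by value per effort point, plus CD3 (Cost of Delay
    # divided by duration). All numbers are invented for illustration.
    backlog = [
        # (name, value_points, effort_points, cost_of_delay_per_week, weeks)
        ("New checkout flow",        13, 8, 10_000, 2),
        ("Refactor build pipeline",   8, 5,  6_000, 1),  # delivery-engine work
        ("Loyalty widget",            5, 8,  2_000, 3),
    ]

    for name, value, effort, cod, weeks in backlog:
        print(f"{name}: value/effort = {value / effort:.2f}, "
              f"CD3 = {cod / weeks:,.0f} per week")

In this made-up example, the engine work scores competitively on both measures, something a velocity-only view would never reveal.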

This blog should not be interpreted as anti-velocity; velocity still has a place in capacity planning. The problem is the weight given to velocity and turning it into a productivity measure. Because it is easy to measure, we measure it. But it is even more important to measure things like feature value, feature delivery cycle time, and quality (defects, technical debt, etc.). Delivering high-value features to customers should be paramount, but the focus should not be one-time delivery; it should be continuous delivery. The debate is not features versus quality; it's balancing delivering features quickly today with increasing our ability to deliver features quickly tomorrow—over and over again. Business responsiveness is a capability, not a one-time event. Supporting that ongoing need for business responsiveness requires a high-powered, well-oiled software delivery engine.

Note: Thanks to Martin Fowler, Jez Humble, and Ken Collier for their comments and ideas on this article.