In today’s world, information is vital. It drives decisions, informs goals, and assesses initiatives. In software development, information is abundant, and much of it is readily available from the tools we use to manage our work. Sadly, that information is one-sided and tells only part of the story.
Strategic Software Management and Measurement
In any strategy-driven organization, measurement needs and reports flow in two opposite directions.
Bottom-up information flow
What comes from our tools (and possibly some manual input) is operational data. This data originates primarily at the individual-contributor level and is context-specific. Because of where it originates, it is high in volume, composed of different measures that must be combined to be meaningful, and highly contextual: so contextual that the numbers only make sense to readers at that level. Therefore, it needs to be summarized and analyzed at the operational level, by lower-level management (a.k.a. “B-level”).
In software, we are talking about basic measures such as story points (or any other measure of software size), effort, and number of defects, as well as derived measures such as velocity (or any other measure of productivity), defect density, and escaped-defects rate.
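To make the distinction concrete, here is a minimal sketch of how derived measures are computed from basic operational data. The sprint records, field names, and numbers are all hypothetical, and the formulas are the common simple definitions (velocity as average points per sprint, defect density as defects per story point, escaped rate as the share of defects that reached production):

```python
# Hypothetical operational data collected per sprint.
sprints = [
    {"story_points": 34, "defects_found": 5, "defects_escaped": 1},
    {"story_points": 28, "defects_found": 3, "defects_escaped": 0},
    {"story_points": 31, "defects_found": 6, "defects_escaped": 2},
]

total_points = sum(s["story_points"] for s in sprints)
total_defects = sum(s["defects_found"] for s in sprints)
total_escaped = sum(s["defects_escaped"] for s in sprints)

# Derived measures: combinations of basic measures that carry meaning.
velocity = total_points / len(sprints)          # average points per sprint
defect_density = total_defects / total_points   # defects per story point
escaped_rate = total_escaped / total_defects    # share of defects missed

print(f"velocity: {velocity:.1f} points/sprint")
print(f"defect density: {defect_density:.3f} defects/point")
print(f"escaped defect rate: {escaped_rate:.0%}")
```

Note that none of these numbers is meaningful on its own; each derived measure only exists by combining two or more basic measures, which is exactly why raw tool exports need summarizing before they travel up the hierarchy.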
Top-down information flow
If the operational level reports their results, they need to know which results matter. In other words, if the organizational strategy says “let’s improve on quality,” reporting on velocity while leaving defect density and the escaped-defects rate aside may not be valuable: those numbers don’t align with the strategy and won’t provide insight into achieving the organizational goals. Therefore, higher management levels must provide their goals, strategy, and information needs to the operational level.
Without the top-down flow (goals and strategies), the operational level doesn’t know what to report: they will report either too much useless data or not enough. In either case, you run the risk of middle and higher management digging into operational data directly. A lot of precious time will be lost drilling into detailed data without contextual information. In this prevalent scenario, the numbers often won’t make much sense, triggering questions to the operational level, who will have to supply the context, usually in a communication format that feels like having to “justify or explain themselves.” That will make them wary and reluctant to attend these meetings.
Not requesting pertinent information from the operational level is a frequent problem in many organizations, because they lack the proper strategy definitions to do it right. A great Harvard Business Review article, “Customer Intimacy and Other Value Disciplines,” describes good examples of high-level strategies. It highlights three value disciplines: Operational Excellence (run things well and cheaply, with one-size-fits-all solutions to reduce costs and prices), Customer Intimacy (customizable solutions to support a variety of clients you know very well), and Product Leadership (innovative products or features that give your clients an advantage or keep them hooked). If you want to learn more about these strategies, the authors also wrote an excellent book on the topic.
So, let’s say your organization has set goals around a market leadership strategy of Operational Excellence. In this case, you may want to reduce costs by increasing productivity, which may mean looking at velocity, capacity, or another productivity measurement. You may also want to look at escaped defects, as these would increase your maintenance costs substantially.
If the organization is aiming for Customer Intimacy, you will likely be focusing on features requested by clients, so you may want to improve your predictability and estimation accuracy. To get there, you may wish to analyze scope change, risk management, the percentage of backlog items with estimates, etc.
However, suppose the organization is pursuing Product Leadership. In that case, you can track how many suggestions for new features and improvements are coming from the teams, how many are accepted into the backlog, time-to-market, and productivity (as you don’t want your competitors catching up).
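The three strategy-to-measure pairings above can be sketched as a simple lookup, so the operational level always knows what to report for the strategy in force. The measure names here are illustrative placeholders, not a prescribed taxonomy:

```python
# Hypothetical mapping from value discipline to the measures worth
# reporting, following the examples in the text.
STRATEGY_MEASURES = {
    "operational_excellence": ["velocity", "capacity", "escaped_defects"],
    "customer_intimacy": ["scope_change", "risk_exposure",
                          "pct_backlog_estimated"],
    "product_leadership": ["feature_suggestions", "suggestions_accepted",
                           "time_to_market", "velocity"],
}


def measures_for(strategy: str) -> list:
    """Return the measures the operational level should report."""
    if strategy not in STRATEGY_MEASURES:
        # No strategy definition means nobody knows what to report:
        # exactly the failure mode described above.
        raise ValueError(f"no measurement plan defined for {strategy!r}")
    return STRATEGY_MEASURES[strategy]


print(measures_for("customer_intimacy"))
```

The interesting part is the failure case: when no strategy is defined, there is no right answer to “what should we report?”, which is the root of the too-much-or-too-little reporting problem.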
So, if you want to start measuring your organizational performance, start top-down, with goals and strategies, and then drill down to the operational level. It’s simpler than it looks and much healthier for the whole organization.
Cascading goals through the organization
Just as local optimizations may not add up to a global optimization, the same logic applies when breaking down goals across organizational units (departments, teams, etc.). As you distribute organizational goals to various departments and teams, you need to consider two different dimensions:
- Achievement capability: Can that team or department achieve the defined goal? I’ve seen organizations state goals like “let’s reduce our defect backlog by 50% during the next quarter.” It’s a good goal: precise and well-defined. But while it means fixing 5 defects for a team with 10 defects in its backlog, it may mean fixing 50 defects for another team with 100. Is that team capable of delivering the desired results? Is it a reasonable ask? If the team with 100 defects can only fix 20 in that period, achieving the organizational goal may mean that some teams will have to fix most, if not all, of their defects.
- Measurement level: Is the goal being measured at that team/department level? Let’s examine a similar goal to reduce the defect backlog by 50%. Now, assume that in your Engineering department, you have a few research and development teams responsible for developing new features, a few teams for enhancing existing features and developing on-demand projects for clients, and, lastly, two support teams responsible for fixing defects. Does it make sense to demand a reduction in the defect backlog from the research and development teams? Should they be diverted to fixing defects? Should this goal be used to measure their performance? Maybe they could have a goal related to reducing defect density in the new features they produce, but not necessarily fixing defects. This is just an example; I’m not advocating that you should divide your teams like that, ok? :-)
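The achievement-capability check above is simple arithmetic, and it is worth actually running before committing to a cascaded goal. Here is a sketch using the numbers from the example (team names, backlogs, and fix capacities are made up for illustration):

```python
# Hypothetical teams: backlog size and how many defects each
# can realistically fix in the quarter.
teams = {
    "team_a": {"backlog": 10, "capacity": 8},
    "team_b": {"backlog": 100, "capacity": 20},
}

# Naive cascading: apply the organizational 50% cut to each team.
for name, t in teams.items():
    flat_ask = t["backlog"] * 0.5
    feasible = flat_ask <= t["capacity"]
    print(f"{name}: asked to fix {flat_ask:.0f}, "
          f"can fix {t['capacity']} -> "
          f"{'feasible' if feasible else 'NOT feasible'}")

# Sanity check at the organizational level: is the overall target
# even achievable given the combined capacity?
org_backlog = sum(t["backlog"] for t in teams.values())
org_target = org_backlog * 0.5
total_capacity = sum(t["capacity"] for t in teams.values())
print(f"org target: fix {org_target:.0f} of {org_backlog}; "
      f"total capacity: {total_capacity}")
```

With these numbers, the 50% cut is an easy ask for team_a (5 of 10) and an impossible one for team_b (50 against a capacity of 20), and the organization-wide target of 55 exceeds the combined capacity of 28. A flat percentage cascaded uniformly can be unachievable even when it sounds precise and well-defined.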
Be accountable, manager!
Reporting and asking for information can be tense and may cause friction. In my experience, most of it comes from people feeling they need to justify or defend themselves, and that measurement will work against them. The guidelines above will help you get through that. The other hurdle is to know when and how you need to step in. If you are managing people, you know they are doing their best, so the last thing you want to do is to tell them to “improve the numbers.” You want them to improve their practices, which will improve the numbers. If the numbers show that improvements need to happen, you should not get in their way. It’s their responsibility to improve, but it’s your accountability.
You first want to ask whether they have a plan to improve and what it is, or when they will present one. Assess it and give them feedback. Suggestions are always welcome, but you didn’t hire good people to tell them what to do. So if they need to try things to learn, give them space. Encourage them to ask for your help in coming up with a plan. You don’t want to pressure or push; you want to show you can help and would be happy to do so. You can’t demand trust. You give it for free and hope it will be returned. If it isn’t, maybe you are trying too hard (or micromanaging).
I hope these guidelines help you communicate your measurement expectations better. Feel free to share ideas, suggestions, or criticism in the comments!
If you like this post, please share it (you can use the buttons at the end of this post). It will help me a lot and keep me motivated to write more. Also, subscribe to get notified of new posts when they come out.