
Levels of Feedback in the Agile Enterprise

I recently gave a lightning talk at Agile Atlanta on this subject. Here’s the Prezi; feel free to click through it as you read this post!

The idea is that we get several types of feedback on a project (I discuss three), and that feedback occurs at different levels. Those levels map to the Scaled Agile Framework from Dean Leffingwell and others who have either independently conceived of it or built upon it. But we’ll get back to that.

The first type of feedback is project-related, and it’s facilitated by the Agile process. The thesis here is that Agile and “traditional” SDLC methodologies alike guide the same type of work: we have requirements of some sort, we write code, we test it, and so on. What Agile does, intentionally, is create frequent feedback points. Scrum, for instance, prescribes daily scrums, sprint planning, release planning, sprint reviews, demos, and retrospectives; all of these are specific points of feedback built into the process. Nothing about a waterfall process says you can’t inject these feedback points, it’s just that they’re not built in.

The second type of feedback I discuss is Testing. For this I trot out the Agile Testing Quadrants, originated by Brian Marick and built upon by Lisa Crispin and Janet Gregory. There are plenty of sources out there for more information about the model, so I won’t repeat it here. The point is that testing itself deliberately provides feedback, and the different approaches and techniques represented in the model give you a variety and frequency of feedback that, again, traditional methods don’t prescribe.

When most people talk about Agile, they’re referring to what happens at the team level: the meetings, and perhaps XP practices. There are other levels in the enterprise, and those provide feedback too. In the typical model, there’s the team level, the product level, and the portfolio level. Agile process happens at all of these levels, as does testing. The frequency and types of both change from level to level, but the feedback happens and it’s critical.

The third type of feedback highlighted here involves the idea of “Control Gates.” Control Gates are those steps in the process demanded by policy, law, or some form of structural practicality. If there is a failure at any of these points, the work that happened before is rejected and must be resubmitted. The thesis at this point is that this is not really feedback: it comes too late to provide value over and above the re-work it incurs.

The most important part of the discussion involves moving the checks behind these Control Gates earlier in the process. For instance, if we do pen testing last, after everything else in the product is complete, we’re not getting valuable feedback; it’s too late for that. If that test fails, all progress grinds to a halt and we have to go back and fix old code, and then go through the whole deployment process again. A better option is to treat the control gate as a formality rather than a source of feedback. We should be able to run the same tests for every build if we want, possibly for every check-in. By the time we reach the control gate, we should have run the same tests hundreds of times already.
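To make that concrete, here is a minimal sketch (my own illustration, not anything prescribed in the talk) of a tiny check a build script could run on every check-in. The specific patterns and file layout are assumptions for the example; the point is only that by the time the formal gate runs its scan, the team has already seen these results hundreds of times.

```python
#!/usr/bin/env python3
"""Sketch: run a lightweight 'control gate' style check on every build.

Illustrative only: a tiny static scan for hardcoded-looking credentials,
wired into the build so feedback arrives at check-in time rather than at
the formal security gate.
"""
import re
import sys
from pathlib import Path

# Patterns that usually warrant a closer look long before the formal gate.
SUSPICIOUS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
]


def scan(root: Path) -> list:
    """Return a list of 'file:line: text' findings under the given root."""
    findings = []
    for path in root.rglob("*.py"):
        for lineno, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if any(p.search(line) for p in SUSPICIOUS):
                findings.append(f"{path}:{lineno}: {line.strip()}")
    return findings


if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    problems = scan(root)
    for problem in problems:
        print(problem)
    # A non-zero exit fails the build now, instead of rejecting the work later.
    sys.exit(1 if problems else 0)
```

The same idea scales up: whatever the gate will eventually check, automate as much of it as practical and run it continuously, so the gate itself becomes a rubber stamp.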

This is true for any type of control gate. For example, lots of projects have a formal Support Transition process involving training, documentation, and sign-offs. Instead, it would make more sense to have a support representative actually on the team, learning about the project along with everyone else. By the time the hand-off occurs, all the learning has already happened and the transition is smooth and fast. Learning only at the hand-off, for example, that Support had no means of monitoring or troubleshooting the application would come far too late.
