
Failing Fast

My company has a Director of Innovation. Joe is, predictably, a pretty inspiring guy and often sends out emails with interesting thoughts and ideas to spur innovation. Recently he sent out a message about the benefits of Failing Fast. In it he referenced the Gossamer Condor, the first human-powered aircraft capable of flying in a controlled fashion for more than a mile. Paul MacCready claimed the prize in 1977, nearly two decades after the contest began. Why did it take so long? The biggest challenge was that failure was too costly. The designs were often good, but a cycle of assembling the materials, testing, repairing, and retrying could take as long as a year. As the attempts wore on, they became more and more difficult to finance.
MacCready took a different approach. He knew that no matter how good his design was, the likely result would be failure, just as it had been for everyone before him. So he expected failure and built his prototypes with that in mind, using common, cheap materials that were easy to work with and easy to repair or replace. As a result, he could test new designs in a matter of weeks rather than years.
So how do we apply fail-fast thinking in software development? Here are some approaches, techniques, and models born of this philosophy that can have effects at all levels of your organization:

Team Level:
Test Driven Development (TDD) –
This is an approach used by programmers at the lowest level of software development. The idea is that you write a test that initially fails because the code hasn’t been written yet; then you write code until it passes. Certainly this helps you fail fast; in fact, it encourages you to fail first! The true power, though, is in the freedom it gives developers to try things they otherwise wouldn’t for fear of failure. With a battery of tests already created, the programmer is free to experiment and try out more innovative designs, knowing they have that safety net. Failures are reported instantaneously, as are successes.
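As a minimal sketch of that rhythm (using JUnit; the Account class and its behavior are invented for illustration), the test below is written first and fails until the production code beneath it exists:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1: write the test first. It fails (it won't even compile)
// because Account doesn't exist yet.
public class AccountTest {

    @Test
    public void withdrawalReducesBalance() {
        Account account = new Account(100.00);
        account.withdraw(40.00);
        assertEquals(60.00, account.balance(), 0.001);
    }
}

// Step 2: write just enough code to make the test pass,
// then refactor with the test as a safety net.
class Account {
    private double balance;

    Account(double openingBalance) {
        this.balance = openingBalance;
    }

    void withdraw(double amount) {
        this.balance -= amount;
    }

    double balance() {
        return balance;
    }
}
```

Once the bar is green, you can experiment with the design as much as you like; the tests will tell you immediately if you’ve broken the behavior.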

Executable Specifications –
With this approach, analysts and designers work with programmers and testers to create requirements written as examples that can be hooked to the code. When the code does what the business wants, the examples pass; when it doesn’t, they fail. Now that’s fast failure. In the past, such problems (programmers and testers misunderstanding requirements) might not have been caught until much later in the process. Of course, all this presumes the design or feature was correct in the first place.
If you’ve been doing this all along, a new feature might be proposed; the tests are written, the feature is coded, and the tests pass, but an older test fails. Just as in TDD, you now have a safety net of “living” documentation. Before, you had to pore over the requirements documents to find the inconsistencies; now all you need to do is run the specifications. Additionally, if done properly, these specifications become the documentation. There’s no worry about their becoming out of date or inaccurate with respect to the code, because they are tied directly to it.
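To make this concrete, here is a rough sketch in the FitNesse/SLiM style (FitNesse also comes up in the contract-testing post below); the table, the fixture, and the DiscountPolicy class are all invented for illustration. Analysts write the table in the wiki, and the thin fixture underneath hooks each column to the production code:

```java
// A decision table the whole team can read and edit in the wiki:
//
// | discount policy fixture |
// | order total | discount? |
// | 50.00       | 0.00      |
// | 200.00      | 20.00     |

// SLiM calls setOrderTotal() for the input column and
// discount() for the "discount?" output column, row by row.
public class DiscountPolicyFixture {
    private double orderTotal;

    public void setOrderTotal(double orderTotal) {
        this.orderTotal = orderTotal;
    }

    public double discount() {
        // Delegate straight to the production code under test.
        return new DiscountPolicy().discountFor(orderTotal);
    }
}

// Hypothetical production code the specification exercises.
class DiscountPolicy {
    double discountFor(double orderTotal) {
        return orderTotal >= 100.00 ? orderTotal * 0.10 : 0.00;
    }
}
```

If the business later changes the discount threshold, the table is updated and the old code fails the specification immediately; the documentation and the code can’t drift apart.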

Project Level:
Short Iterations –
This is a tenet of Agile methodologies. Teams complete releasable units of prioritized work in a short, predictable amount of time. If we need to change course because of some unexpected event, we don’t have to scrap the foundations of work that won’t be complete for months; at most, we scrap the amount of work related to the length of the iteration (or, for you Kanban folks, the cycle time of the largest unit of work in process).

Smaller, More Frequent Releases –
In some situations there are restrictions on how frequent and how small our releases can be due to overhead (training, deployment, regulatory testing), but we should be working to make them as small as possible. Just as with iterations, if we make the wrong call on a feature, we can react faster: instead of six months to a year to implement a change, we can potentially do it in an iteration’s time, and without the disruption to the business that comes with emergency hotfixes and patches.

Business Level:
Feature Flow and Prioritization –
Too often, businesses set goals unique to each business unit, and those units then create projects independently to meet them. These projects often compete with one another for resources and capital when only parts of them are actually directed at the business goals. We shouldn’t be focused on the projects, but rather on the features within those projects that are most important. Models exist that show how to link these goals to prioritized features. The business units are then no longer concerned with independent projects; instead, they work on the aspects of the features they have the capability to produce, and they can speak in terms of continuous delivery of features within products rather than entire projects released all at once. This keeps us focused on what we believe are the most important things, given the best information we have at the time. If we’re wrong, we can adjust our backlog of features based on the new priorities. The alternative is to release updates based on information from a year or more ago, and to cancel projects with nothing to show for them.



The Embedded Tester

I hear the idea of “embedded testers” a lot at conferences, in work meetings, in interviews, and in discussions with managers. The utterances are along the lines of “Our testers are embedded on development teams!” or “How do we embed our testers with our developers?” It unfailingly puts me in mind of a reporter from, say, CNN wearing an ill-fitting helmet and flak jacket*, standing there as well-drilled troops go about their business. These journalists work alongside the military, sometimes getting into actual firefights and other dangerous situations, but they’re not part of the team. They are most certainly outsiders with a different agenda (although it doesn’t always work out that way; more on that later).

When we talk about embedding testers, when we use that language, we’re implying that we’re taking a member of a separate Quality Assurance group and dropping them into a team of programmers, much like dropping a reporter into a military unit and sending them to the front line. No wonder testers are apprehensive, and no wonder developers are resentful. Just using the phrase suggests a cargo-cult mentality at work: it shows a misunderstanding of the reasons for embedding and of whether they even apply in our particular context. And as a manager, when you use this phrase, you’re thinking in the back (or perhaps the front) of your mind that, just like a news organization, you can simply pull that person out of the team and back into the testing pool, and the team will be unaffected.

In Agile, testers aren’t “embedded” on teams any more than programmers are, or analysts, or any other role that is needed on the team. To say that they are suggests that this is an option, or a particular strategy you might employ to help with Agile development. It’s not! It’s an essential part of it!

And no, I’m not saying that all testing has to be done by the team. There are reasons why you might want, or be legally obligated, to have independent testers outside of the team. Additionally, you’ll want your users involved in evaluating your work.

Above, I mentioned I would say more about the “different agenda” of embedded journalists. The social science on all of this is pretty new, but it turns out there’s a bit of Stockholm syndrome involved. These people are together in quite stressful situations, and often, in spite of their role as impartial providers of information, they become so sympathetic that they glorify the actions of the troops and neglect or leave out truths about the enemy combatants. In fact, administrations have counted on this, encouraging embedding in order to build support for the war in Iraq.

“Aha!” you say. “This is exactly what managers are afraid of! Testers fraternizing too closely with programmers to the point of hiding quality problems from management! You need that adversarial relationship between devs and testers!” OK, stop. First of all, shame on you for drawing parallels between software development and war! (ahem) And second, I *knew* you didn’t really want testers as part of the team! (pwned) Look, Agile teams are perfectly capable of testing their own stuff, and pitting testers against programmers is a fast track to making testing irrelevant. You’ll need to establish trust and choose metrics that reflect the goals of the team. Then you’ll have a good team, and not just a bunch of people embedded together.

* It turns out that these days the military won’t spring for such gear – the reporter is responsible for their own. (Then again, maybe that contributes to the lack of fit.) However, it’s not like the military is saving much by doing this.



Contract and Integration Service Testing in a Componentized Enterprise Model

Contract Testing

(Presented at Agile Atlanta on May 1st, 2012 – Prezi)

I’ve been working a lot over the past six months on creating an automation framework for our service development teams. In our relatively large organization, we have teams devoted to different horizontal strata of software development. So, for good or ill, we have database teams, mainframe/API teams, service layer teams, and UI teams. If you’re at that service layer, and that’s all you do, you should be taking a different approach to testing than if you had to do all of the testing for vertical slices of product functionality. With that in mind, here’s a proposal for a presentation on the strategies and the automation framework we’ve come up with. I welcome any feedback or thoughts!

————————————————————–

How do we ensure appropriate testing and feedback for large Agile Enterprise projects where the value delivered may not be end-user facing features? Such is the case for many enterprises working to establish a Service Oriented Architecture. A lot of knowledge exists for agile teams delivering customer-facing products. Much of that work focuses on acceptance tests being written in clear business language that is understandable by the whole team and stakeholders. This approach becomes more complicated for component delivery teams developing Shared Services that are often seen in large Enterprise development organizations.

Services can be tested in a number of ways, and the approach that’s best can vary greatly based on the context of the application and environment. The presentation discusses common patterns of testing services and focuses on an acceptance test solution appropriate for teams working on the shared service layer within a componentized Enterprise model. The solution is based on a “universal” fixture that works for any type of object-based service and is accessed via table-based acceptance tests in FitNesse.
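The proposal doesn’t include the framework itself, but to give a flavor of the idea, here is a deliberately simplified, hypothetical sketch of what a “universal” fixture can look like: one reflection-based class that can exercise any object-based service named in a test table, so teams don’t write a new fixture per service. The class names, the single-String-argument assumption, and the string rendering of the response are illustrative simplifications, not the actual framework:

```java
import java.lang.reflect.Method;

// Hypothetical sketch: one fixture for any object-based service.
// Each table row supplies the service class, the operation to call,
// and an argument; the fixture invokes it reflectively and exposes
// the response as an output column for comparison.
public class UniversalServiceFixture {
    private String serviceClass;
    private String operation;
    private String argument;

    public void setServiceClass(String serviceClass) { this.serviceClass = serviceClass; }
    public void setOperation(String operation)       { this.operation = operation; }
    public void setArgument(String argument)         { this.argument = argument; }

    // Output column: the service's response, rendered as a string.
    public String response() throws Exception {
        // Simplification: assumes a no-arg constructor and a single
        // String parameter; a real framework would handle arbitrary
        // signatures and complex object graphs.
        Object service = Class.forName(serviceClass)
                              .getDeclaredConstructor()
                              .newInstance();
        Method method = service.getClass().getMethod(operation, String.class);
        return String.valueOf(method.invoke(service, argument));
    }
}
```

The appeal of this shape is that once the fixture exists, service teams can add coverage for a new operation by editing wiki tables rather than writing more fixture code.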

Process/Mechanics
The talk will be a standard presentation format with slides and discussion. The first part provides background on testing in an Agile Enterprise model and the various approaches to service testing (15 min). The second part discusses the creation of a specific framework to aid in service contract verification that is helpful for specialized teams working in a componentized Enterprise development model (15 min). The last part gives examples of basic usage and of how the framework is used in a real-world environment (30 min).

Learning outcomes
Knowledge of how Agile in the Enterprise affects our approaches to testing and feedback
Understanding of various methods of service testing and how to choose what’s right for your team
Details and examples of how to employ a framework for testing services appropriate for the componentized Enterprise model

