Archive for category Testing

Principles of Agile Testing

A few weeks ago my company had an internal conference called “The Fiserv Leaders Conference on Testing” and I, along with some others, was tapped to do a talk on Agile and testing. We decided to write down what we feel are the principles of Agile testing (similar to the principles laid down in the Agile Manifesto). This isn’t what we ended up with, but this is my list and thus I will record it here.

Before I do, I should acknowledge this isn’t a new endeavor. Others have done this before. I particularly like this one from Karen Greaves and Sam Laing.  And I’ve certainly borrowed from these ideas in coming up with this stuff below.  I don’t think it breaks any ground, but this is where my head is currently at, and how I frame testing in my current organization.

Testing is a whole team activity:

Everyone is responsible for quality. When a story, or work item, is marked done, that means the team is agreeing that the work is done — the whole team, not just the testers. Everyone is accountable for it, thus everyone should be involved in testing. Anyone on the team can take on the “role” of tester, just as anyone on the team can program, or write a story. Working together is the best way to get the best quality work done.

Continuous feedback is essential:

Agile works primarily because of its built-in feedback loops that happen much more often than in waterfall or even in earlier iterative frameworks (Spiral, RUP). Scrum, for instance, has daily feedback via the daily Scrum. Then there’s feedback at every sprint boundary with the Review, Retrospective and Sprint Planning. And beyond that, Release Planning. When you include XP and other common agile testing practices, you create even more feedback loops on top of these. The Agile Testing Quadrants model expresses that quite well (and perhaps this does it even better). One thing to note is that “checkpoints,” “milestones,” and things like that are NOT feedback. Those checks come too late (by definition). So, for example, organization-mandated control gates like sign-offs, penetration testing, or performance testing that must occur before a product is released are not valuable sources of feedback. They occur too late for us to react. Failures at these points almost always result in delays and overruns, which decrease value. Check out this Prezi for illustration.

Agile testing demands flexibility and the ability to respond to change:

Working on an agile team should not be like working on an assembly line. Products change, people change, environments change, organizations change; to expect one’s daily role or work to stay the same through all of that is unrealistic at best. A core principle of Agile is the ability not only to respond to change, but to be open to it. At a practical level, that means things like:

  • Changing the way you write tests to suit the needs of the team
  • Adopting a new tool to solve a problem, or retiring a tool that no longer provides enough value to the team
  • Taking time from the sprint to learn and practice new techniques
  • Giving up some responsibilities or picking up new ones to improve the throughput of the team

Be a source of information:

Testers are not the assurers of quality (and honestly, they never were). Testers provide information about how the product does, or doesn’t, behave. And that is valuable! So provide that. Update stories with information about system behaviour. Be proactive with updates to the product owner. Continuously work with programmers to match up expectations with reality. Communicate with other teams and stakeholders who have questions about the system. And above all, have the courage to provide that information even if you think it won’t be well received!

Simplicity is a virtue:

When providing information, or writing tests, or executing tests, make it as simple as possible. Complexity is the enemy of information and should be avoided. When communicating about the system, don’t hide behind misleading metrics or test documentation that nobody reads. CYA is not a key tenet of Agile!

When writing a test, remove useless or obfuscating information. For example, you shouldn’t need exhaustive steps – you are part of a team that knows your software. You’re no longer throwing it over a wall to a test team that has no experience or familiarity with the product.

Execution should be as simple as possible. Most testing does not need to be integrated across services, products and platforms! You are usually testing a single behaviour within the confines of your system. If so, then make sure that’s exactly what you test – use test doubles (mocks, shims, stubs, etc.) as often as possible. Use tools or automation to make execution easier and faster if warranted. If automating, make it obvious what you are checking – name checks properly and make it easy to see the result matches the expectation. Design systems with testability in mind – make it easy (or at least possible!) to test the individual parts of the system.
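To make that concrete, here is a minimal sketch of what I mean (the names and numbers are invented for illustration, and I’m assuming a JUnit 5 setup): one behaviour, one hand-rolled stub in place of the real collaborator, and a check named after the expectation.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// All names and numbers below are invented for illustration.
class LateFeeCalculatorTest {

    interface RateService {                 // the collaborator we don't want to integrate with
        double dailyLateRate();
    }

    static class LateFeeCalculator {        // the single behaviour under test
        private final RateService rates;
        LateFeeCalculator(RateService rates) { this.rates = rates; }
        double feeFor(int daysLate) { return daysLate * rates.dailyLateRate(); }
    }

    // A hand-rolled stub keeps the check simple, fast, and independent
    // of any real rate service or test environment.
    static class StubRateService implements RateService {
        @Override
        public double dailyLateRate() { return 0.50; }
    }

    // The check is named after the expectation, so a failure reads like a sentence.
    @Test
    void threeDaysLateCostsOneDollarFifty() {
        LateFeeCalculator calculator = new LateFeeCalculator(new StubRateService());
        assertEquals(1.50, calculator.feeFor(3), 0.001);
    }
}
```

The specific library doesn’t matter; the point is that anyone on the team can read the check and see exactly what behaviour it pins down.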



Agile at Daxko

I had the opportunity to visit Daxko through Agile Atlanta and their Agile Tours. I had already heard of the company and their foray into agile through Claire Moss, who is, by the way, an excellent, world-renowned tester and local Atlantan. That’s why, when I heard about the Tour, I knew I had to sign up.

The team we were introduced to was fairly typical: a small team with a couple of programmers, a tester, and a UX designer. Additionally, they have a design lead and a development lead who, along with the designer, form a Product Owner team. Now, it’s not unusual to see a PO team these days, but I did think it interesting to see one on such a small team. I think it’s common for teams to be burdened a bit by the “grooming” process; they’d rather be sprinting on current work than thinking about future plans. On the other hand, leaving the entire grooming process up to a single PO can be overwhelming.

The product owner team refines ideas until they’re ready to be worked on as stories for a given sprint.

This team appears very committed to customer experience. The fact that they have a dedicated UX designer who makes it a point to speak with and visit customers on a regular basis is a testament to that. The designer showed how she spends time (with input from the team) building exemplars of their customers through the use of personas. She was very quick to point out that these personas are evolving and can’t, by their very nature, be viewed as explicit representations of what all their customers are like, but they’re a helpful guide to the types of features and the type of presentation they should be shooting for in their products. This analysis, along with frequent user-testing of concepts, helps create ideas that the product owner team then grooms into actual stories that the team will work on. (Another point driven home by the designer was that she no longer has to create wireframes of interfaces ahead of time. If she wants to prove out a design, she creates it right within a browser. When working on stories, she can pair with developers and build out “real” UI components and shape workflows through the system within the sprint! Shocking!) The image to the left shows their epic board (my term). They take these large, and somewhat unproven, concepts and groom them, with increasing involvement from the team as they move to the right, until they are ready to be worked on within a sprint.

The team uses story mapping to help plan out their stories and schedule.

To determine what stories to work on and when, they use Story Mapping. In fact, they brought in Jeff Patton to help show them how to do it. The image on the right shows an example. That top row of orange cards represents the actions their users take, in the general order they are performed. This, again, is learned from research done directly with customers. Then the most essential parts are written on the yellow cards. That first row of yellow cards forms what’s commonly known as a “walking skeleton,” or the basic functionality you’d need to provide business value. The team then continued to add functionality further down on the map; the lower the importance, the lower it appears. The blue tape shows how the pieces of functionality should be released together, so this board shows three planned pushes of functionality.

A list of planned releases of functionality color-coded by confidence.

Which leads us to the Roadmap depicted to the left. It shows a listing of planned dates and the content that will be delivered on those dates. Note the color-coding. The items in green are dates they are fairly certain they can meet; as things go out further, the confidence decreases. Due to the nature of their business, making firm commitments even five months out is a dicey proposition, since market needs and priorities can change quickly. Thus, even though the team feels pretty confident they could hit a date that far in the future, it doesn’t make sense to commit to it until they know more. Seems like a decent implementation of Real Options to me. I got the impression that the rest of the company isn’t completely enamored of this practice. I can imagine that there may be some larger efforts that absolutely need to be delivered by a certain date for business reasons (say, fiscal year end). In those circumstances it’s not a question of when you deliver, but what. Now, that’s the relatively basic agile concept of fixing the date and flexing the scope. If everything is flexible, well, that’s great, and I’ll be interested to see if they continue to have success doing things this way.

A basic Kanban-style board with some interesting wrinkles.

And that’s where we transition from the Product Owner team to the rest of the team, what they term the Product Team. On the right you can see how they track their progress. Much of it should be fairly obvious to anyone with experience with Kanban boards: the green stickies on the left represent the stories, the yellow stickies represent tasks, the red stickies represent defects, and the blue stickies represent acceptance tests. The yellow stickies get moved to the right as they go from “In Progress” to “Done.” There are a couple of differences from most boards that I’ve seen that make things interesting.

First, note the little magnets connected to some of the tasks. Those represent team members. Each person gets three and can have no more than that on the board at any time. This keeps their WIP limit under control. Genius!

Second, do you see those darts on the upper left of the image? They are used to flag a note and let people know this is something someone is stuck on and they need help. Basically it’s an indication that the team should treat it with urgency and collaborate on a response ahead of anything else.

Third, note that there is a division a little past halfway on the board. The acceptance tests are tracked as specific items on the board. Only if those acceptance tests are all in the Pass column can a story be considered done (even if all the tasks are in the Done column). And really, this is the true indication that you’ve completed some work: does it do what we need it to do? There is a growing trend of tracking acceptance tests, rather than stories and tasks, as the true indication of whether work is Done. I like this idea quite a bit. One of the product owners described it as the difference between Output and Outcome. You need to have output from the team, but the reason you do it is to provide the desired outcome; ultimately, that is what you want to track.

Finally, look down at the bottom left of the picture. You see a set of stories that are “on deck.” These are groomed and ready to be worked on. If work gets done early in the sprint, or even if priorities change, one or more of these can easily be added to the board. This follows another basic agile rule of thumb: you should have at least two sprints of work groomed and ready at all times.

Big visible information is available all throughout the Daxko team room.

It’s critical to be absolutely clear on what it means to your team to be “Done” with a story. I like how this team has not only defined this, but made it into a big sign visible to anyone around. It’s also evident that this is not boilerplate but specific to this team (see the mention of browser types and resolutions) and that it’s evolving. The team was very explicit that they understand that any process or practice they have is only in use as long as it provides value to the team. If they deem it isn’t, they change it or remove it.

One example is the burnup chart to the left. They mentioned that this was the first sprint they were tracking in this way. I pointed out that a flat line for the first four days of the sprint was of dubious value. Perhaps smaller stories would be in order to lower the risk of not delivering anything in a sprint. They’re also just starting to track cycle time, which they define as the time from when they begin work on a story to when it’s available for customer use (they also track when a story is done). I’ll be interested to find out how that information will help them in the future.

And speaking of completing stories, in the bottom left of the picture you can see the metal frame of a large bell. They ring that bell whenever a story is complete. There’s always room for a nice Pavlovian reward if it helps the team’s morale!

After each sprint, the team has a retrospective. They showed off a whiteboard from the last session that used the metaphor of the team as a ship with wind driving the team forward and anchors weighing the team down. Hey, whatever floats your boat! The ScrumMaster said that he changes up the metaphors and retro techniques to help keep things lively and fresh. The team also does demonstrations of their work to the greater company. Every other Tuesday, this team, and the other teams across the company, have a video conference where each team gets a period of time to show off their work. I’ve noticed that the demo is one of the first things that agile teams drop from the standard Scrum ritual list and I’m glad to see the practice alive and well here.

After release, the team also monitors their applications. To do this, they use services from New Relic. It provides the team with a real-time view into what their customers are experiencing. The information presented was impressive and looked very nice. If you can get away with a third party monitoring your application performance, it’s certainly something to look into.

While the team certainly seems happy and successful, they are continually looking for ways to improve. They posted a list of some challenge areas right on the wall. One is the challenge of distributed team members. They recently hired a remote programmer, and while they do store story information and progress electronically, that individual doesn’t get the benefit of the environment or the physical board and charts the team uses. They’ve been experimenting with a controllable camera, but it appears the jury is still out on that. I suggested they check out the work of Joe Moore from Pivotal Labs, who writes a lot about tools and techniques for integrating remote team members.

I certainly would like to thank Andrew Fuqua (@andrewmfuqua) and Claire and the rest of the Daxko team for their openness and willingness to share their processes and ideas. I wish that I had been able to stay longer and discuss more things…perhaps next time.


Levels of Feedback in the Agile Enterprise

Recently I gave a lightning talk at Agile Atlanta on this subject. Here’s the Prezi. Feel free to click through it as you read along with this post!

The idea is that there are several types of feedback that we get on a project (I discuss three) and they occur at different levels – those levels are related to the Scaled Agile Framework from Dean Leffingwell and others who have either independently conceived of it or built upon it. But we’ll get back to that.

The first type of feedback is project related, and it’s facilitated by the Agile process. The thesis here is that Agile and other “traditional” SDLC methodologies are all guiding the same type of work: we have requirements of some sort, we write code, we test it, etc. What Agile does, intentionally, is create frequent feedback points. Scrum, for instance, talks about daily scrums, sprint planning, release planning, sprint reviews, demos, retrospectives – all of these are specific, and prescribed, points of feedback that are built into the process. There’s nothing about a waterfall process that says you can’t inject these points of feedback; it’s just that they’re not built in.

The second type of feedback I discuss is Testing. For this I trot out the Agile Testing Quadrants originated by Brian Marick and built upon by Lisa Crispin and Janet Gregory. There are plenty of sources out there to get more information about the model, so I won’t do that here. The point is that testing itself deliberately provides feedback, and the different approaches and techniques represented in the model give you a variety and a frequency of feedback not, again, prescribed in traditional methods.

When most people talk about Agile, they’re referring to happenings at the team level; they talk about the meetings, and perhaps XP practices. But there are other levels in the Enterprise, and those are providing feedback too. In the typical model, there’s the team level, the product level, and the portfolio level. Agile process is happening at all these levels, as is testing. The frequency and type of both change at each level, but they happen and they’re critical.

There is a third type of feedback highlighted here, and it involves the idea of “Control Gates.” Control Gates are those steps in the process that are demanded by policy, laws, or some form of structural practicality. If there is a failure at any of these points, the work that happened before is rejected and must be resubmitted. The thesis at this point is that this is not feedback. It comes too late to provide value over and above the amount of re-work it incurs.

The most important part of the discussion involves moving aspects of these “Control Gates” earlier in the process. For instance, if we do penetration testing last, after all other aspects of the product are completed, we’re not getting valuable feedback. It’s too late for that. If that test fails, all progress grinds to a halt and we have to go back and fix old code – and then go through the whole deployment process again. A better option is to treat these control gates as a formality, not as a source of feedback. We should be able to run the same tests for every build if we want, possibly every check-in. By the time we get to that control gate, we should have run the same tests hundreds of times already.
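As a purely hypothetical illustration (the endpoint and headers below are invented, and checks like this are nowhere near a substitute for a real penetration test), this is the flavor of small, automated check that can run against every build so that the eventual gate becomes a formality rather than a surprise:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical example: the URL and the specific headers are illustrative only.
class SecurityHeadersSmokeTest {

    @Test
    void responsesCarryBasicSecurityHeaders() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/login")).GET().build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // These assertions surface obvious regressions on every build,
        // long before the formal security gate at the end of the project.
        assertTrue(response.headers().firstValue("X-Frame-Options").isPresent(),
                "missing X-Frame-Options header");
        assertTrue(response.headers().firstValue("X-Content-Type-Options").isPresent(),
                "missing X-Content-Type-Options header");
    }
}
```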

This is true for any type of control gate. For example, lots of projects have a formal Support Transition process involving training and documentation and sign-offs. Instead, it would make more sense to have a support representative actually on the team, learning along with the team about the project. By the time the hand-off occurs, all the learning has happened and it’s a smooth and fast transition. It would be too late at that point, for example, to learn that Support had no means of monitoring or troubleshooting the application.


Failing Fast

My company has a Director of Innovation. Joe is, predictably, a pretty inspiring guy and often sends out emails with interesting thoughts and ideas to spur innovation. Recently he sent out a message talking about the benefits of Failing Fast. In it he referenced the Gossamer Condor, the first human-powered aircraft capable of flying in a controlled fashion for more than a mile. Paul MacCready won the contest in 1977, 20 years after the contest began. Why did it take so long? Well, the biggest challenge was that failure was too costly. The designs were often good, but cycles of assembling the materials, testing, repairing and retrying could take as long as a year. As the attempts wore on, it became more and more difficult to get them financed.

MacCready took a different approach. He knew that no matter how good his designs were, the likely result would be failure, just like all the attempts before him. As a result, he expected failure and built his prototypes with that in mind. He used common and cheap materials that were easy to work with and easy to repair and replace. As a result, he could test new designs in a matter of weeks rather than years.

So how do we apply fail-fast thinking in software development? Here are some approaches, techniques and models that are borne of this philosophy and that can have effects at all levels of your organization:

Team Level:
Test Driven Development (TDD) –
This is an approach used by programmers at the lowest level of software development. The idea here is that you write a test that will initially fail because the code hasn’t been written yet. Then you write code until it passes. Certainly this helps you fail fast; in fact, it encourages you to fail first! The true power, though, is in the freedom it gives developers to try things they otherwise wouldn’t for fear of failure. With a battery of tests already created, the programmer is free to experiment and try out more innovative designs because they know they have that safety net. Failures are reported instantaneously, as are successes.
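Here’s a tiny, hypothetical example of the rhythm (the names and numbers are invented, assuming a JUnit setup): the test is written first and fails until the production code beneath it exists.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical example of the TDD rhythm: this test is written first and fails
// ("red") until the production code below it is written ("green").
class DiscountTest {

    @Test
    void ordersOverOneHundredDollarsGetTenPercentOff() {
        assertEquals(108.0, Discount.apply(120.0), 0.001);
    }
}

// The simplest code that makes the test pass. With the test in place as a
// safety net, this can now be refactored or redesigned without fear.
class Discount {
    static double apply(double orderTotal) {
        return orderTotal > 100.0 ? orderTotal * 0.9 : orderTotal;
    }
}
```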

Executable Specifications –
With this approach, analysts and designers are called upon to work with programmers and testers to create requirements written with examples that can be hooked to the code. When the code is written to do what the business wants, these examples pass. If it doesn’t, they won’t. Now that’s fast failure. In the past, such problems (programmers and testers misunderstanding requirements) might not have gotten caught until much later in the process. Of course, all this presumes the design or feature was correct.

Say you’ve been doing this all along and a new feature is proposed: the tests are written, the feature is coded, and those tests pass, but an older test fails. Just as in TDD, you now have a safety net of “living” documentation. Before, you had to pore through the requirements documents to find the inconsistencies; now all you need to do is run them. Additionally, if done properly, these specifications become the documentation. There’s no worry about them becoming out of date or inaccurate with respect to the code because they are directly tied to the code.
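As a rough sketch of what “hooked to the code” can look like (the table, names and numbers here are invented; I’m using a FitNesse-style decision table only because it’s a common way to do this):

```java
// A hypothetical requirement written as examples, e.g. a FitNesse decision table:
//
//   | shipping cost                    |
//   | order total | cost?              |
//   | 25.00       | 4.99               |
//   | 75.00       | 0.00               |
//
// FitNesse's SLIM runner maps each input column to a setter and the output
// column (the one ending in "?") to a method on a fixture class like this one:
public class ShippingCost {

    private double orderTotal;

    public void setOrderTotal(double orderTotal) {
        this.orderTotal = orderTotal;
    }

    public double cost() {
        // In a real project this would call into the production code;
        // it's inlined here only to keep the sketch self-contained.
        return orderTotal >= 50.00 ? 0.00 : 4.99;
    }
}
```

When the system behaves as the business described, every row passes; when it doesn’t, the failing row tells you exactly which example is wrong.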

Project Level:
Short Iterations –
This is a tenet of Agile methodologies. Teams complete releasable units of prioritized work in a short and predictable amount of time. If we need to change course because of some unexpected event, we don’t have to scrap the foundations of work that won’t be complete for months; we only have to scrap, at most, the amount of work related to the length of the iteration (or, for you Kanban folks, the cycle time for the largest unit of work in process).

Small, more frequent releases –
In some situations there are restrictions on how frequent and how small we can make our releases due to overhead (training, deployment, regulatory testing), but we should be working on making them as small as possible. Just like with iterations, if we make the wrong call on a feature, we can react faster – instead of taking six months to a year to implement a change, we can potentially do it in an iteration’s time, with none of the disruption to the business that comes with emergency hotfixes, patches, etc.

Business Level:
Feature flow and prioritization –
Too often businesses decide on goals unique to each business unit and then those units create projects independently to meet those goals.  Often these projects compete with each other for resources and capital when only parts of them are actually directed at the business goals.  We shouldn’t be focused on the projects, but rather the features in those projects that are most important.  Models exist that show how to link these goals to prioritized features.  The business units are now not concerned with independent projects, but rather they are working on those aspects of the features that they have the capability to produce.  The business units can then speak in terms of continuous delivery of features within products rather than entire projects released all at once.  This keeps us focused on what we believe are the most important things given the best information we have at that time.  If it’s wrong, we can adjust our backlog of features based on the new priorities. The alternative is to release updates based on information from a year or more ago and to cancel projects with nothing to show for them.



The Embedded Tester

I hear it a lot at conferences, work meetings, interviews, discussions with managers, etc.: the idea of “embedded testers.” The utterances are along the lines of, “Our testers are embedded on development teams!” or “How do we embed our testers with our developers?” It unfailingly puts me in mind of a reporter from, say, CNN, wearing an ill-fitting helmet and flak jacket*, standing there as well-drilled troops go about their business. These journalists are working alongside the military, sometimes getting into actual firefights and other dangerous situations, but they’re not part of the team. They are most certainly outsiders with a different agenda (although it doesn’t always work out that way – more on that later).

When we talk about embedding testers – when we use that language – we’re implying that we’re taking a member of a separate Quality Assurance group and dropping them into a team of programmers, much like dropping a reporter into a military unit and sending them to the front line. No wonder testers are apprehensive and no wonder developers are resentful. Just using the phrase suggests a cargo-cult mentality is at work; we show a misunderstanding of the reasons for doing it and whether they’ll even apply in our particular context. And as a manager, when you use this phrase, you’re thinking in the back (or perhaps the front) of your mind that, just like a news organization, you can simply pull that person out of the team and back into the testing pool and the team will be unaffected.

In Agile, testers aren’t “embedded” on teams any more than programmers are, or analysts, or any other role that is needed on the team. To say that they are suggests that this is an option, or a particular strategy you might employ to help with Agile development. It’s not! It’s an essential part of it!

And no, I’m not saying that all testing has to be done by the team. There are reasons why you might want, or be legally obligated, to have independent testers outside of the team. Additionally, you’ll want your users involved in evaluating your work.

So, above I mentioned I would talk more about the “different agenda” of embedded journalists. The social science is pretty new on all of this, but it turns out that there’s a bit of Stockholm Syndrome involved. These people are together under quite stressful situations, and often, in spite of their roles as impartial providers of information, they become overly sympathetic, to the point that they glorify the actions of the troops and neglect, or leave out, truths related to the enemy combatants. In fact, administrations have counted on this, encouraging embedding in order to provide more support for the war in Iraq.

“Aha!” you say. “This is exactly what managers are afraid of! Testers fraternizing too closely with programmers to the point of hiding quality problems from management! You need that adversarial relationship between devs and testers!” OK, stop. First of all, shame on you for drawing parallels between software development and war! (ahem) And second, I *knew* you didn’t really want testers as part of the team! (pwned) Look, Agile teams are perfectly capable of testing their own stuff, and pitting testers against programmers is a fast track to making testing irrelevant. You’ll need to establish trust and choose metrics that reflect the goal of the team. Then you’ll have a good team, and not just a bunch of people embedded together.

* It turns out that these days the military won’t spring for such gear – the reporter is responsible for their own. (Then again, maybe that contributes to the lack of fit.) However, it’s not like the military is saving much by doing this.



Contract and Integration Service Testing in a Componentized Enterprise Model

Contract Testing

(Presented at Agile Atlanta on May 1st, 2012 – Prezi)

I’ve been working a lot over the past six months on creating an automation framework for our service development teams. In our relatively large organization, we have teams devoted to different horizontal strata of software development. So, for good or ill, we have database teams, mainframe/API teams, service layer teams, and UI teams. If you’re at that service layer, and that’s all you do, you should be taking a different approach to testing than if you had to do all of the testing for vertical slices of product functionality. With that in mind, here’s a proposal for a presentation on the strategies and the automation framework we’ve come up with. I welcome any feedback or thoughts!

————————————————————–

How do we ensure appropriate testing and feedback for large Agile Enterprise projects where the value delivered may not be end-user-facing features? Such is the case for many enterprises working to establish a Service-Oriented Architecture. A lot of knowledge exists for agile teams delivering customer-facing products. Much of that work focuses on acceptance tests being written in clear business language that is understandable by the whole team and stakeholders. This approach becomes more complicated for component delivery teams developing the Shared Services often seen in large Enterprise development organizations.

Services can be tested in a number of ways and the approach that’s best can vary greatly based on the context of the application and environment. The presentation discusses common patterns of testing services and focuses on an acceptance test solution appropriate for teams working on the shared service layer within a componentized Enterprise model. The solution is based on a “universal” fixture that works for any type of object-based service and is accessed via table-based acceptance tests in FitNesse.
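The framework itself isn’t included in this post, but purely as a hypothetical sketch of the “universal fixture” idea (not the actual framework from the talk), one generic, reflection-based fixture can drive table-based checks against any object-based service:

```java
import java.lang.reflect.Method;

// Hypothetical sketch only -- not the framework described in the talk.
// One generic fixture invokes any method on any service object by name,
// so table-based tests can exercise service contracts without writing
// a new fixture class per service.
public class UniversalServiceFixture {

    private Object service;
    private Object lastResult;

    public void setService(String serviceClassName) throws Exception {
        service = Class.forName(serviceClassName)
                       .getDeclaredConstructor()
                       .newInstance();
    }

    public void call(String methodName, String argument) throws Exception {
        Method method = service.getClass().getMethod(methodName, String.class);
        lastResult = method.invoke(service, argument);
    }

    public String result() {
        return String.valueOf(lastResult);   // compared against the expected cell in the table
    }
}
```

A real implementation would need argument type conversion, richer comparison, and error reporting; the sketch is only meant to show why a single fixture can serve many services.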

Process/Mechanics
The talk will follow a standard presentation format with slides and discussion. The first part provides background on testing in an Agile Enterprise model and the various approaches to service testing (15 min). The second part discusses the creation of a specific framework to aid in service contract verification that is helpful for specialized teams working in a componentized Enterprise development model (15 min). The last part gives examples of basic usage and of how the framework is used in a real-world environment (30 min).

Learning outcomes
  • Knowledge of how Agile in the Enterprise affects our approaches to testing and feedback
  • Understanding of various methods of service testing and how to choose what’s right for your team
  • Details and examples of how to employ a framework for testing services appropriate for the componentized Enterprise model



Agile Allure

Through the wonderful Agile Atlanta, I got to visit another company’s location this week – Allure Global. They make dynamic signage for movie theaters – concessions and ticketing, etc. They are an Extreme Programming shop, which means they use the XP practices like TDD, collective code ownership, and others. They have two week iterations. They code in Java and use Jira as their project management tool. I took a couple of pictures.

This first picture shows the workspace for one of their delivery teams.

[Photo: the delivery team’s workspace]

Note the monitors side by side at each station. That’s because they do pair programming, another of the XP practices. When they come in to work, the team members pair up and start working on the day’s tasks. They don’t keep the same pairs, and anyone can sit at any workstation – all of the code is shared and can be worked on by anyone. Check-ins are signed by both members of the pair.

The second picture shows their dashboard and the status of their continuous integration.

[Photo: the team’s dashboard and continuous integration status]

They use Jenkins to manage the automatic build, test and deployment of code. The bottom screen is green because the build for that evening was successful. You can also see the number of tests (or checks) that were run. Those are the tests that run as part of the build, and they run FAST. All twenty-some-odd thousand of those tests run in a matter of seconds. These tests use mocks and other test doubles, which allow them to run fast but also to check only the specific code addressed. Having a fast-running, comprehensive suite like that makes it so much easier to refactor and redesign because you’ll get immediate feedback if you’ve broken anything else in the process. It’s a great feeling!
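For anyone who hasn’t seen this style of test, here’s a small hypothetical example of the kind of fast, isolated check such a suite is built from (I’m using Mockito for the test double; all the names are invented):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Hypothetical example; names and numbers are invented for illustration.
class TicketPricerTest {

    interface PriceFeed {                       // a slow external dependency in real life
        double basePriceFor(String showtime);
    }

    static class TicketPricer {
        private final PriceFeed feed;
        TicketPricer(PriceFeed feed) { this.feed = feed; }
        double matineePrice(String showtime) {
            return feed.basePriceFor(showtime) * 0.8;   // 20% matinee discount
        }
    }

    @Test
    void matineeTicketsAreTwentyPercentOff() {
        // The mock replies instantly, so thousands of checks like this run in seconds.
        PriceFeed feed = mock(PriceFeed.class);
        when(feed.basePriceFor("1:00 PM")).thenReturn(10.0);

        assertEquals(8.0, new TicketPricer(feed).matineePrice("1:00 PM"), 0.001);
    }
}
```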

The top screen shows the status of the automated functional and/or acceptance tests – see quadrant two of the testing quadrants! These tests are more integrated and (as a result) slower; they work at the story/feature level and are of more interest to the customer/product owner.

The top screen also rotates to other information radiators important to the team, like a burndown chart.

Much thanks to the friendly folks at Allure Global for sharing their environment and processes with me!


Dammit!

[Image: Blink-182, “Dammit”]

The following is a transcript from an actual IM conversation in the recent past…

Eric Jacobson: I have a question for your blog.

EJ: I just got my ass kicked trying to convince a project team to stop calling features done without the automation done.

ManageToTest: That’s illegal! Call HR!

EJ: I had like 10 people telling me I was wrong.

MtT: That’s interesting. So why is it wrong? Does it not need to be done? What will you do in the meantime? When will there be time to do it if not now? Why is time then “less valuable” than it is now?

EJ: Their bottom-line excuse is that automation is important, but not part of the “critical path.”

MtT: Oh really? Critical path for what? This particular feature?

EJ: They say, “Yeah, it would be nice to have it, but we can go to prod without it.”

MtT: Ah, yes, of course.

MtT: Sure you can put it into prod, but what about the next feature? When will you do automation for that? Oh, well, I guess you can put that off too.

MtT: So when will you do the automation from that first feature? Is there ever a time there aren’t any features?

MtT: What is the value of having this automation? If there is none, then they are correct. Don’t do it.

MtT: But perhaps we aren’t understanding what the value is.

MtT: The value should be the safety net of knowing that new functionality hasn’t broken existing functionality. Right now the only way to do that is by doing manual regression, and the time it takes to do that will increase over time.

MtT: So if you don’t automate, you’ll have to cut some of that manual regression out, creating risk.

MtT: That amount of risk is what you need to determine. That is the “value” of the automation.

EJ: …great stuff. I wish I would have said it in my meeting.

EJ: Dammit!

MtT: So you’re saying I should blog about this?


The BEST Laid Plans

Here’s a fresh question from the ManageToTest Mailbag!

Hey Mr. ManageToTest,  do you happen to have any standards or guides around Quality Control Test Plans?  We have a few different QC groups doing things slightly different ways….no formal documentation on which way is the “best” way, or best practices around how to write a test plan.

First off, there’s no “best” way to write a test plan. Depending on the context, some methods will be more effective than others. Also, you probably need to define what you mean by a test plan, and what it is you want to accomplish with it, as the definitions and needs vary between contexts.

That being said, I can provide some examples of test planning based on my experience:

The first is the IEEE model, which is fairly common within command-and-control environments or where such documentation is required by regulation or policy.

I find this method to be wasteful and often redundant. Such plans are written and then never referred to again, and as projects change, these documents are rarely updated, so they present an inaccurate record of what was actually done.

A search on your company’s document repository will probably yield several examples of such a document or template, so if you really want to use this style you probably already have it in practice somewhere in your organization.

In many cases, a checklist of capabilities and areas to test is appropriate. This is an excellent way to lay the groundwork for a testing session or the creation of a set of test cases. It complements discussions of user stories when eliciting acceptance tests, and delivery story discussions when eliciting tasks and estimating work. Here’s a great article from James Whittaker (Director of Testing at Google) where he describes a method of creating and organizing such a plan.

Another method is Session-Based Test Management (SBTM), created by James Bach. A charter is written that describes the goal of the testing session. A list of charters defines the overall plan for the testing effort (and gives some predictive value based on the number of charters). Note that this method is a hybrid of a test plan and test cases. The act of engaging in the testing session produces the test cases on the fly, resulting in a record of what was actually tested. The charters completed represent your test coverage.

Any one of these ways may be effective in a particular context, or perhaps a combination of things from each of them would suit better.  And there are certainly other ways of generating and presenting test plans.  Again, I would stress that no method is the “best” across all contexts.


Roles and Responsibilities


Me – “The team is responsible for Quality!”

Dev Mgr – “Yeah, I hear that, but what will help is to go one level deeper – let’s realistically identify what can and should be done by “developer” and “tester” resources.”

Lots of managers and their employees (and HR) struggle with this concept of team responsibility.

Clearly, there will sometimes be tasks on stories that require a level of coding skill only someone with the title of Developer will have. There also will be some tasks that would be much more efficiently done by someone with years of testing experience. This doesn’t change the fact that when you commit to stories in a sprint, you’re expected to get those stories done. It doesn’t matter who is actually available to do the work.

Let me explain.

At the beginning of each sprint, the delivery team takes “Ready” stories from the backlog, works through them, creates tasks, and estimates them. The team should know what their capacity is for that period of time. This means you should have an idea of how much time it will take to code and test and otherwise produce that work. All of it, not just the coding. If people are unavailable that sprint, clearly you should take on less work. If some of the test-related tasks don’t fit within the capacity of the testers you have on your team, you have a choice: either remove some work from the sprint and use the extra time for the programmers to do training and/or help with other projects outside of the team, or have other members of the team take on the remaining testing tasks.

If testing, or any other specialization on the team, is causing a bottleneck, the answer isn’t to shove more work into the bottle! What you do is work together to keep the flow going. If that means programmers doing more testing, fine. If that means testers doing more tasks that traditionally fall to programmers, fine. If that means analysts doing the same, or any other permutation, fine.

If something doesn’t get coded – that’s not a single programmer’s fault. The team should have known about it. If something doesn’t get tested, that’s not a tester’s fault! It’s the team’s fault for not making sure it got done. Blaming isn’t a productive method of getting things done. We succeed as a team and we fail as a team. And when we succeed, we reinforce and build upon the things we’re doing right. When we fail, we recognize the reasons for the failure and work to improve for the next sprint.

Let’s go back to the discussion – assume I said all that stuff above.

Dev Mgr – “Alright – so we come up with tasks together and then assign them to people – that will lock them down and we’ll be able to tell Project Management what resources we have free, or what resources we need to get the work done in this sprint.”

Me – /sigh

Note that making sure you have the capacity isn’t the same thing as assigning tasks to people. Tasks, optimally, should be pulled by people capable of the work only at the point in time that they can start working on them. It isn’t efficient to have a task assigned to someone who’s working on something else while another person could be doing that task. Now, you may *know*, simply because of the nature of the task, that a particular individual has to do it. That’s fine. But be aware that’s a risk you’re taking on. If nobody else is capable of that work, then you have to be very sure that particular individual has the capacity in that sprint to do it. If you have a lot of work like that, it would be advisable to pair that person with someone to work on the tasks together. After a time, you’ll have two people capable of doing that kind of work and you’ve lessened your risk!

Me – “So you’ve got it now, right? You’ve got a cross-functional team that self-organizes. You don’t need to assign tasks to anyone or wrangle with Project Management over time estimates and the availability of people.”

Dev Mgr – “Resources.”

Me – /sigh
