Archive for category Automation

Principles of Agile Testing

A few weeks ago my company had an internal conference called “The Fiserv Leaders Conference on Testing” and I, along with some others, was tapped to do a talk on Agile and testing. We decided to write down what we feel are the principles of Agile testing (similar to the principles laid down in the Agile Manifesto). What follows isn’t quite what we ended up with as a group, but it is my list, so I’ll record it here.

Before I do, I should acknowledge this isn’t a new endeavor. Others have done this before. I particularly like this one from Karen Greaves and Sam Laing, and I’ve certainly borrowed from those ideas in coming up with the list below. I don’t think it breaks new ground, but it’s where my head currently is, and it’s how I frame testing in my organization.

Testing is a whole team activity:

Everyone is responsible for quality. When a story, or work item, is marked done, that means the team is agreeing that the work is done — the whole team, not just the testers. Everyone is accountable for it, so everyone should be involved in testing. Anyone on the team can take on the “role” of tester, just as anyone on the team can program, or write a story. Working together is the best way to produce high-quality work.

Continuous feedback is essential:

Agile works primarily because of its built-in feedback loops, which happen much more often than in waterfall or even in earlier iterative frameworks (Spiral, RUP). Scrum, for instance, has daily feedback via the daily Scrum. Then there’s feedback at every sprint boundary with the Review, Retrospective and Sprint Planning, and beyond that, release planning. When you include XP and other common agile testing practices, you create even more feedback loops on top of these. The Agile Testing Quadrants model expresses that quite well (and perhaps this does it even better). One thing to note is that “checkpoints” and “milestones” are NOT feedback. Those checks come too late (by definition). So, for example, organization-mandated control gates like sign-offs, penetration testing, or performance testing that must occur before a product is released are not valuable sources of feedback. They occur too late for us to react. Failures at these points almost always result in delays and overruns, which decreases value. Check out this prezi for illustration.

Agile testing demands flexibility and the ability to respond to change:

Working on an agile team should not be like working on an assembly line. Products change, people change, environments change, organizations change; to expect one’s daily role or work to stay the same through all of that is unrealistic at best. A core principle of Agile is the ability not only to respond to change, but to be open to it. At a practical level, that means things like:

  • Changing the way you write tests to suit the needs of the team
  • Adopting a new tool to solve a problem, or removing a tool that no longer provides enough value to the team
  • Taking time from the sprint to learn and practice new techniques
  • Giving up some responsibilities or picking up new ones to improve the throughput of the team

Be a source of information:

Testers are not the assurers of quality (and honestly, they never were). Testers provide information about how the product does, or doesn’t, behave. And that is valuable! So provide it. Update stories with information about system behaviour. Be proactive with updates to the product owner. Continuously work with programmers to match up expectations with reality. Communicate with other teams and stakeholders who have questions about the system. And above all, have the courage to provide that information even if you think it won’t be well received!

Simplicity is a virtue:

When providing information, or writing tests, or executing tests, make it as simple as possible. Complexity is the enemy of information and should be avoided. When communicating about the system, don’t hide behind misleading metrics or test documentation that nobody reads. CYA is not a key tenet of Agile! When writing a test, remove useless or obfuscating information. For example, you shouldn’t need exhaustive steps – you are part of a team that knows your software; you’re no longer throwing it over a wall to a test team that has no familiarity with the product. Execution should be as simple as possible. Most testing does not need to be integrated across services, products and platforms! You are usually testing a single behaviour within the confines of your system. If so, then make sure that’s exactly what you test – use test doubles (mocks, shims, stubs, etc.) as often as possible. Use tools or automation to make execution easier and faster if warranted. If automating, make it obvious what you are checking – name checks properly and make it easy to see that the result matches the expectation. Design systems with testability in mind – make it easy (or at least possible!) to test the individual parts of the system.
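
As a minimal sketch of what that can look like in practice – all the names below are hypothetical, and a JUnit 4-style runner is assumed – a hand-rolled stub stands in for an external tax service so the check exercises only our own pricing logic, and the check’s name states exactly which behaviour is expected:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical example: the real system would call a remote tax service;
    // the stub keeps this check fast and focused on our own pricing logic.
    public class PricingTest {

        interface TaxService {
            double rateFor(String state);
        }

        static class StubTaxService implements TaxService {
            public double rateFor(String state) {
                return 0.07; // canned response -- no network, no shared environment
            }
        }

        static class PriceCalculator {
            private final TaxService taxes;
            PriceCalculator(TaxService taxes) { this.taxes = taxes; }
            double total(double subtotal, String state) {
                return subtotal * (1 + taxes.rateFor(state));
            }
        }

        // The name says exactly what behaviour is being checked.
        @Test
        public void totalIncludesSalesTaxForTheCustomersState() {
            PriceCalculator calc = new PriceCalculator(new StubTaxService());
            assertEquals(10.70, calc.total(10.00, "GA"), 0.001);
        }
    }

Anyone scanning a failure report sees immediately which behaviour broke, with no setup noise to wade through.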


Agile at Daxko

I had the opportunity to visit Daxko through Agile Atlanta and their Agile Tours. I had already heard of the company and their foray into agile through Claire Moss, who is, by the way, an excellent world-renowned tester and local Atlantan. That’s why, when I heard about the Tour, I knew I had to sign up.

The team we were introduced to was fairly typical. It’s a small team with a couple of programmers, a tester, and a UX designer. Additionally, they have a design lead and a development lead who, along with the designer, form a Product Owner team. Now, it’s not unusual to see a PO team these days, but I did think it interesting to see one on such a small team. I think it’s common for teams to feel a bit burdened by the “grooming” process; they’d rather be sprinting on current work than thinking about future plans. On the other hand, leaving the entire grooming process up to a single PO can be overwhelming.

The product owner team refines ideas until they’re ready to be worked on as stories for a given sprint.

This team appears very committed to customer experience. The fact that they have a dedicated UX designer who makes it a point to speak with and visit customers on a regular basis is a testament to that. The designer showed how she spends time (along with input from the team) making exemplars of their customers through the use of personas. She was very quick to point out that these personas are evolving and can’t, by their very nature, be viewed as explicit representations of what all their customers are like, but they’re a helpful guide to the types of features, and the type of presentation, they should be shooting for in their products. This analysis, along with frequent user-testing of concepts, helps create ideas that the product owner team then grooms into actual stories that the team will work on. (Another point driven home by the designer was that she no longer has to create wireframes of interfaces ahead of time. If she wants to prove out a design, she creates it right within a browser. When working on stories, she can pair with developers and build out “real” UI components and shape workflows through the system within the sprint! Shocking!) The image to the left shows their epic board (my term). They take these large, somewhat unproven concepts and groom them, with increasing involvement from the team as they move to the right, until they are ready to be worked on within a sprint.

The team uses story mapping to help plan out their stories and schedule.

To determine what stories to work on and when, they use Story Mapping. In fact, they brought in Jeff Patton to show them how to do it. The image on the right shows an example. The top row of orange cards represents the actions their users take, in the general order they are performed. This, again, is learned from research done directly with customers. The most essential parts are then written on the yellow cards. That first row of yellow cards forms what’s commonly known as a “walking skeleton”: the basic functionality you’d need to provide business value. The team then continued to add functionality further down on the map; the lower the importance, the lower it appears. The blue tape shows how the pieces of functionality should be released as a collection, so this board shows that there are likely to be three planned pushes of functionality.

A list of planned releases of functionality color-coded by confidence.

Which leads us to the roadmap depicted to the left. It shows a listing of planned dates and the content that will be delivered on those dates. Note the color-coding. The items in green are dates they are fairly certain they can meet, while the confidence decreases the further out things go. Due to the nature of their business, making firm commitments even five months out is a dicey proposition, since market needs and priorities can change quickly. Thus, even though the team feels pretty confident they could hit a date that far in the future, it doesn’t make sense to commit to it until they know more. Seems like a decent implementation of Real Options to me. I got the impression that the rest of the company isn’t completely enamored of this practice. I can imagine that there may be some larger efforts that absolutely need to be delivered by a certain date for business reasons (say, fiscal year end, for instance). In those circumstances it’s not a question of when you deliver, but what. Now, that’s the relatively basic agile concept of flexing scope while fixing the date. If everything is flexible, well, that’s great, and I’ll be interested to see if they continue to have success doing things this way.

A basic Kanban-style board with some interesting wrinkles.

And that’s where we transition from the Product Owner team to the rest of the team, what they term the Product Team. On the right you can see how they track their progress. Much of it should be fairly obvious to anyone with experience with Kanban boards: the green stickies on the left represent the stories, the yellow stickies represent tasks, the red stickies represent defects and the blue stickies represent acceptance tests. The yellow stickies get moved to the right as they go from “In Progress” to “Done.” There are a couple of differences from most boards that I’ve seen that make things interesting.

First, note the little magnets connected to some of the tasks. Those represent team members. Each person gets three, and you can have no more than that on the board at any time. This keeps their WIP limit under control. Genius! Second, do you see those darts on the upper left of the image? They’re used to flag a note, letting people know that someone is stuck on that item and needs help. Basically, it’s an indication that the team should treat the item with urgency and collaborate on a response ahead of anything else. Third, note that there is a division a little past halfway on the board. The acceptance tests are tracked as specific items on the board, and only if those acceptance tests are all in the Pass column can a story be considered done (even if all the tasks are in the Done column). And really, this is the true indication that you’ve completed some work – does it do what we need it to do? There is a growing trend of folks tracking acceptance tests, rather than stories/tasks, as the true indication of whether work is Done. I like this idea quite a bit. One of the product owners described it as the difference between output and outcome. You need to have output from the team, but the reason you do it is to provide the desired outcome; ultimately, that is what you want to track.

Finally, look down at the bottom left of the picture. You see a set of stories that are “on deck.” These are groomed and ready to be worked on. If work gets done early in the sprint, or even if priorities change, one or more of these can easily be added to the board. This follows another basic agile rule of thumb: have at least two sprints of work groomed and ready at all times.

Big visible information is available all throughout the Daxko team room.

It’s critical to be absolutely clear on what it means to your team to be “Done” with a story. I like how this team has not only defined this, but made it into a big sign visible to anyone around. It’s also evident that this is not boilerplate but specific to this team (see the mention of browser types and resolutions) and that it’s evolving. The team was very explicit that they understand that any process or practice they have is only in use as long as it provides value to the team. If they deem it doesn’t, they change it or remove it. One example is the burnup chart to the left. They mentioned that this was the first sprint they were tracking in this way. I pointed out that a flat line for the first four days of the sprint was of dubious value; perhaps smaller stories would be in order to lower the risk of not delivering anything in a sprint. Additionally, they’re also just starting to track cycle time. They define it as the time from when they begin work on a story to when it’s available for customer use (they’re also tracking the time at which a story is done). I’ll be interested to find out how that information helps them in the future. And speaking of completing stories, in the bottom left of the picture you can see the metal frame of a large bell. They ring that bell whenever a story is complete. There’s always room for a nice Pavlovian reward if it helps the team’s morale!

After each sprint, the team has a retrospective. They showed off a whiteboard from the last session that used the metaphor of the team as a ship with wind driving the team forward and anchors weighing the team down. Hey, whatever floats your boat! The ScrumMaster said that he changes up the metaphors and retro techniques to help keep things lively and fresh. The team also does demonstrations of their work to the greater company. Every other Tuesday, this team, and the other teams across the company, have a video conference where each team gets a period of time to show off their work. I’ve noticed that the demo is one of the first things that agile teams drop from the standard Scrum ritual list and I’m glad to see the practice alive and well here.

After release, the team also monitors their applications. To do this, they use services from New Relic. It provides the team with a real-time view into what their customers are experiencing. The information presented was impressive and looked very nice. If your context allows a third party to monitor your application performance, it’s certainly something to look into.

While the team certainly seems happy and successful, they are continually looking for ways to improve. They posted a list of some challenge areas right on the wall. One is the challenge of distributed team members. They recently hired a remote programmer and while they do store story information and progress electronically, that individual doesn’t get the benefit of the environment and the physical board and charts the team uses. They’ve been experimenting with a controllable camera, but it appears the jury is still out on that. I suggested they check out the work of Joe Moore from Pivotal Labs who writes a lot about tools and techniques for integrating remote team members.

I certainly would like to thank Andrew Fuqua (@andrewmfuqua) and Claire and the rest of the Daxko team for their openness and willingness to share their processes and ideas. I wish that I had been able to stay longer and discuss more things…perhaps next time.

Leave a comment

Failing Fast

My company has a Director of Innovation. Joe is, predictably, a pretty inspiring guy and often sends out emails with interesting thoughts and ideas to spur innovation. Recently he sent out a message talking about the benefits of failing fast. In it he referenced the Gossamer Condor, the first human-powered aircraft capable of flying in a controlled fashion for more than a mile. Paul MacCready won the contest in 1977, nearly 20 years after it began. Why did it take so long? The biggest challenge was that failure was too costly. The designs were often good, but cycles of assembling the materials, testing, repairing and retrying could take as long as a year. As the attempts wore on, it became more and more difficult to get them financed.

MacCready took a different approach. He knew that no matter how good his designs were, the likely result would be failure, just like all the attempts before his. So he expected failure and built his prototypes with that in mind. He used common, cheap materials that were easy to work with and easy to repair and replace. As a result, he could test new designs in a matter of weeks rather than years.

So how do we apply fail-fast thinking in software development? Here are some approaches, techniques and models born of this philosophy that can have effects at all levels of your organization:

Team Level:
Test-Driven Development (TDD) –
This is an approach used by programmers at the lowest level of software development. The idea is that you write a test that initially fails because the code hasn’t been written yet. Then you write code until it passes. Certainly this helps you fail fast; in fact, it encourages you to fail first! The true power, though, is in the freedom it gives developers to try things they otherwise wouldn’t for fear of failure. With a battery of tests already created, the programmer is free to experiment and try out more innovative designs because they know they have that safety net. Failures are reported instantaneously, as are successes.
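
A minimal red-green sketch, with invented names and a JUnit 4-style runner assumed:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class RomanNumeralTest {
        // Red: this check is written first, and fails (it won't even compile)
        // because RomanNumeral.of() doesn't exist yet.
        @Test
        public void fourIsWrittenAsIV() {
            assertEquals("IV", RomanNumeral.of(4));
        }
    }

    // Green: write just enough code to make the check pass -- the classic
    // "fake it" step. The next failing check (say, nineIsWrittenAsIX)
    // forces the real generalization, and the growing suite is the safety
    // net that makes refactoring along the way safe.
    class RomanNumeral {
        static String of(int n) {
            return "IV";
        }
    }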

Executable Specifications –
With this approach, analysts and designers work with programmers and testers to create requirements written as examples that can be hooked to the code. When the code does what the business wants, these examples pass. If it doesn’t, they won’t. Now that’s fast failure. In the past, such problems (programmers and testers misunderstanding requirements) might not have been caught until much later in the process. Of course, all this presumes the design or feature was correct in the first place.

If you’ve been doing this all along, a new feature might be proposed; the tests are written, the feature is coded and the new tests pass, but an older test fails. Just as in TDD, you now have a safety net of “living” documentation. Before, you had to pore through requirements documents to find the inconsistencies; now all you need to do is run the specifications. Additionally, if done properly, these specifications become the documentation. There’s no worry about them becoming out of date or inaccurate with respect to the code, because they are directly tied to the code.
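
As an illustration – the domain, the table and the class names are all invented here – an executable specification in a FitNesse-style decision table might read:

    |shipping fee                  |
    |order total|destination|fee? |
    |25.00      |domestic   |4.99 |
    |75.00      |domestic   |0.00 |

and a small Java fixture is what hooks those examples to the code. FitNesse calls the setters for the input columns of each row, then compares the return value of fee() against the expected cell:

    // Hypothetical SLIM decision-table fixture. In a real project, fee()
    // would delegate straight to production code instead of the inline
    // stand-in shown here.
    public class ShippingFee {
        private double orderTotal;
        private String destination;

        public void setOrderTotal(double orderTotal) { this.orderTotal = orderTotal; }
        public void setDestination(String destination) { this.destination = destination; }

        public double fee() {
            return new ShippingCalculator().feeFor(orderTotal, destination);
        }

        // Stand-in for the production code under test.
        static class ShippingCalculator {
            double feeFor(double total, String destination) {
                return total >= 50.00 ? 0.00 : 4.99; // free shipping over $50
            }
        }
    }

If the business changes the free-shipping threshold, the table row fails the moment the rule and the code disagree – that’s the fast failure.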

Project Level:
Short Iterations –
This is a tenet of Agile methodologies. Teams complete releasable units of prioritized work in a short and predictable amount of time. If we need to change course because of some unexpected event, we don’t have to scrap the foundations of work that won’t be complete for months; we only have to scrap, at most, an iteration’s worth of work (or, for you Kanban folks, the cycle time of the largest unit of work in process).

Small, More Frequent Releases –
In some situations there are restrictions on how frequently we can release and how small we can make our releases, due to overhead (training, deployment, regulatory testing), but we should be working to make them as small as possible. Just as with iterations, if we make the wrong call on a feature, we can react faster – instead of six months to a year to implement a change, we can potentially do it in an iteration’s time, without the disruption to the business that comes with emergency hotfixes, patches, and the like.

Business Level:
Feature Flow and Prioritization –
Too often, businesses decide on goals unique to each business unit, and then those units create projects independently to meet those goals. Often these projects compete with each other for resources and capital when only parts of them are actually directed at the business goals. We shouldn’t be focused on the projects, but rather on the features within those projects that are most important. Models exist that show how to link these goals to prioritized features. The business units are then no longer concerned with independent projects; rather, they work on the aspects of each feature they have the capability to produce. The business units can then speak in terms of continuous delivery of features within products, rather than entire projects released all at once. This keeps us focused on what we believe are the most important things, given the best information we have at the time. If we’re wrong, we can adjust our backlog of features based on the new priorities. The alternative is to release updates based on information from a year or more ago, and to cancel projects with nothing to show for them.


Contract and Integration Service Testing in a Componentized Enterprise Model

Contract Testing

(Presented at Agile Atlanta on May 1st, 2012 – Prezi)

I’ve been working a lot over the past six months on creating an automation framework for our service development teams. In our relatively large organization, we have teams devoted to different horizontal strata of software development. So, for good or ill, we have database teams, mainframe/API teams, service layer teams, and UI teams. If you’re at that service layer, and that’s all you do, you should be taking a different approach to testing than if you had to do all of the testing for vertical slices of product functionality. With that in mind, here’s a proposal for a presentation on the strategies and the automation framework we’ve come up with. I welcome any feedback or thoughts!

————————————————————–

How do we ensure appropriate testing and feedback for large Agile Enterprise projects where the value delivered may not be end-user-facing features? Such is the case for many enterprises working to establish a Service Oriented Architecture. A lot of knowledge exists for agile teams delivering customer-facing products. Much of that work focuses on acceptance tests being written in clear business language that is understandable by the whole team and stakeholders. This approach becomes more complicated for component delivery teams developing the Shared Services often seen in large Enterprise development organizations.

Services can be tested in a number of ways and the approach that’s best can vary greatly based on the context of the application and environment. The presentation discusses common patterns of testing services and focuses on an acceptance test solution appropriate for teams working on the shared service layer within a componentized Enterprise model. The solution is based on a “universal” fixture that works for any type of object-based service and is accessed via table-based acceptance tests in FitNesse.
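
The framework itself is the subject of the talk rather than this proposal, but to give a flavor of the idea, here is a heavily simplified sketch of what a “universal” fixture can look like. Every name and design decision below is my own illustration, not the actual framework: one reflective fixture drives any object-based service from a table, instead of a bespoke fixture per service.

    import java.lang.reflect.Method;

    // Simplified sketch of a "universal" service fixture: a FitNesse table
    // names the service class, an operation and an argument, and the fixture
    // invokes it reflectively. A real framework would also need typed and
    // multiple arguments, overload resolution, and error reporting.
    public class UniversalServiceFixture {
        private Object service;
        private Object lastResult;

        // Table cell: the fully qualified class name of the service under test.
        public void setService(String className) throws Exception {
            service = Class.forName(className).getDeclaredConstructor().newInstance();
        }

        // Table cells: an operation name plus a single string argument.
        public void call(String operation, String argument) throws Exception {
            Method m = service.getClass().getMethod(operation, String.class);
            lastResult = m.invoke(service, argument);
        }

        // Output column: the last result, rendered for comparison in the table.
        public String result() {
            return String.valueOf(lastResult);
        }
    }

The point of the pattern is that the service teams write only tables; the plumbing is written once.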

Process/Mechanics
The talk will be of a standard presentation format with slides and discussion. The first part of the presentation provides background on testing in an Agile Enterprise model and the various approaches to service testing (15 minutes). The second part discusses the creation of a specific framework to aid in service contract verification, helpful for specialized teams working in a componentized Enterprise development model (15 minutes). The last part gives examples of basic usage and of how the framework is used in a real-world environment (30 minutes).

Learning outcomes
  • Knowledge of how Agile in the Enterprise affects our approaches to testing and feedback
  • Understanding of various methods of service testing and how to choose what’s right for your team
  • Details and examples of how to employ a framework for testing services appropriate for the componentized Enterprise model


Agile Allure

Through the wonderful Agile Atlanta, I got to visit another company’s location this week – Allure Global. They make dynamic signage for movie theaters – concessions, ticketing, etc. They are an Extreme Programming shop, which means they use XP practices like TDD and collective code ownership. They have two-week iterations, they code in Java, and they use Jira as their project management tool. I took a couple of pictures.

This first picture shows the workspace for one of their delivery teams.


Note the monitors side by side at each station. That’s because they do pair programming, another of the XP practices. When they come in to work, the team members pair up and start working on the day’s tasks. They don’t keep the same pairs, and anyone can sit at any workstation – all of the code is shared and can be worked on by anyone. Checkins are signed by both members of the pair.

The second picture shows their dashboard and the status of their continuous integration.


They use Jenkins to manage the automatic build, test and deployment of code. The bottom screen is green because the build for that evening was successful. You can also see the number of tests (or checks) that were run. Those are the tests that run as part of the build, and they run FAST: all twenty-some-thousand of them run in a matter of seconds. These tests use mocks and other test doubles, which allow them to run fast but also to check only the specific code addressed. Having a fast-running, comprehensive suite like that makes it so much easier to refactor and redesign, because you’ll get immediate feedback if you’ve broken anything else in the process. It’s a great feeling!
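
To show the kind of test that keeps a suite like that fast – a hypothetical example, not Allure’s actual code – here is a check that uses a mocking library (Mockito, in this sketch) rather than the hand-rolled stub shown in an earlier post, so the interaction itself can also be verified:

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.*;

    public class CheckoutTest {

        interface PaymentGateway {
            boolean charge(String account, double amount);
        }

        static class Checkout {
            private final PaymentGateway gateway;
            Checkout(PaymentGateway gateway) { this.gateway = gateway; }
            boolean purchase(String account, double amount) {
                return gateway.charge(account, amount);
            }
        }

        @Test
        public void purchaseChargesTheCustomersAccount() {
            // The mock replaces the real gateway, so the check runs in
            // milliseconds and exercises only the checkout logic itself.
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.charge("acct-42", 9.99)).thenReturn(true);

            assertTrue(new Checkout(gateway).purchase("acct-42", 9.99));

            // Verify the interaction without ever touching a real service.
            verify(gateway).charge("acct-42", 9.99);
        }
    }

Because no real payment service, database or network is involved, thousands of checks in this style can run on every build in seconds, which is what makes that green screen trustworthy.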

The top screen shows the status of the automated functional and/or acceptance tests – see quadrant two of the testing quadrants! These tests are more integrated and, as a result, slower; they work at the story/feature level and are of more interest to the customer/product owner.

The top screen also rotates to other information radiators important to the team, like a burndown chart.

Much thanks to the friendly folks at Allure Global for sharing their environment and processes with me!


Dammit!

Blink 182 - Dammit

The following is a transcript from an actual IM conversation in the recent past…

Eric Jacobson: I have a question for your blog.

EJ: I just got my ass kicked trying to convince a project team to stop calling features done without the automation done.

ManageToTest: That’s illegal! Call HR!

EJ: I had like 10 people telling me I was wrong.

MtT: That’s interesting. So why is it wrong? Does it not need to be done? What will you do in the meantime? When will there be time to do it if not now? Why is time then “less valuable” than it is now?

EJ: Their bottom-line excuse is that automation is important, but not part of the “critical path.”

MtT: Oh really? Critical path for what? This particular feature?

EJ: They say, “Yeah, it would be nice to have it, but we can go to prod without it.”

MtT: Ah, yes, of course.

MtT: Sure you can put it into prod, but what about the next feature? When will you do automation for that? Oh, well, I guess you can put that off too.

MtT: So when will you do the automation from that first feature? Is there ever a time there aren’t any features?

MtT: What is the value of having this automation? If there is none, then they are correct. Don’t do it.

MtT: But perhaps we aren’t understanding what the value is.

MtT: The value should be the safety net of knowing that new functionality hasn’t broken existing functionality. Right now the only way to do that is by doing manual regression, and the time it takes to do that will increase over time.

MtT: So if you don’t automate, you’ll have to cut some of that manual regression out, creating risk.

MtT: That amount of risk is what you need to determine. That is the “value” of the automation.

EJ: …great stuff. I wish I would have said it in my meeting.

EJ: Dammit!

MtT: So you’re saying I should blog about this?
