Archive for category Management
I had the opportunity to visit Daxko through Agile Atlanta and their Agile Tours. I had already heard of the company and their foray into agile through Claire Moss, who is, by the way, an excellent world-renowned tester and local Atlantan. That’s why when I heard about the Tour, I knew I had to sign up.
The team we were introduced to was fairly typical. It’s a small team with a couple of programmers, a tester, and a UX designer. Additionally they have a design lead and a development lead, who along with the designer formed a Product Owner team. Now it’s not unusual to see a PO team these days, but I did think it interesting to see one on such a small team. I think it’s common for teams to be burdened a bit by the “grooming” process; they’d rather be sprinting on current work than thinking about future plans. On the other hand, leaving the entire grooming process up to a single PO can be overwhelming.
This team appears very committed to customer experience. The fact that they have a dedicated UX designer who makes it a point to speak with and visit customers on a regular basis is a testament to that. The designer showed how she spends time (along with input from the team) making exemplars of their customers through the use of personas. She was very quick to point out that these personas are evolving and can’t, by their very nature, be viewed as explicit representations of what all their customers are like, but they’re a helpful guide for what types of features and the type of presentation they should be shooting for in their products. This analysis, along with frequent user-testing of concepts, helps create ideas that the product owner team then grooms into actual stories that the team will work on. (Another point driven home by the designer was that she no longer has to create wireframes of interfaces ahead of time. If she wants to prove out a design, she creates it right within a browser. When working on stories, she can pair with developers and build out “real” UI components and shape workflows through the system within the sprint! Shocking!) The image to the left shows their epic board (my term). They take these large, and somewhat unproven, concepts and groom them, with increasing involvement from the team as they move to the right, until they are ready to be worked on within a sprint.
To determine what stories to work on and when, they use Story Mapping. In fact, they brought in Jeff Patton to help show them how to do it. The image on the right shows an example. That top row of orange cards represents the actions their users take, in the general order they are performed. This again is learned from the research done directly with customers. Then, the most essential parts are written on the yellow cards. That first row of yellow cards forms what’s commonly known as a “walking skeleton” or the basic functionality you’d need to provide business value. Then the team continued to add additional functionality further down on the map; the lower the importance, the lower they appear. The blue tape shows how the pieces of functionality should be released as a collection, so this board shows that there are likely to be three planned pushes of functionality.
Which leads us to the Roadmap depicted to the left. It shows a listing of planned dates and the content that will be delivered on those dates. Note the color-coding. The items in green are dates they are fairly certain they can meet; as things go out further, the confidence decreases. Due to the nature of their business, making firm commitments even 5 months out is a dicey proposition since market needs and priorities can change quickly. Thus, even though the team feels pretty confident they could hit a date that far in the future, it doesn’t make sense to commit to it until they know more information. Seems like a decent implementation of Real Options to me. I got the impression that the rest of the company isn’t completely enamored of this practice. I can imagine that there may be some larger efforts that absolutely need to be delivered by a certain date for business reasons (say, fiscal year end, for instance). In those circumstances it’s not a question of when you deliver, but what. Now that’s a relatively basic agile concept of changing scope but fixing date. If everything is flexible, well, that’s great and I’ll be interested to see if they continue to have success doing things this way.
And that’s where we transition from the Product Owner team to the rest of the team, what they term the Product Team. On the right you can see how they track their progress. Much of it should be fairly obvious to anyone with experience with Kanban boards. The green stickies on the left represent the stories, the yellow stickies represent tasks, the red stickies represent defects, and the blue stickies represent acceptance tests. The yellow stickies get moved to the right as they go from “In Progress” to “Done.” There are a couple of differences from most boards that I’ve seen that make things interesting. First, note the little magnets connected to some of the tasks. Those represent team members. Each person gets three and can have no more than three on the board at any time. This keeps their WIP limit under control. Genius! Second, do you see those darts on the upper left of the image? They are used to flag a note and let people know that someone is stuck on that item and needs help. Basically it’s an indication that the team should treat it with urgency and collaborate on a response ahead of anything else. Third, note that there is a division a little past halfway on the board. The acceptance tests are tracked as specific items on the board. Only if those acceptance tests are all in the Pass column can a story be considered done (even if all the tasks are in the Done column). And really, this is the true indication that you’ve completed some work – does it do what we need it to do? There is a growing trend of folks tracking acceptance tests rather than stories/tasks as the true indication of whether work is Done. I like this idea quite a bit. One of the product owners described it as the difference between Output and Outcome. You need to have output from the team, but the reason you do it is to provide the desired outcome; ultimately, that is what you want to track. Finally, look down at the bottom left of the picture.
You see a set of stories that are “on deck.” These are groomed and ready to be worked on. If work gets done early in the sprint, or even if priorities change, one or more of these can easily be added to the board. This is another basic agile rule-of-thumb that you should have at least 2 sprints of work groomed and ready at all times.
It’s critical to be absolutely clear on what it means to your team to be “Done” with a story. I like how this team has not only defined this, but made it into a big sign and visible to anyone around. It’s also evident that this is not boilerplate but specific to this team (see the mention of browser types and resolutions) and that it’s evolving. The team was very explicit that they understand that any process or practice they have is only in use as long as it provides value to the team. If they deem it isn’t, they change it or remove it. One example is the burnup chart to the left. They mentioned that this was the first sprint that they were tracking in this way. I pointed out that a flat line for the first four days of the sprint was of dubious value. Perhaps smaller stories would be in order to lower the risk of not delivering anything in a sprint. Additionally, they’re also just starting to track cycle-time. They define it as the time they begin work on a story to the time it’s available for customer use (they’re also tracking the time at which a story is done). I’ll be interested to find out how that information will help them in the future. And speaking of completing stories, in the bottom left of the picture you can see the metal frame of a large bell. They ring that bell whenever a story is complete. There’s always room for a nice Pavlovian reward if it helps the team’s morale!
After each sprint, the team has a retrospective. They showed off a whiteboard from the last session that used the metaphor of the team as a ship with wind driving the team forward and anchors weighing the team down. Hey, whatever floats your boat! The ScrumMaster said that he changes up the metaphors and retro techniques to help keep things lively and fresh. The team also does demonstrations of their work to the greater company. Every other Tuesday, this team, and the other teams across the company, have a video conference where each team gets a period of time to show off their work. I’ve noticed that the demo is one of the first things that agile teams drop from the standard Scrum ritual list and I’m glad to see the practice alive and well here.
After release, the team also monitors their applications. To do this, they use services from New Relic. It provides the team with a real-time view into what their customers are experiencing. The information presented was impressive and looked very nice. If you can get away with a third party monitoring your application performance, it’s certainly something to look into.
While the team certainly seems happy and successful, they are continually looking for ways to improve. They posted a list of some challenge areas right on the wall. One is the challenge of distributed team members. They recently hired a remote programmer and while they do store story information and progress electronically, that individual doesn’t get the benefit of the environment and the physical board and charts the team uses. They’ve been experimenting with a controllable camera, but it appears the jury is still out on that. I suggested they check out the work of Joe Moore from Pivotal Labs who writes a lot about tools and techniques for integrating remote team members.
I certainly would like to thank Andrew Fuqua (@andrewmfuqua) and Claire and the rest of the Daxko team for their openness and willingness to share their processes and ideas. I wish that I had been able to stay longer and discuss more things…perhaps next time.
My company has a Director of Innovation. Joe is, predictably, a pretty inspiring guy and often sends out emails with interesting thoughts and ideas to spur innovation. Recently he sent out a message talking about the benefits of Failing Fast. In it he referenced the Gossamer Condor, the first human-powered aircraft capable of flying in a controlled fashion for more than a mile. Paul MacCready won the contest in 1977, 20 years after the contest began. Why did it take so long? Well, the biggest challenge was that failure was too costly. The designs were often good, but cycles of assembling the materials, testing, repairing and retrying could take as long as a year. As the attempts wore on, it became more and more difficult to get them financed.
MacCready took a different approach. He knew that no matter how good his designs were, the likely result would be failure, just like all the others before him. So he expected failure and built his prototypes with that in mind. He used common, cheap materials that were easy to work with and easy to repair and replace. As a result he could test new designs in a matter of weeks rather than years.
So how do we apply fail-first thinking in software development? Here are some approaches, techniques and models born of this philosophy that can have effects at all levels of your organization:
Test Driven Development (TDD) –
This is an approach used by programmers at the lowest level of software development. The idea here is you write a test that will initially fail because the code hasn’t been written yet. Then you write code until it passes. Certainly this helps you fail fast, in fact, it encourages you to fail first! The true power, though, is in the freedom it gives developers to try things they otherwise wouldn’t for fear of failure. With a battery of tests already created, the programmer is free to experiment and try out more innovative designs because they know they have that safety net. Failures are reported instantaneously, as are successes.
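Here is the red-green cycle in miniature. This is a hypothetical sketch (the `slugify` function and its tests are invented for illustration, not from any team mentioned here): the tests are written first and fail because the function doesn’t exist yet; then just enough code is written to make them pass.

```python
import unittest

# TDD in miniature: these tests were (conceptually) written first and
# failed, because slugify did not exist yet. The function below is the
# "just enough code" written afterward to make them pass.
# (slugify is a hypothetical example, invented for this sketch.)

def slugify(title):
    """Turn a post title into a lowercase, hyphenated URL slug."""
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Fail Fast  "), "fail-fast")

# Run the suite: red before the function existed, green afterward.
unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify))
```

With this safety net in place, the programmer can rewrite the body of `slugify` any way they like; a rerun of the suite reports failure or success instantly.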
Executable Specifications –
With this approach, analysts and designers are called upon to work with programmers and testers to create requirements written with examples that can be hooked to the code. When the code does what the business wants, these examples pass. If it doesn’t, they won’t. Now that’s fast failure. In the past, such problems (programmers and testers misunderstanding requirements) might not have been caught until much later in the process. Of course, all this presumes the design or feature was correct.
If you’ve been doing this all along, a new feature might be proposed; the tests are written, the feature is coded and the tests pass – but an older test fails. Just as in TDD, you now have a safety net of “living” documentation. Before, you had to pore through the requirements documents to find the inconsistencies; now all you need to do is run them. Additionally, if done properly, these specifications become the documentation. There’s no worry about them becoming out of date or inaccurate with respect to the code because they are directly tied to the code.
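As a sketch of the idea: a business rule stated as examples can execute directly against the code. Teams usually hook plain-language examples to code with a tool like FitNesse or Cucumber; plain assertions show the principle just as well. The shipping rule, names, and numbers below are invented for illustration:

```python
# Hypothetical business rule, written as executable examples:
# "Orders over $100 ship free; smaller orders pay a $5.99 flat rate."
FREE_SHIPPING_THRESHOLD = 100.00
FLAT_RATE = 5.99

def shipping_cost(order_total):
    return 0.0 if order_total > FREE_SHIPPING_THRESHOLD else FLAT_RATE

# Each example is a Given/When/Then scenario tied directly to the code.
def test_large_orders_ship_free():
    # Given an order over $100, when we price shipping, then it is free.
    assert shipping_cost(150.00) == 0.0

def test_small_orders_pay_flat_rate():
    # Given a $25 order, when we price shipping, then it costs $5.99.
    assert shipping_cost(25.00) == 5.99

test_large_orders_ship_free()
test_small_orders_pay_flat_rate()
```

If the business later changes the threshold, the examples fail immediately – the specification and the code can’t drift apart, because one runs against the other.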
Short Iterations –
This is a tenet of Agile methodologies. Teams complete releasable units of prioritized work in a short and predictable amount of time. If we need to change course because of some unexpected event, we don’t have to scrap the foundations of work that won’t be complete for months; we only have to scrap, at most, the amount of work related to the length of the iteration (or, for you Kanban folks, the cycle-time for the largest unit of work in process).
Small, more frequent releases –
In some situations, there are restrictions on how frequent and how small we can make our releases due to overhead (training, deployment, regulatory testing) but we should be working on making these as small as possible. Just like with iterations, if we make the wrong call on a feature, we can react faster – instead of 6 months to a year to implement a change, we can potentially do it in an iteration’s time, with no disruption to the business as would be the case with emergency hotfixes/patches, etc.
Business level –
Feature flow and prioritization –
Too often businesses decide on goals unique to each business unit and then those units create projects independently to meet those goals. Often these projects compete with each other for resources and capital when only parts of them are actually directed at the business goals. We shouldn’t be focused on the projects, but rather the features in those projects that are most important. Models exist that show how to link these goals to prioritized features. The business units are now not concerned with independent projects, but rather they are working on those aspects of the features that they have the capability to produce. The business units can then speak in terms of continuous delivery of features within products rather than entire projects released all at once. This keeps us focused on what we believe are the most important things given the best information we have at that time. If it’s wrong, we can adjust our backlog of features based on the new priorities. The alternative is to release updates based on information from a year or more ago and to cancel projects with nothing to show for them.
I figured it might be fun to continue this one-sided discussion about Productivity, so here we go! For lots of managers, there’s a holy grail – the idea of 100% efficiency – which, as we all know from the 2nd Law of Thermodynamics, is impossible! (What? That’s just for heat engines? Oh.) No, the real reason it’s impossible is because it’s unsustainable. At some point, variation will enter the system and disrupt that 100% efficiency. If you’ve ever done the “airplane game” at some agile training workshop or other, you’ll know the concept. If not, think of a traffic jam. If a highway is full of cars, adding more cars won’t speed up the traffic; in fact, it’s the opposite. Adding more cars will slow it down even more.
For example, take a machine that can stamp out 60 widgets in a minute. It takes three inputs that must be entered into the system simultaneously. If any one of those inputs varies by a little bit, the machine will fail and the new inputs will start piling up behind. And then you have to stop the line, and fix whatever the problem is. Maybe it’s a quick fix, seconds even, but you’ve messed up the efficiency and you’ve created defects.
The fix is to introduce buffers into the system. By slowing the system down, problems caused by normal levels of variance will not reduce the efficiency of the system. This is kind of the same idea that created those ramp meters at the highway on-ramps.
So how does this relate to Software Development? If we try to complete more work in a given timebox than the team is capable of completing, the effort expended in trying to complete that work will actually make the team less productive, not more. Putting an amount of work that is lower than (imagined) capacity will increase the likelihood that it all gets done regardless of normal levels of variance, creating a more predictable and stable cadence. This will be more efficient than stuffing the iteration with as much work as possible. This is why Kanban has WIP limits. It creates that buffer to help ward off the effects of variance in the system.
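The arithmetic behind this buffering effect is Little’s Law, which relates work in process to cycle time: on average, cycle time = WIP / throughput. A small sketch (the numbers are invented for illustration):

```python
# Little's Law: average cycle time = WIP / throughput.
# Piling more work onto the board doesn't make a team faster;
# it makes every individual item take longer to finish.

def average_cycle_time(wip, throughput_per_day):
    """Average days each item spends in process, per Little's Law."""
    return wip / throughput_per_day

# A team that finishes 2 stories/day with 6 items in progress:
print(average_cycle_time(6, 2))   # 3.0 days per item

# The same team with 12 items in progress (same throughput):
print(average_cycle_time(12, 2))  # 6.0 days per item
```

Same throughput, double the WIP, double the wait for every item – which is exactly the queue a WIP limit is there to prevent.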
I hold a weekly meeting to discuss all things quality and iterative development. This week, we tried to define Productivity. We came to the conclusion that the way we want to measure productivity in the software world is different than in the manufacturing world. People aren’t machines or even assembly lines. Simply being “up and running” doesn’t mean we are being productive. One common example is the manager walking around to see if people are surfing the Internet instead of being “productive.” Someone shared an anecdote of a dishwasher that finished early, but rather than incur the wrath of the boss for sitting around doing nothing decided to dirty up the dishes and wash them again. That’s not the kind of behavior we want to encourage.
We discussed why using # of Story Points/Ideal Days (i.e. Velocity) to measure productivity is a bad idea:
- It’s not fungible, i.e. points aren’t transferable between teams or across projects. So if team A has a 20 point velocity and team B has a 40 point velocity, it doesn’t mean B is twice as productive as A. In fact, it doesn’t even mean they are *more* productive.
- Once you start rating a team on productivity based on points, guess what? The team will start showing a dramatic increase in Velocity!
So what’s a better measure? You could simply count the number of actual stories you output over time. That could work, but it’s also susceptible to gaming. (Although, something that encourages lowering story size is probably a good feedback loop to employ!) Cycle time is also a metric you might use. This measures the time from when a story gets into the iteration to when it is marked Done. If your average cycle time is longer than a normal iteration, well, that’s an indication of low productivity!
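Computing the cycle-time metric is straightforward once you record when each story starts and finishes. The story data below is made up for illustration:

```python
from datetime import date

# Invented story records: when work started and when it was marked Done.
stories = [
    {"id": "S-1", "started": date(2013, 4, 1), "done": date(2013, 4, 5)},
    {"id": "S-2", "started": date(2013, 4, 2), "done": date(2013, 4, 10)},
    {"id": "S-3", "started": date(2013, 4, 3), "done": date(2013, 4, 9)},
]

def cycle_time_days(story):
    """Days from starting work on a story to marking it Done."""
    return (story["done"] - story["started"]).days

times = [cycle_time_days(s) for s in stories]
avg = sum(times) / len(times)
print(times)  # [4, 8, 6]
print(avg)    # 6.0
# Against a two-week iteration, an average cycle time near or above the
# iteration length is the warning sign described above.
```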
None of these are perfect. That’s why it’s also a bad idea to rely on one metric. Use a combination of tools to focus on areas of concern and improve them. If velocity is going up, but cycle time is staying the same, there may be some problems with story estimation – maybe one voice has started to dominate the process. If output is up, but velocity is down, there may be a problem with prioritization (we’re only working on easy stories).
We also discussed Throughput as a concept, being the result of Sales – Cost of Raw Materials and then Productivity = Throughput/Operating Costs. This comes from Throughput Accounting and it isn’t often used in a software context. I think it’s fine to couch Productivity in terms of $; it’s certainly fungible. However, again, software generally isn’t sold in units that can be created many times a day! You’ll need to figure out productivity during the project in some fashion before it even gets put into active use. Still, this is all the more reason to have small features that can be put into use as soon as possible.
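Worked out with invented figures, the Throughput Accounting definitions above look like this:

```python
# Throughput Accounting, per the definitions above:
#   Throughput   = Sales - Cost of Raw Materials
#   Productivity = Throughput / Operating Costs
# All figures below are invented for illustration.
sales = 500_000.00
raw_materials = 50_000.00
operating_costs = 300_000.00

throughput = sales - raw_materials            # 450000.0
productivity = throughput / operating_costs   # 1.5
print(throughput, productivity)
```

A productivity above 1 means the value flowing out exceeds the cost of running the system – but as noted above, software you haven’t shipped yet generates no sales, so small, early-released features are what make this measurable at all.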
Through the wonderful Agile Atlanta, I got to visit another company’s location this week – Allure Global. They make dynamic signage for movie theaters – concessions and ticketing, etc. They are an Extreme Programming shop, which means they use the XP practices like TDD, collective code ownership, and others. They have two week iterations. They code in Java and use Jira as their project management tool. I took a couple of pictures.
This first picture shows the workspace for one of their delivery teams.
Note the monitors side by side at each station. That’s because they do pair programming, another of the XP practices. When they come in to work, the team members pair up and start working on the day’s tasks. They don’t keep the same pairs, and anyone can sit at any workstation – all of the code is shared and can be worked on by anyone. Checkins are signed by both members of the pair.
The second picture shows their dashboard and the status of their continuous integration.
They use Jenkins to manage the automatic build, test and deployment of code. The bottom screen is green because the build for that evening was successful. You can also see the number of tests (or checks) that were run. Those are the tests that run as part of the build and they run FAST. All twenty-some-thousand of those tests run in a matter of seconds. These tests use mocks and other test doubles, which allow them to run fast but also to check only the specific code addressed. Having a fast-running, comprehensive suite like that makes it so much easier to refactor and redesign because you’ll get immediate feedback if you’ve broken anything else in the process. It’s a great feeling!
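As a sketch of why test doubles keep such a suite fast: the mock stands in for a slow collaborator, so the test exercises only the logic under test. (This team codes in Java; the Python below, and the `PriceService`-style collaborator and `discounted_price` function in it, are hypothetical examples of the technique, not their code.)

```python
from unittest.mock import Mock

# Hypothetical: a price service that would normally make a remote call.
# The mock replaces it, so this test runs in microseconds and fails
# only if the discount logic itself is broken.

def discounted_price(price_service, sku, percent_off):
    """Look up a base price and apply a percentage discount."""
    base = price_service.get_price(sku)
    return round(base * (1 - percent_off / 100), 2)

def test_discount_uses_base_price():
    service = Mock()
    service.get_price.return_value = 10.00  # canned answer, no network
    assert discounted_price(service, "SKU-1", 25) == 7.50
    # Also verify the collaboration: exactly one lookup, right SKU.
    service.get_price.assert_called_once_with("SKU-1")

test_discount_uses_base_price()
```

Thousands of tests built this way touch no network, database, or filesystem, which is how a whole suite can finish in seconds.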
The top screen shows the status of the automated functional and/or Acceptance tests – see quadrant two of the testing quadrants! These tests are more integrated and (as a result) slower; they work at the story/feature level and are of more interest to the customer/product owner.
The top screen also rotates to other information radiators important to the team, like a burndown chart.
Much thanks to the friendly folks at Allure Global for sharing their environment and processes with me!
Hey Mr. ManageToTest, do you happen to have any standards or guides around Quality Control Test Plans? We have a few different QC groups doing things in slightly different ways – no formal documentation on which way is the “best” way, or best practices around how to write a test plan.
First off, there’s no “best” way to write a test plan. Depending on the context, there will be more effective methods. Also, you probably need to define what you mean by a test plan, and what it is you want to accomplish with it as the definitions and needs vary between contexts.
That being said, I can provide some examples of test planning based on my experience:
One common approach is the formal, comprehensive test plan document. I find this method to be wasteful and often redundant. Such plans are written and then never referred to again, and as projects change, these documents are rarely updated, so they present an inaccurate record of what was actually done.
A search on your company’s document repository will probably yield several examples of such a document or template, so if you really want to use this style you probably already have it in practice somewhere in your organization.
In many cases, a checklist of capabilities and areas to test is appropriate. This is an excellent way to lay the groundwork for a testing session or the creation of a set of test cases. It is complementary to discussions of user stories when eliciting acceptance tests, and to delivery story discussions when eliciting tasks and estimating work. Here’s a great article from James Whittaker (Director of Testing at Google) where he describes a method of creating and organizing such a plan.
Another method is Session Based Test Management (SBTM), created by James Bach. A Charter is written that describes the goal of the testing session. A list of Charters defines the overall plan for the testing effort (and provides some predictive value based on the number of charters). Note that this method is a hybrid of a test plan and test cases. The act of engaging in the testing session produces the test cases on-the-fly, resulting in a record of what was actually tested. The Charters completed represent your test coverage.
Any one of these ways may be effective in a particular context, or perhaps a combination of things from each of them would suit better. And there are certainly other ways of generating and presenting test plans. Again, I would stress that no method is the “best” across all contexts.