
Thursday, September 29, 2011

BOOK CLUB: How to Reduce the Cost of Software Testing (2/21)

For almost a year now, those who follow this blog have heard me talk about *THE BOOK*. When would it be ready? When would it be available? Who worked on it? This book is special, in that it is an anthology. Each essay could be read by itself, or it could be read in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, and I'm excited about the premise and the way it all came together. But outside of all that... what does the book say?

Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full synopsis of each chapter in depth (hey, that's what the book is for ;) ), but I will give my thoughts as they relate to each chapter and area. Each individual chapter will be given its own space and entry. Today's entry deals with Chapter 1.

Chapter 1: Is This the Right Question? by Matt Heusser

There's no question that the simplest and easiest way to limit the costs of testing is simply to not do it. Problem solved. Only it's not solved, because believe it or not, problems will still exist. So Matt asks us up front, when we talk about reducing costs, are we really talking about cost reduction... or are we really asking "How can we increase the VALUE of software testing?"

There are lots of ways to cut costs, not just in software testing, but in every part of the organization. Benefits are expensive; so are salaries. Cut those and you save *lots* of money... of course, you are also likely to lose your best people, too. So that's a false economy. We could break down work into very simple, repeatable steps. This has the benefit of wringing the best "value" out of the costs needed. Once again, though, it's a false economy, for two reasons; one of them I think Matt touches on, but I'll add my own take on it. First, there is no way in software to make a true "factory or millwork" comparison. Software is not a widget. What I mean is that we do not create a single component like you would at a factory to make a cylinder casing for an engine. So breaking everything down into simple components won't work in this fashion, because there are so many permutations and variables that it's impossible to cover them all. Second, to borrow from Seth Godin's "Linchpin", if you could structure all of the work in this manner, then anyone could do it, and anyone could be plugged in and pulled out. It's a race to the bottom. To put it more succinctly, it could be a factory job... but do you really want that?

So the short answer is, we are not asking the right question if we are just asking "how do we reduce the costs of software testing?" It's an important question. It's just not the only question. Value has to be considered. The biggest problem with "value" is that it's really fuzzy. It's very subjective, and there's no magic number that says "OK, now that's VALUE!" Think of the things that generate "value" in an organization. The classic example is training. Is it valuable? How can you tell? How much is enough? At what point is there a law of diminishing returns? Is there such a thing as too much training? We would instinctively say "well, of course there isn't", but how can you truly quantify that?


In the software testing world, we look at "test cases" as a solidly quantifiable metric. The more test cases you have, the better your testing will be, right? Not so fast. I could tell you that my automated test routine has 1,000 test cases. Wow, 1,000 cases. That's a lot. But do those 1,000 test cases actually mean I am doing better testing just by having them? Of course not; you don't have any context into why or what I'm testing. That's why my saying that 995 out of 1,000 test cases passed sounds great, until I tell you that one of the failures is related to the fact that the app can't send emails. That can be a catastrophic failure if you're testing a CRM system, but you won't know that from my just quoting you a number.

So how can we use these value ideas to help steer the conversation when those in the driver's seat are all about controlling costs?

The key is that we need to be able to do a number of things at important times. Writing and using tests as examples of the requirements, to help make sure the requirements are clear, is the first step. Finding the most important issues early and quickly is also important. So is giving good and timely information to the development and management teams so that important decisions can be made.

A few days ago, a developer I worked with at a previous company wrote to me and mentioned something I told him while I was testing with him a few years back. He had asked me why I was able to get so many bug reports early in the process. I told him that one of my "principal weapons" in the battle of software testing came from James Whittaker (who may have taken it from somewhere else, I don't really know), but it's one that I found to be one of the most valuable first salvos on an application: look for every error message in the code, and do what you can to make each of those error messages appear at least once. Those familiar with the book "How to Break Software" will recognize this as "Attack #1". The message I got back from this developer was that that tip alone, on his most recent project, helped eliminate about 50% of the bugs the project would otherwise have had by that point. I thought it was cool of him to write and tell me about that. Point being, that's a simple, "big bang for your buck" early testing strategy and technique that you can use starting right now :).
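As a rough illustration of how you might prep that attack (the sample source and the quoted-string regex here are invented for this sketch, not taken from the book), a few lines of Ruby can harvest an application's error messages into a "provoke each at least once" checklist:

```ruby
# Hypothetical sketch: scan source text for quoted error messages and print
# them as a checklist of conditions to deliberately trigger during testing.
sample_source = <<~SRC
  raise ArgumentError, "email address is required"
  raise "connection to mail server failed"
  errors.add(:base, "attachment too large")
SRC

# Naive heuristic: any double-quoted string is a candidate error message.
checklist = sample_source.scan(/"([^"]+)"/).flatten

checklist.each_with_index do |msg, i|
  puts format("[ ] %d. try to trigger: %s", i + 1, msg)
end
```

In practice you would point the scan at real source files and tune the pattern, but even a crude list like this gives an early-testing session a concrete agenda.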


The ability to provide good information so that the development or executive team can make a well informed decision is really the #1 thing that testers provide, at least in my opinion. If we have any chance of really making an impact, and a dramatic one, that's where testers can make the greatest substantive changes and add tremendous value. From test reports, to meeting status, and to ship/no-ship decisions, the tester has a unique role and responsibility. To borrow from Jon Bach, testers have more kinship with journalists than with any other profession. Therefore, the "story" or narrative of the project and its fitness is one of the key deliverables of the test team. How well does your team tell its story? The story? Do you approach your testing with the intensity of a beat reporter? If not, you may want to consider it.

Finally, to raise the value and reduce the costs, one of the best ways to help the process is to eliminate waste wherever possible. There are areas beyond our control (status meetings, email, etc. may be a mandatory part of the jobs we do), but there are ways to get more bang for the buck in what we do. One great way is to approach testing from a session-based model. Instead of saying "I tested this functionality", show that you have completed "x" number of testing sessions (of focused time) associated with a key piece of functionality... and tell your story.
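A minimal sketch of that reporting shift (the charter names and log format below are made up for illustration): instead of "I tested checkout", the report says how many focused sessions went against each charter.

```ruby
# Hypothetical session log: one entry per timeboxed, charter-focused session.
log = [
  { charter: "checkout flow",  minutes: 90 },
  { charter: "checkout flow",  minutes: 90 },
  { charter: "search results", minutes: 60 },
]

# Tally sessions and focused minutes per charter.
summary = log.group_by { |s| s[:charter] }.transform_values do |sessions|
  { sessions: sessions.size, minutes: sessions.sum { |s| s[:minutes] } }
end

summary.each do |charter, tally|
  puts "#{charter}: #{tally[:sessions]} session(s), #{tally[:minutes]} focused minutes"
end
```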

Next installment will cover chapter 2.

Friday, October 14, 2011

BOOK CLUB: How to Reduce the Cost of Software Testing (16/21)

For almost a year now, those who follow this blog have heard me talk about *THE BOOK*. When would it be ready? When would it be available? Who worked on it? This book is special, in that it is an anthology. Each essay could be read by itself, or it could be read in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, and I'm excited about the premise and the way it all came together. But outside of all that... what does the book say?

Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full synopsis of each chapter in depth (hey, that's what the book is for ;) ), but I will give my thoughts as they relate to each chapter and area. Each individual chapter will be given its own space and entry.

We are now into Section 3, which is sub-titled "How Do We Do It?". As you might guess, the book's topic mix makes a change yet again. We have defined the problem. We've discussed what we can do about it. Now let's get into the nuts and bolts of things we can do, here and now. This part covers Chapter 15.

Chapter 15: Clean Test: Suggestions for Reducing Costs by Increasing Test Craftsmanship by Curtis Stuehrenberg

Curtis opens up this chapter with the idea that we can learn a thing or two (or more) from the Software Craftsmanship movement. Software craftsmanship is the idea of an experienced coder developing their skills and techniques over years of study and practice. Curtis expands this description by saying:

“A software craftsman is a skilled artisan apparently able to balance immediate pragmatism with a longer term focus on reducing the amount of work they or (more likely someone else) must do tomorrow. The craftsman coder knows a small investment in time today can save days or weeks later on, but more importantly they’ve developed a sense of which small investments will reap the greatest rewards.”

Software Craftsmanship emphasizes:
  • Not only working software, but also well-crafted software
  • Not only responding to change, but also steadily adding value
  • Not only individuals and interactions, but also a community of professionals
  • Not only customer collaboration, but also productive partnerships
That is, in pursuit of the items on the left we have found the items on the right to be indispensable.

Notice that the above highlighted areas are not specific to software developers. These ideas and ideals apply just as much to testers as they do to developers. Having to fix someone else’s code, or even your own, over and over again provided the inspiration and growth for the “Clean Code” movement. The idea of Red-Green-Refactor is part of the ideal of “always leave the campground cleaner than you found it.” The ideas behind Software Craftsmanship are appealing to many developers. They are an underpinning of the Agile movement. Organizations the world over are trying these ideas out and making them a core part of their work. Yet where are the testers in this paradigm shift?

Craftsmanship should matter to testers every bit as much as it does to developers. There are numerous benefits to well-crafted tests: less debugging, less maintenance, and less rewriting. The function of a software tester is to communicate the experienced behavior of software compared against its expected behavior at a specific point in time.

That’s it.

The tester’s primary role is communicating. The information provided by software testing takes the form of reports showing the behavior of a product compared to its expected behavior at the time it’s observed. This information is then used to inform decisions about the management and budgeting of the project, and ultimately helps the decision-making process on whether or not to release a product to the customer. A good test plan provides information on what the team thinks is valuable and what constitutes a risk to a project.
Test cases are a conversation the test team has with itself. The danger is that we often fall into a form of shorthand to describe what we do, and create test cases that satisfy documentation standards rather than realistic testing needs. Bad test cases often come from being rushed or from needing to fill in details for documentation or compliance purposes, and bad test cases don’t just fade away. In most cases, they have to be deliberately removed or reworked.

We can look at the cost of performing an activity and the cost of not performing an activity. This is referred to as “Activity-Based Costing”. Ultimately, there’s just one cost: the cost of the entire system. Activity-based costing integrates several procedures (value analysis, process analysis, quality management, and costing) into one analysis, and includes both “active costs” and “passive costs” in the “total cost” of a product. Managing the whole value of software means we focus on the entire lifespan of a product or company.

We move beyond the individual release and look long term to the life of the product (or the company itself). Software craftsmanship advocates designing and writing features that are easy to understand, support, adapt, and use both now and later when someone else needs to. Test craftsmanship is the art of doing the same for test cases and testing properties. Historically, testers have been at the end of a project, and usually under tight time constraints. For that reason, the idea of test craftsmanship often takes a back seat to the pressing need to just get stuff done and get it done fast! The net result, of course, is bad and poorly crafted tests.

Good software tests, when implemented, help reduce the various activity costs associated with testing. Good software tests aid communication between team members, help new testers get involved quickly, and have many opportunities for reuse. They do of course take time and analysis to make.

Ultimately, testing craftsmanship can be summed up in the following list (for detailed analysis of each, hey, read the book ;) ).

1. Plan the appropriate testing for the task.
2. Know your audience and keep them in mind at all times.
3. Don’t rely on someone else to clean up after you … even yourself.
4. Refactoring and redesigning are not tools for excusing halfway effort.
5. There will be no second chances to get it right due to the escalating technical debt you’re currently compounding.

Bad test case craftsmanship and bad code craftsmanship are probably the two most unnecessary costs incurred as part of doing business by software companies today. The software craftsmanship movement is taking on the developers in this regard. Testers, how about we pick up the charge and agree to be held to the same standard?

Thursday, December 22, 2011

Into the Blue Again...

It's amazing to think that 2011 is almost over, and yes, while last year I lamented writing the obligatory "year that was" letters and somewhat lampooned them with my post last year titled "Well, How Did I Get Here?", that post resonated with many people. It is to date my most read and my most commented on article here on TESTHEAD, depending on which metrics you believe.  Based on the response to that post, I decided this year to just let it be known, this is a recap of the World of TESTHEAD, and the world of "Michael Larsen, Tester" for the year of 2011. The title this year is indeed, again, in homage to the seminal 1980 Talking Heads classic "Once in a Lifetime".


2011 was a year of transition for me personally. I took many leaps of faith this year, and as the title says, I willingly jumped into new areas and new responsibilities. Early in the year, I ended my employment with Tracker Corp, bringing to an end six years of learning, camaraderie and a focus on the .NET world of software development and testing. In exchange, I came to Sidereel, and a world of learning, camaraderie and Rails software development. This is telling, because I'd never worked with Rails before, and my involvement with Ruby prior had been from recommendations from co-workers that it would be fun to learn. Well, now it was more than "fun to learn", it was an occupational hazard (and necessity :) ).

With that, I started mapping out and learning a new site, a new programming language, a new model, a new way of storing data, and a very different approach to developing software. I was no longer just a tester; I was to integrate with a fully Agile development team and work with and alongside them. Oh, and I traded in a daily diet of Windows and PCs for a daily diet of Mac OS X and Darwin UNIX, all sleekly wrapped in a MacBook Pro. Oh UNIX, how I have missed you!!! There was just something comforting about leaving behind the world of MSI and EXE files and embracing tools such as RubyGems, Homebrew, and other options for installing software. Scriptable, customizable, and a place where Test Driven Development and Continuous Integration were not obscure buzzwords but actual practices that were, well, practiced! It's also been telling, humbling, and intriguing to learn about and use tools like Ruby, RSpec, Cucumber, Capybara, Selenium WebDriver, and other approaches to automating testing. I can safely say I have written more code this year than in the 17 years prior!

2011 also saw the process of Weekend Testers Americas come into its own. What started as a few experimental and jerky sessions got smoother, cleaner, and better understood, and we had some great successes during the year. While I'm not sure how much others have learned, I know that I learned a great deal from this process. What was great to see was that this initiative was embraced by people all over the world, and our participants reflected this fact, including testers from India who would come into our sessions at 12:30 AM (yes, after midnight) to participate. First off, that's dedication, and my hat's off to everyone who did that, but more to the point, it spoke volumes about the service we were offering and the fact that people wanted to come in and participate, even at those insane hours. We had some help from some heavy hitters, too. Michael Bolton and James Bach both came in to guest host some of our sessions ("Domain Testing" and "Creating Testing Charters"), and Jonathan Bach helped me craft one of my breakaway favorite test ideas of this year, that of "Testing Vacations". In all, it was a banner year for Weekend Testing Americas, and I am so thankful for all of the participants who helped make it possible. I'm especially thankful for Albert Gareev, who, in addition to being a regular participant, stepped up to become my partner in crime for this enterprise, frequently helping me develop new ideas or take the process in different directions than I probably would have had I been left to my own devices.

2011 was a year of meeting and developing relationships with other testers. In January, I met Matt Heusser in person for the first time. As many of you know, one of my most involved and enduring professional relationships was (and continues to be) with Matt. I produce the "This Week in Software Testing" podcast with him. I helped write a chapter for a book he was the principal editor for (more on that in a bit). I was also a sounding board for other ideas and offered several of my own in return. I had the chance to meet my fellow Weekend Testing compatriots Marlena Compton, Markus Gaertner, and Ajay Balamurugadas in various places. Marlena and I had the pleasure of live blogging the entirety of the Selenium Conference from San Francisco, with our comments getting us branded the "Table of Trouble" by the other participants. That was a fun memory, and it helped set the stage for liveblogging other events throughout the year. Getting the chance to meet so many testers this year in various capacities was a real highlight.

2011 also saw my commitment to being published. I made a decision that I wanted to write beyond the scope of TESTHEAD. As will probably come as no surprise, my first few articles were Weekend Testing based. However, I had the opportunity to venture into other topics as well, including two cover stories for ST&QA magazine: one my article about "Being the Lone Tester", the other an excerpt of my chapter from "How to Reduce the Cost of Software Testing". Speaking of that, 2011 saw me and 20 other authors get our names in print and become book authors. It was a pleasure to have the chance to write a chapter for "How to Reduce the Cost of Software Testing". A later development, one I literally just got word about and accepted, is a potential new book that discusses "The Best Writing in Software Testing". I have agreed to be a junior editor for this project, and we are aiming for a 2012 release of this title. In addition, I also published articles with sites like Techwell, the Testing Planet, and Tea Time With Testers. As of now, I have eleven articles that have been published external to TESTHEAD, and it is my hope that I'll be able to write more in the coming years.

In 2010, I attended my first testing conference. I made the commitment then that 2011 would be the year I would present at one, and I received my opportunity to do exactly that. My first-ever conference presentation was just 20 minutes, at CAST 2011. I presented in the "Emerging Topics" track and discussed Stages of Team Development lessons I had learned from Scouting, and how they could apply to testers. All in all, it went well, and even today I still hear from people who said they appreciated the topic and liked my presentation. In addition, I gave another full track session at CAST called Weekend Testing 3-D, where not only did I discuss how to facilitate Weekend Testing style sessions, we actually held a live session with participants from all over the world and processed it in real time (this was the earlier-mentioned "Testing Vacations" session that Jonathan Bach helped me develop). I also proposed a track talk and paper for the Pacific Northwest Software Quality Conference titled "Delivering Quality One Weekend at a Time: Lessons Learned in Weekend Testing" and, after writing the paper and having it reviewed several times, received the nod to present it. However, fate struck, and I broke both bones in my lower leg (tibia and fibula), preventing me from delivering the talk (the organizers of PNSQC, however, still included my paper in the proceedings). Additionally, a friend who felt bad that I couldn't present at PNSQC forwarded my paper to Lee Copeland, the organizer of the STAR conferences. Lee liked the paper and asked if I'd be willing to present it at STAREast in April 2012. I of course said YES! So I will get my chance to present this paper yet :)!

There is no question that I learned a great deal from the TWiST podcast, both as a producer and as an active listener, but 2011 will be even more memorable in that I graduated from editing the show and appearing as an occasional guest to being one of a handful of rotating regular contributors on the mic. It's been interesting to have people email me and say "hey, I heard your interview last week, that was a great show and a great topic, thanks for your comments and explanations". I thought it was especially cool when someone said they felt I'd make a great game show host (LOL!).

2011 saw my continued focus on working with the Miagi-do School of Software Testing. At CAST 2011, a number of us Miagi-do Ka, including Markus Gaertner, Ajay Balamurugadas, and Elena Hauser worked along with Matt Heusser at the CAST testing challenge. During that competition, I had the chance to show Matt and the other testers there what I was able to do, and due to that experience, Ajay, Elena and I were awarded our Black Belts. While the experience itself was great, it also came with the expectation that I be willing to mentor and teach other testers, an opportunity that I have gladly taken on and look forward to doing more of in 2012.

One of my most active projects for 2011 was helping to teach the Black Box Software Testing courses for the Association for Software Testing. I had the opportunity this year to instruct, as either an Assistant or a Lead Instructor, all three courses offered in the BBST series (Foundations, Bug Advocacy, and the pilot program for Test Design, which we just completed on December 10th). It was in this capacity that I was nominated to run for the Board of Directors of the Association for Software Testing. I never envisioned myself being a Director of anything, much less of an international software testing organization! Still, someone in the organization felt I deserved a shot, and nominated me. What's more, someone else seconded it. Even more amazingly, a lot of people (perhaps many of you readers) thought I'd be a good fit for the position as well, since I was indeed elected to serve on the board. My two-year term began in October. While daunting, it is also exciting to think that I may actually help shape the future of this organization in the coming years, and help represent my fellow testers. Believe me, it's not something I take lightly.

Quite possibly the biggest "Into the Blue Again" moment of the year, though, happened at our first AST board meeting in October. It was at that meeting that Cem Kaner and Becky Fiedler announced their desire to have someone take over as Chair of the Education Special Interest Group. While a part of me felt wholly inadequate for the task, another part of me felt that this was something essential, that it needed someone to spearhead it so that the education opportunities within the organization could be championed and further developed, while allowing Cem and Becky the opportunity to do what they really wanted to do: develop more and better testing courses. With that, I offered to chair the Education Special Interest Group. I'm not sure which was more surprising, the fact that I offered, or that the rest of the board took me up on it! Two years ago, Cem Kaner was a man whose books I had read and whose presence loomed large as a "testing guru" on high. The thought that I would ever meet him seemed remote. The thought that I'd actually take over for him and spearhead an initiative he championed never even crossed my mind!!! Still, that's what has happened, and I guess 2012 and beyond will tell us what I actually did with it. I'm hoping, and working, to do all I can to prove worthy and up to the task.

2011 was, really, a year where I took leaps of faith, lots of them, and discovered that I could do even more than I ever imagined. I've shared many of those journeys in TESTHEAD posts, and I thank each and every one of you who actively read this blog for your help in motivating me to take these leaps of faith. It's been another banner year for me, both in learning and in opportunities. Overall, the experiences of the past year have given me confirmation that, were I to jump "Into the Blue Again", it would be a great chance to learn and grow, regardless of whether or not the outcome was necessarily successful, lucrative, or advantageous. Granted, most of them have been, and for those that haven't been, well, I'd like to think I failed quickly and early enough to learn from those experiences and correct my trajectory. Time will tell if that's true, of course. As in all things, there were many people who helped make 2011 a banner year for me.


Thanking a bunch of people is always fraught with danger, because invariably someone gets left out, and there have been hundreds of people who have been instrumental in making this a banner year for me. Still, there are many that stand out, so to that, my heartfelt thanks to Adam Yuret, Ajay Balamurugadas, Albert Gareev, Alex Forbes, Anne-Marie Charrett, Ashley Wilson, Becky Fiedler, Benjamin Yaroch, Bill Gilmore, Cem Kaner, James Bach, Janette Rovansek, Jason Huggins, Jon Bach, Lalitkumar Bhamare, Lee Copeland, Lynn McKee, Markus Gaertner, Marlena Compton, Matt Heusser, Orian Auld, Rick Baucom, Selena Delesie, Shmuel Gershon, Terri Moore, Thomas Ponnet, Timothy Coulter, Will Usher and Zach Larson. Thank you all for helping me make those leaps of faith. More to the point, thank you for having the faith in me that I'd be able to actually do what you believed I could do! Thank you for what has honestly been, at least as far as software testing is concerned, my greatest year (and remember, last year was pretty awesome, too. I didn't think I'd be able to top that!).

Here's to an every bit as exciting and fun-filled 2012. I'm looking forward to seeing where I might leap next :).

Monday, October 10, 2011

BOOK CLUB: How to Reduce the Cost of Software Testing (12/21)

For almost a year now, those who follow this blog have heard me talk about *THE BOOK*. When would it be ready? When would it be available? Who worked on it? This book is special, in that it is an anthology. Each essay could be read by itself, or it could be read in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, and I'm excited about the premise and the way it all came together. But outside of all that... what does the book say?

Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full synopsis of each chapter in depth (hey, that's what the book is for ;) ), but I will give my thoughts as they relate to each chapter and area. Each individual chapter will be given its own space and entry.

We are now into Section 2, which is sub-titled "What Should We Do?". As you might guess, the book's topic mix makes a change here. We're less talking about the real but sometimes hard to pin down notions of cost, value, economics, opportunity, time and cost factors. We have defined the problem. Now we are talking about what we can do about it. This part covers Chapter 11.

Chapter 11: You Can't Waste Money on a Defect That Isn't There by Petteri Lyytinen

Ideas such as technical debt and the need to be faster to market and get the product released sooner are always considerations we have to contend with. In some ways, these can fall anywhere on the spectrum from benign to truly dangerous. The amount of time available and the proximity to release tend to determine where on the spectrum your organization or project may fall. Make no mistake, though: technical debt, and chasing ways to cut corners as the release gets closer, happens. Focusing on immediate needs often causes the technical debt to grow, not shrink. While it's possible to enhance testing as a standalone process, why should the testers have all the fun? Petteri suggests that developers have a chance to contribute to decreasing the cost of software testing as well, by focusing on techniques like Test Driven Development, Continuous Integration, and Lean principles of software development.

Here's a heretical thought. Want to reduce the cost of software testing? Bring up the skill of your developers. More to the point, encourage your developers to develop software from the approach that Uncle Bob Martin refers to as "the Software Craftsmanship Movement". Central to that is the idea of Test Driven Development, and the circular process of developing software with tests in mind first: have the test fail first, then code to get the test to pass, then refactor and repeat the process.
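A minimal sketch of that red-green-refactor cycle in plain Ruby (the `titleize` method and its behavior are invented here purely for illustration):

```ruby
# RED: the test comes first. With no implementation yet, running this
# assertion would raise NoMethodError -- that's the failing test.
#
#   raise "fail" unless titleize("cost of testing") == "Cost Of Testing"

# GREEN: write just enough code to make the test pass.
def titleize(str)
  str.split(" ").map(&:capitalize).join(" ")
end

raise "red step not yet green" unless titleize("cost of testing") == "Cost Of Testing"

# REFACTOR: with the test as a safety net, clean up the implementation
# (here: bare split also collapses runs of whitespace) and re-run the test.
def titleize(str)
  str.split.map(&:capitalize).join(" ")
end

raise "refactor broke the test" unless titleize("cost  of testing") == "Cost Of Testing"
puts "all green"
```

The point of the toy: the assertion never changes while the implementation evolves, which is exactly the safety net the chapter is describing.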

It should be noted that TDD does not resolve all of the issues; there's still plenty to test. What TDD does do, however, is take many of the solidly boneheaded issues out of the picture. My work with SideReel is an example. Yes, there are occasional issues that I find, or details that may not be implemented exactly the way they should be, but I rarely come across a truly bone-headed omission or a really, truly "broken" implementation. The developers have mostly resolved those issues through actual TDD processes, so I can vouch for them being effective :).

Continuous integration (CI) is the process whereby all newly committed code gets immediately built and deployed to a server, and where new features and functionality are immediately tested. Along with TDD, this helps developers commit small- or large-scale changes and quickly see how their changes affect the rest of the environment and application. Coupled with TDD unit tests and a smoke test run from the testers' side, testers are freed up to focus on exploring the new changes and seeing if the changes have additional issues or if they are solid enough to be deployed. Another benefit of CI is that developers can quickly see where changes "broke the build" and can back them out, make fixes, and then resubmit/retest. This helps with the ever-present issue of finding issues late in the game. While that will never be completely eradicated, the odds of finding a problem that has never been tested or examined in conjunction with other components go way down. Still, even with these enhancements, testers must never get complacent and think their work is all done. It's not. As E.W. Dijkstra noted: "Program testing can be used to show the presence of bugs, but never to show their absence".
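The "broke the build" gate at the heart of CI can be sketched in a few lines. The check names and lambdas below are placeholders standing in for real build and test commands (e.g. a `rake test` run), not an actual CI configuration:

```ruby
# Sketch only: run every check, collect what failed, report loudly.
def run_checks(checks)
  checks.map { |name, check| [name, check.call] }
        .reject { |_name, passed| passed }
        .map(&:first)
end

checks = {
  "unit tests (the TDD suite)" => -> { true }, # stand-in for the unit run
  "smoke test"                 => -> { true }, # stand-in for a quick end-to-end poke
}

broken = run_checks(checks)
puts broken.empty? ? "build green" : "build BROKEN: #{broken.join(', ')}"
```

A real CI server adds triggering on commit, deployment, and notification, but the commit-to-verdict loop is this small at its core.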

The biggest benefit to using processes like TDD, CI, and automated smoke tests is the hope and goal of eliminating needless wasted time. As the development team's time and skills grow, downtime because of issues diminishes. It also helps to diminish the inevitable downtime between "bug discovery" and "re-test" with an updated module. Petteri suggests having the team sit closely together so that, when issues are discovered, entering them in a defect tracking system is not the bottleneck to a fix being made. Rather, lean over and say "hey, developer person, check out the issue I just found here!" While tracking issues is not in and of itself a bad thing, it can be if the workflow depends on alerting each member of the next step by changing states in the issue tracking system. Direct verbal updates are much faster. If immediate personal interaction is not possible, use instant messaging as the next best thing.

It's common to think that just having automated test scripts will solve all of the testing team's problems. They can help, up to a point, but as more and more tests roll in and need to be run, the quick and dirty smoke test often grows into a more extensive set of feature tests, and its completion time grows longer and longer (why yes, I have experience with this :) ). It's impossible to test everything. Even simple programs can have millions of paths through them, and testing all of those paths would be physically impossible, even with automation running day and night. Thus the goal is not comprehensive testing, but targeted and smart automated testing. Combinatorics (pairwise testing being a popular version of this, and a term known to many testers) can help trim down the number of cases so that the tester can focus on the ones that give the most coverage in the fewest steps.
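To illustrate the kind of reduction pairwise testing buys, here is a naive greedy sketch of mine (not how any particular tool implements it): it keeps only those combinations from the full cartesian product that cover at least one not-yet-seen pair of parameter values.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily pick test combos until every pair of parameter values is covered."""
    names = list(params)
    uncovered = {(i, j, v1, v2)
                 for i, j in combinations(range(len(names)), 2)
                 for v1 in params[names[i]]
                 for v2 in params[names[j]]}
    suite = []
    for combo in product(*(params[n] for n in names)):
        pairs = {(i, j, combo[i], combo[j])
                 for i, j in combinations(range(len(names)), 2)}
        if pairs & uncovered:          # this combo covers something new: keep it
            suite.append(dict(zip(names, combo)))
            uncovered -= pairs
    return suite

# Hypothetical test parameters: 3 x 3 x 3 = 27 exhaustive combinations.
params = {"browser": ["chrome", "firefox", "safari"],
          "os":      ["windows", "mac", "linux"],
          "locale":  ["en", "de", "ja"]}
suite = pairwise_suite(params)
print(len(suite))  # fewer than the 27 exhaustive combos; every value pair still covered
```

A production pairwise tool would do far better than this greedy scan, but even the sketch shows the idea: fewer cases, same pair coverage.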

A disadvantage to up-front test case development is that we just plain don't know what the test cases are going to be. We can guess, but it takes time to develop everything and get the true picture of the requirements and the coded features, and reworking those test cases later is a pain. Instead, Petteri describes a process called Iterative Test Development (ITD). When reading a user story, a use case, or part of a technical spec, write down a few brief lines about what needs to be tested. As developers start coding features, flesh out each test case in a simple format. When the feature is finished, fill in the precise details for the test cases and start testing against the full requirements as soon as they are ready.
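One way to picture ITD's three passes is as a test case record that gains detail over time; the structure and example below are my own sketch, not notation from the chapter:

```python
from dataclasses import dataclass, field

@dataclass
class ItdTestCase:
    """A test case fleshed out in step with development, per ITD."""
    note: str                                     # pass 1: a brief line from the story/spec
    outline: list = field(default_factory=list)   # pass 2: simple format, as coding starts
    steps: list = field(default_factory=list)     # pass 3: precise details, feature finished

    @property
    def stage(self):
        if self.steps:
            return "ready to run"
        if self.outline:
            return "outlined"
        return "noted"

tc = ItdTestCase("login rejects a bad password")
print(tc.stage)  # -> noted
tc.outline = ["try a wrong password", "expect an error message"]
print(tc.stage)  # -> outlined
tc.steps = ["POST /login with user=demo, pass=wrong", "expect HTTP 401 and a visible error"]
print(tc.stage)  # -> ready to run
```

The payoff is that nothing past the one-line note is written until the code exists to justify it, so little rework is wasted.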

These examples (TDD, CI, and ITD) all point to the same goals and focus: they are meant to help make sure that the craft of developing software is first and foremost a sound one. ITD is the tester's step in that direction. As developers focus on the craft of software development and the processes that bring testing into their sphere, we likewise develop our tests as the code is being developed, so that we do not waste time creating test cases that do not address the real code in question. Ultimately, this all comes around to Petteri's initial goal: you can't waste time on a defect that isn't there. Rather than focus on finding bugs, let's focus on preventing them in the first place.

Sunday, October 2, 2011

BOOK CLUB: How to Reduce the Cost of Software Testing (4/21)

For almost a year now, those who follow this blog have heard me talk about *THE BOOK*. When it will be ready, when it will be available, and who worked on it? This book is special, in that it is an anthology. Each essay could be read by itself, or it could be read in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, and I'm excited about the premise and the way it all came together. But outside of all that... what does the book say?

Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full synopsis of each chapter in depth (hey, that's what the book is for ;) ), but I will give my thoughts as relates to each chapter and area. Each individual chapter will be given its own space and entry. Today's entry deals with Chapter 3.

Chapter 3: Testing Economics: What is Your Testing Net Worth? By Govind Kulkarni

This is an interesting chapter, in that it departs from speaking directly to testers (or at least to the way testers often see their role) and instead puts testers in the role of answering the following hypothetical question:

“Is your testing resulting in profit?”

The first two chapters discussed the value of testing and those of us who are involved in testing appreciate the value that it brings to the process of developing software, but how prepared are we to actually answer that question? We’re not accustomed to addressing testing as though it were a profit and loss item, but Govind goes into details that every organization should be aware of, and that we as testers would be well advised to learn about as well. The bigger question, reasonably speaking, is “What is the Net Worth of our testing?”

On one side, we could spend no money on testing at all, and thus it wouldn’t cost us anything. However, if we did that, the odds of us having a catastrophic problem in the software is greatly increased, and then the chance of people no longer using the product, asking for refunds, or even suing us goes up along with it. So yes, doing no testing would be a savings, but introduce other potential costs, perhaps much greater than the savings on testing. Spending money for testing is therefore seen as an asset, or at least an investment.

Govind makes the point that testing is often looked at as a non-essential cost that builds nothing. If it all “just worked”, or everybody did an amazing job up front, there wouldn’t be a need to test at all. That puts the squeeze on those of us who do the testing. A theme is emerging here. Testing cannot claim to make actual money (well, unless your business is software testing contracts, in which case software testing absolutely makes money). If you work for an organization that sells software (be it an application, a web site, or a service), the software itself is the revenue generator. Testing it is a cost that allows for a potentially better story when selling the software, but the testing itself doesn’t generate revenue. It just doesn’t, and protesting otherwise is just plain wrong. It’s like buying a car without considering the maintenance needed to keep it running. The maintenance is an expense, but it’s an expense that helps protect the value of the car. You can save money by not doing it, but your car will also wear out faster and lose more value than if you actually do the maintenance.

The key point of Govind’s chapter is to help educate the tester and others in the organization about the Net Worth of Testing, to help determine whether money spent on testing various projects was indeed a sound investment. If test managers and testers were able to speak to their efforts not just in terms of test cases and code coverage, but to demonstrate actual cost savings by showing how they can eliminate waste over the life of their testing projects, we would be in a better position to demonstrate the real net worth of software testing.

So what is Net Worth? It’s a concept that comes from finance: the value of all assets minus all liabilities. If our assets outweigh liabilities, we are “in the black” and are profiting, or at least we have more assets than we have liabilities. If liabilities outweigh assets, then we are “in the red”, or in debt. In some cases, such as a home mortgage, that may be acceptable, because over time the full asset will belong to the individual. In business, though, if the net income is less than the net outflow, that’s not a healthy long-term situation.

So for a business, sales are important, but they have to outpace liabilities for the business to be profitable. Thus the net worth of all actions in the business comes into question, and yes, if that business is software sales, then testing has an effect on that bottom line… but in what way?

With testing, the idea is that any defect discovered by testers is considered an asset on the side of the test team. Any defect found “outside” of normal testing channels (by customers, by support, etc.) is considered a liability. Defects found in production additionally carry a higher weight than those found in testing, depending on their severity (cosmetic issues will have less weight than a system crash).

What makes this idea interesting is the fact that, for a defect to really be considered on the “asset” side of the ledger, it has to be more than just found. It has to be found, examined, confirmed, fixed and then tested again to confirm the fix works. Any defect that is found but not fixed would be considered a liability. Thus, as was pointed out by Kaner, Falk and Nguyen in the 90’s in their book “Testing Computer Software”, the best tester is not the one that finds the most bugs. It’s the one that finds the most bugs that get fixed. In this case, the Net Worth model fits very much in line with that philosophy.

So how do we approach this whole notion of Net Worth in our day to day testing? The goal is to have more assets and fewer liabilities. Sounds great, but how can we practically do that? It comes down to having a practical and focused test approach and a realistic understanding of when tests are run and under what circumstances. Did we miss a particular test case or scenario? Can we include it in future testing? Do we have the appropriate time to test? Is our environment indicative of real world scenarios so that we are looking at a true representation of what our customers can expect to see?

It’s important to note that just adding more test cases will run up against the law of diminishing returns. Sheer volume of test cases will only carry you so far. They could actually cause delays, or you run into the problem of having comprehensive test cases but not enough time to actually complete them, which means you have cases you cannot run, which in turn opens up the risk of additional liabilities.

On the opposite side, we need to look closely at the reasons why issues that are reported are not fixed. There may be very good reasons, but I think if we as a testing organization start looking at those issues as what they are, i.e. liabilities that sit there like a credit card charge, then we might be more active and focused in trying to understand why they are not being fixed, and what we might be able to do to see that they ultimately do get fixed.

I think this is an intriguing idea. By looking at the defect tracking system and the rate of issues getting fixed vs. not fixed and considering those issues not fixed as though they are liabilities (again, debts), we arm ourselves with a unique way of thinking about those defects, and we also arm ourselves with a strong purpose to advocate why they should be fixed.

Wednesday, September 28, 2011

BOOK CLUB: How to Reduce the Cost of Software Testing (1/21)

For almost a year now, those who follow this blog have heard me talk about *THE BOOK*. When it will be ready, when it will be available, and who worked on it? This book is special, in that it is an anthology. Each essay could be read by itself, or it could be read in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, and I'm excited about the premise and the way it all came together. But outside of all that... what does the book say?

Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full synopsis of each chapter in depth (hey, that's what the book is for ;) ), but I will give my thoughts as relates to each chapter and area. Each individual chapter will be given its own space and entry. Today's entry deals with the Foreword and Preface.

Foreword by Cem Kaner


Cem makes the point that we have been going through challenging economic times, and the software testing world has certainly been affected. There is no question that in challenging financial realities, the desire to control costs comes to the forefront in many organizations. But what are the costs of software testing? Can you name them directly? Are your impressions of those costs correct? How about your organization's impressions? Do we want those who have no real understanding of the true costs of software testing (and of quality in general) making decisions based on dollar values alone?

When we approach software testing and quality, there needs to be a balance. Cost is only one variable in the equation. Another vital variable is waste. How much of what we do in our day-to-day testing is busy-work, there because some department in the organization says they need it, but never looks at the data provided? Is that waste? It is if work of greater value could be done, but can't be, because we are too busy dotting i's and crossing t's. Note, I am not saying that having documentation or providing information is unimportant; information is the key deliverable of any tester. However, it has to be the right kind of information, at the right time, in the right amount, to tell a compelling story. If what we provide isn't optimized to do exactly that, then any of the "extra junk" is just that. It's junk.

So what's the answer? We as testers need to consider a different way of thinking. We need to up our game in the skills department. We need to focus on the activities that deliver high quality information and can provide as complete a story as possible so that stakeholders can make an informed decision. Learning about techniques will help, but so will learning what areas we should diminish or avoid altogether. How to do that? That's what the chapters in this book should help us all do :).

Preface


It's become a thing of legend now. Govind Kulkarni asked a fateful question on a LinkedIn group, and a whirlwind picked up momentum. Five hundred responses later, an idea was born: let's create a book about reducing the cost of software testing, from the perspective of the testers themselves. Let's edit it collaboratively. Let's recruit authors from many different industries and from different countries. Let's use a wiki to share our ideas and develop the different areas of the book in parallel.

The net result is this book. Many people took time to coach the authors, help them deliver drafts of their topics, and help them polish those drafts until they were solid and ready to be included in the book.

There are three main areas; first, identifying the costs of software testing, and after reading the first section, you may have a different consideration of what those costs actually are and how they are approached (well, I certainly did).

The second part of the book relates to "What Should I Do?" A lot of that "what" depends on "who" you are. This book is not written for testers alone. There is a great deal of valuable information for developers, development managers, executives, financial comptrollers, and others who have direct impact on where the dollars and cents of an organization's finances are applied.

The third section is "How do we do it?". There are many ideas presented from different vantage points and from different levels of experience. You may not find everything is applicable, but you will very likely find something you can take away and do right then and there.

The final section is an Appendix, and for those that just can't wait, there are key areas with immediate methods you can use to get to work cutting costs right this second. However, do you really want to prescribe medicine before you know what the sickness is? That's why I'll be reviewing those sections last ;).

I hope you'll join me next time so that we can dive into "Chapter 1: Is this the Right Question?"

Tuesday, February 8, 2011

BOOK CLUB: How We Test Software at Microsoft (16/16)

This is the second part of section 4 of How We Test Software at Microsoft. This is also the final chapter of the book. After three months of near-weekly updates (some more often, some less often… sorry about that, this approach was a learning process for me, too :) ), this project has now come to an end. I will post a follow-up to this final post with a more conventional “total” review of the book and some comments on this BOOK CLUB process (will I do this again? What did I learn from doing this? What went well and what would I want to do differently in the future?), but first, let’s close out this endeavor with some thoughts from Alan regarding where testing may be heading and how Microsoft is trying to shape that future, both within their company culture and to help influence the broader culture outside of it.


Chapter 16: Building the Future

Alan starts out this final chapter with the reminder that, by direct comparison, software testing is a newer discipline than software development. Commercial computer services offered to the public began in earnest in the 1950s. In those days, software development and software testing were the same discipline; the developer did both. As systems grew more complex and more lines of code were being written, and fostered by developments in the manufacturing world, quality of the process became more of a focus, along with the idea that a separate, non-partisan entity should be part of the process to review and inspect the systems. Thus, the role of finding bugs and doing “desk checks” of programs broke into two disciplines, where the software developer wrote the code and a tester checked it to make sure it was free of defects (or, barring that, found what defects they could find).

Today, software testing is still primarily a process of going through software and verifying that it does what it claims to do, and keeping our eyes out for the issues that would be embarrassing or downright dangerous to a company’s future livelihood if a customer were to run across it. The programs written today are bigger, more complex and have more dependencies than ever. Think of the current IDE culture; so many tools are available at developers’ fingertips that they are able to write code without writing much of anything, it seems. Full featured utilities created with just twenty lines of code. Of course, those of us in testing know full well that that’s not the real story; those 20 lines of code contain references to references to objects and classes that we have to be very alert to if we want to ensure that we have done thorough testing.


As far as we can tell, the future is looking to get more wired, more connected, more blurring of the digital lines structuring our lives. The days of a discrete computer are ancient history. Nearly every digital device we come into contact with today now has ways and means to synchronize with other devices, either through cabled connections or through the ether. The need to test has never been greater, and the need for good testing is growing all the time. The question of this final chapter is simple, but by no means easy… where do we go from here?


The Need for Forward Thinking

In the beginning there was debugging; then we moved to verification and analysis. Going forward, the question is not going to be so much “how do we verify that the system is working?” but rather “how do we prevent errors in the first place?” A common metaphor that I use when I talk about areas where we have a stake is two concentric circles: the inner one I call the sphere of control, the outer one the sphere of influence. Verification and analysis are very much in the sphere of control for a tester; they are things we can do directly, providing immediate value. When it comes to prevention, there are some things we can control, but much falls outside our direct control; it falls instead into the sphere of influence. Alan recognizes this, and makes the point that the biggest gains going forward toward developing better quality will not take place in the verification and analysis sphere, but in the preventative sphere. The rub is, what we as testers can do to prevent bugs is a bit more limited. What we can do is provide great information that will help influence the behaviors and practices of those who develop code, so that the preventative gains can be realized.

Thinking Forward by Moving Backward

I like this story, so it’s going in unedited :):

As the story goes, one day a villager was walking by the river that ran next to his village and saw a man drowning in the river. He swam into the river and brought the man to safety. Before he could get his breath, he saw another man drowning, so he yelled for help and went back into the river for another rescue mission. More and more drowning men appeared in the river, and more and more villagers were called upon to come help in the rescue efforts. In the midst of the chaos, one man began walking away along a trail up the river. One of the villagers called to him and asked, “Where are you going? We need your help.” He said, “I’m going to find out who is throwing all of these people into the river.”

Another phrase I like a lot comes from Stephen R. Covey’s book “The Seven Habits of Highly Effective People”. His habit #7 is called “Sharpening the Saw”. To picture why this is relevant here, he uses the example of a man trying to cut through a big log. He’s huffing and puffing, and he’s making progress, but it’s slow going. An observer notes that he’s doing a lot of work, and then helpfully asks “have you considered sharpening your saw?”, to which the man replies “Hey, I’m busy sawing here!” The point is, we get so focused on what we are doing right now that we neglect to stop the process, repair or remedy the situation, and then go forward with renewed vigor and sharper tools.

How many software projects rely on the end of the road testing to find the problems that, if we believe the constant drum beat from executives and others who champion quality, would be way more easily found earlier in the process? Is it because we are so busy sawing that we never stop to sharpen the saw? Are we so busy saving drowning people we don’t bother to go up river and see why they are falling in?


All of us who are testers recall the oft-mentioned figures on how the cost of a bug increases the later it is found in the process.

A bug introduced in the requirements phase that might cost $100 to fix if found immediately will cost 10 times as much to fix if not discovered until the system test phase, or as much as 100 times as much if detected post-release. Bugs fixed close to when they are introduced are generally easier to fix. As bugs age in the system, the cost can increase as developers have to reacquaint themselves with the code to fix the bug, or as dependencies to the code in the area surrounding the bug introduce additional complexity and risk to the fix.
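The arithmetic in that passage is easy to make explicit (the $100 base figure and the 10x/100x multipliers are the text's own illustration):

```python
BASE_COST = 100  # dollars: a requirements-phase bug, fixed immediately
PHASE_MULTIPLIER = {"requirements": 1, "system test": 10, "post-release": 100}

def fix_cost(phase):
    """Estimated cost to fix the same bug, by the phase in which it is found."""
    return BASE_COST * PHASE_MULTIPLIER[phase]

for phase in PHASE_MULTIPLIER:
    print(f"{phase}: ${fix_cost(phase):,}")
# requirements: $100, system test: $1,000, post-release: $10,000
```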


Striving for a Quality Culture

Alan points to the work of Joseph Juran and the fact that it is the culture of a place that will determine its approach to quality issues; any resistance, or lack thereof, will likewise have a cultural element to it. When I refer to culture here (and Alan does, too) we are referring to the corporate culture, the management culture, the visions and values of a company. Those are very fluid as you go from company to company, but company mores and viewpoints can hold for a long time and become ingrained in the collective psyches of organizations. The danger is that, if the culture is not one that embraces quality as a first-order factor of doing business, quality will take a back seat to other initiatives and priorities until it absolutely must be dealt with (in some organizations, the failure to deal with it results in the closure of said company).

For many, the idea of a front-end quality investment sounds like a wonderful dream, but for many of us, that’s what it has proven to be… just a dream. How can we help move quality earlier in the process? It needs to be a culture everyone in the organization embraces, one where prevention trumps expediency. Or we could go with a phrase that Matt Heusser used on the TWiST podcast that I’ve grown to love: “If heroics are required, I can be a hero, but it’s going to cost you!” Seriously, I love this phrase, and I’ve actually used it a few times myself… because it’s 100% true. If an organization waits until the end of the process for last-minute heroics, it will cost the organization, either in crunch-time overtime of epic proportions, or in reactive fixes because something made its way out into the wild that shouldn’t have and, with some preventative steps, very likely could have been caught earlier in the life cycle.


Testing and Quality Assurance

“In the beginning of a malady it is easy to cure but difficult to detect, but
in the course of time, not having been either detected or treated in the beginning, it becomes
easy to detect, but difficult to cure.” –Niccolo Machiavelli


Alan, I just have to say “bless you” for bringing this up over and over in the book, and making sure it is part of the “parting shot” and summation. Early detection of a problem always trumps last-minute heroics, especially when it comes to testing. Testing is the process of unearthing problems before a customer can find them. No question, Microsoft has a lot of testers; I know quite a few of them and have worked with several of them personally over the years (as I said in the previous chapter, I worked at Connectix in 2001 and 2002, and a number of the software engineers and testers from that team are active SDEs and SDETs for Microsoft today). It’s not that they are not good at testing; it’s that even Microsoft still focuses on the wrong part of the equation:

“YOU CAN’T TEST QUALITY INTO A PRODUCT!”


Testing and Quality Assurance are often treated as though they are the same thing. They are not. They are two different disciplines. When we test a product, it’s an after-the-fact situation: the product is made, and we want to see if it will withstand the rigor of being run through its paces. Quality Assurance, by contrast, is a proactive process meant to act early in the life of a process or a product, to make sure the process delivers the intended result. It sounds like semantics, but it’s not; they are two very different processes with two different approaches. Of course, to assure quality, we use testing to make sure that the quality initiatives are being met, but using the terms interchangeably is both inaccurate and misleading (as well as confusing).


Who Owns Quality?

This is not a trick question, but the answers often vary. Does the test team own quality? No. They own the testing process. They own the “news feed” about the product. Others would say that the entire team owns quality, but do they really? If everyone owns something, does anyone really own anything?! Alan makes the point that saying the test team owns quality puts the emphasis in the wrong place, and saying everyone owns quality de-emphasizes it entirely. The fact is, the management team owns quality, because they are the ones who make the ship decisions. Testing doesn’t have that power. The mental image of the “Guardian of the Gate” is a bad one for testing, as it makes it seem as though we are the ones who decide who shall pass and who shall not, and we don’t. I’m a little more comfortable with the idea of the “last tackle on the field”, because the test team is often the last group to see a feature before it goes out into the wild, but even then, there’s no guarantee we will catch a problem, or that if we do, we can prevent it from going out into the field. Management owns that. The best metaphor, to me, is that of a beat reporter. We get the story, we tell the story, as much of it as we know and as much of it as we can learn. We tell our story, and then we leave it to the management team to decide if we have a shipping product or not.

In short, a culture of quality and a commitment to it must exist first before major changes and focus on quality will meet with success.


The Cost of Quality

The Cost of Quality is not the price of making a high-quality product. It’s the price paid by a company when a poor-quality product gets out. Everything from extra engineering cycles to provide a fix, to lost opportunity because of bad press, to actual loss of revenue because a service isn’t working, factors into the cost of quality. Other examples of the price to pay when quality issues escape into the wild are:



  • Rewriting or redesigning a component or feature
  • Retesting as a result of test failure or code regression
  • Rebuilding a tool used as part of the engineering process
  • Reworking a service or process, such as a check-in system, build system, or review policy



The point that is being made is that, were none of these situations to have happened because testing and quality assurance were actually perfected to the point where no bugs slipped through (to dream… the impossible dream…), these expenses would not have caused the bottom line to take a hit. So perhaps the real cost of quality is what Alan calls the Cost of Poor Quality (COPQ).


Philip Crosby says each business has three specific cost areas:


  • Appraisal (salaries, equipment, software, etc.)
  • Preventative (expenditures associated with implementing and maintaining preventative techniques)
  • Failure (the cost of rework or “do-over”)


To put it bluntly, preventative work gets a lot of lip service, but rarely does it actually get implemented. Failure costs? We pay those in spades, usually far more often than the other types (overtime, crunch time, the death march to release, etc.).


The takeaway for many testers (believe me, if we could impart no other message, this would be really high on my list of #1 takeaways):

We don’t need heroics; we need to prevent the need for them.


A New Role for Test

One of the great ironies is that, when testers talk about the desire to move away from late-in-the-game testing toward earlier-in-the-process prevention of bugs, an oft-heard comment is, “come on, if we do that, what will the testers test?” Well, let’s see… there’s the potential for looking at the human factors that influence how a product is actually used, there’s performance and tuning of systems, there’s system uptime and reliability, there’s researching and examining different testing techniques to get deeper into the application… in short, there’s plenty for testers to do, even if the end-of-cycle heroic suicide missions are done away with entirely (many of us can only dream and wish for such a world). Many of the more interesting and compelling areas of software testing do not get explored in many companies because testers are in perpetual firefighting mode. For most of us, were we given the opportunity to get out of that situation and explore more options, we would welcome it gladly!

Test Leadership

At the time HWTSAM was written, there were over 9,000 testers at Microsoft. Seriously, wrap your head around that if you can. How do you develop a discipline that large at a company the size of Microsoft, so that the state of the craft keeps moving forward? You encourage leadership and provide a platform for that leadership to develop and flourish.


The Microsoft Test Leadership Team

Microsoft developed the Microsoft Test Leadership Team (MSTLT) to encourage the sharing of good practices and testing knowledge among the various testing groups and their testers.

The MSTLT’s mission is as follows:

The Microsoft Test Leadership Team vision


The mission of the Microsoft Test Leadership Team (MSTLT) is to create a cross–business group forum to support elevating and resolving common challenges and issues in the test discipline.


The MSTLT will drive education and best practice adoption back to the business group test teams that solve common challenges.


Where appropriate the MSTLT will distinguish and bless business group differences that require local best practice optimization or deviation.


The MSTLT has around 25 members, including the most senior test managers, directors, general managers, and VPs; they are spread throughout the company and represent all products Microsoft makes. Membership is based on level of seniority and approval of the TLT chair and the product line vice president. Having these members involved helps make sure that testing advocacy grows and that the state of the craft develops and flourishes with the support of the very people who champion that growth and development.

Test Leadership in Action

The MSTLT group meets every month to discuss and develop plans to help grow the career paths of a number of contributors, as well as addressing new trends and opportunities that can help testers become better and (yet again) improve the state of the craft overall within Microsoft.

Some examples of topics covered by the MSTLT:

Updates on yearly initiatives: At least one MSTLT member is responsible for every MSTLT initiative and for presenting to the group on its progress at least four times throughout the year.


Reports from human resources: The MSTLT has a strong relationship with the corporate human resources department. This meeting provides an opportunity for HR to disseminate information to test leadership as well as take representative feedback from the MSTLT membership.


Other topics for leadership review: Changes in engineering mandates or in other corporate policies that affect engineering are presented to the leadership team before circulation to the full test population. With this background information available, MSTLT members can distribute the information to their respective divisions with accurate facts and proper context.


The Test Architect Group

Another group that has developed is the Test Architect Group, which, contrary to its name, is not just a bunch of Test Architects (though it started out that way); it also includes senior testers and individuals working in the role of a test architect, whether they hold the official title or not.

So what was envisioned for the Test Architect role? Well, here's how it was originally considered and implemented:


The primary goals for creating the Test Architect position are:


  • To apply a critical mass of senior, individual contributors on difficult/global testing problems facing Windows development teams
  • To create a technical career path for individual contributors in the test teams


Some of the key things that Test Architects would focus on include:


  • Continue to evolve our development process by moving quality upstream
  • Increase the throughput of our testing process through automation, smart practices, consolidation, and leadership


The profile of a Test Architect:


  • Motivated to solve the most challenging problems faced by our testing teams
  • Senior-level individual contributor
  • Has a solid understanding of Microsoft testing practices and the product development process
  • Ability to work both independently and cross-group, developing and deploying testing solutions


Test Architects will be nominated by VPs and will remain in their current teams. They will be focused on solving key problems and issues facing the test teams across the board. The Test Architects will form a virtual team and meet regularly to collaborate with each other and with other Microsoft groups, including Research. Each Test Architect will be responsible for representing the unique problems faced by their teams, and for implementing and driving key initiatives within their organizations in addition to working on cross-group issues.


Test Excellence

Microsoft created the Engineering Excellence (EE) team in 2003. The group was created to help push ahead initiatives for tester training, and to discover and share good practices in engineering across the company (some of you may notice that I didn't say "best practices". While Alan used the term "Best Practices", I personally don't think there is such a thing. There are some really great practices, but to say "best" means there's no room for better practices to develop. It's a pet peeve of mine, so I'm modifying the words a bit, but the sentiment and the idea are the same).

The mission of the Test Excellence team comes down to Sharing, Helping, and Communicating.


Sharing

Sharing means focusing on the following areas:


  • Practices The Test Excellence team identifies practices or approaches that have potential for use across different teams or divisions at Microsoft. The goal is not to make everyone work the same way, but to identify good work that is adoptable by others.
  • Tools The approach with tools is similar to practices. For the most part, the core training provided by the Test Excellence team is tool-agnostic, that is, the training focuses on techniques and methods but doesn’t promote one tool over another.
  • Experiences Microsoft teams work in numerous different ways—often isolated from those whose experiences they could potentially learn from. Test Excellence attempts to gather those experiences through case studies, presentations (“Test Talks”), and interviews, and then share those experiences with disparate teams.



Helping

One of the primary purposes of the Test Excellence team is to help champion quality improvements and learning for all testers. They accomplish these objectives in the following ways:


  • Facilitation Test Excellence team members often assist in facilitating executive briefings, product line strategy meetings, and team postmortem discussions. Their strategic insight and view from a position outside the product groups are sought out and valued.
  • Answers Engineers at Microsoft expect the Test Excellence team to know about testing and aren’t afraid to ask them. In many cases, team members do know the answer, but when they don’t, their connections enable them to find answers quickly. Sometimes, team members refer to themselves as test therapists and meet individually with testers to discuss questions about career growth, management challenges, or work–life balance.
  • Connections Probably the biggest value of Test Excellence is connections—their interaction with the TLT, TAG, Microsoft Research, and product line leadership ensures that they can reduce the degrees of separation between any engineers at Microsoft and help them solve their problems quickly and efficiently.


Communicating

Having these initiatives is great, and supporting them takes a lot of energy and commitment, but without communicating to the rest of the organization, these initiatives would have limited impact. Some of the ways that the Test Excellence team helps foster communication among other groups are:


  • A monthly test newsletter for all testers at Microsoft includes information on upcoming events, status of MSTLT initiatives, and announcements relevant to the test discipline.
  • University relationships are discussed, including reviews on test and engineering curriculum as well as general communications with department chairs and professors who teach quality and testing courses in their programs.
  •  The Microsoft Tester Center (http://www.msdn.com/testercenter)—much like this book—intends to provide an inside view into the testing practices and approaches used by Microsoft testers. This site, launched in late 2007, is growing quickly. Microsoft employees currently create most of the content, but industry testers provide a growing portion of the overall site content and are expected to become larger contributors in the future.



Keeping an Eye on the Future

Trying to anticipate the future of testing is a daunting task, but many trends make themselves visible often years in advance, and by trying to anticipate these needs and opportunities, the Test Excellence team can be positioned to help testers grow into and help develop these emerging skills and future opportunities.

Microsoft Director of Test Excellence

Each of the authors of HWTSAM has held (or, in the case of Alan Page, currently holds) the position of Director of Test Excellence.

Its primary responsibility is to work toward developing the opportunities, infrastructure, and practices needed to help advance the testing profession at Microsoft.

The following people have all held the Director of Test position:


  •  Dave Moore (Director of Development and Test), 1991–1994
  •  Roger Sherman (Director of Test), 1994–1997
  •  James Tierney (Director of Test), 1997–2000
  •  Barry Preppernau (Director of Test), 2000–2002
  •  William Rollison (Director of Test), 2002–2004
  •  Ken Johnston (Director of Test Excellence), 2004–2006
  •  James Rodrigues (Director of Test Excellence), 2006–2007
  •  Alan Page (Director of Test Excellence), 2007–present

The Leadership Triad

The Microsoft Test Leadership Team, Test Architect Group, and Test Excellence are three pillars of emphasis and focus on the development and advancement of the software testing discipline within Microsoft.

Innovating for the Future

The final page of the book deals with a goal for the future. Since so many of Alan, Ken and BJ’s words are already included, I think it’s only fair to let them have the last word :)...

When I think of software in the future, or when I see software depicted in a science fiction movie, two things always jump out at me. The first is that software will be everywhere. As prevalent as software is today, in the future, software will interact with nearly every aspect of our lives. The second thing that I see is that software just works. I can’t think of a single time when I watched a detective or scientist in the future use software to help them solve a case or a problem and the system didn’t work perfectly for them, and I most certainly have never seen the software they were using crash. That is my vision of software—software everywhere that just works.


Getting there, as you’ve realized by reading this far in the book, is a difficult process, and it’s more than we testers can do on our own. If we’re going to achieve this vision, we, as a software engineering industry, need to continue to challenge ourselves and innovate in the processes and tools we use to make software. It’s a challenge that I embrace and look forward to, and I hope all readers of this book will join me. If you have questions or comments for the authors of this book (or would like to report bugs) or would like to keep track of our continuing thoughts on any of the subjects in this book, please visit http://www.hwtsam.com. We would all love to hear what you have to say.


—Alan, Ken, and Bj

Wednesday, October 12, 2011

BOOK CLUB: How to Reduce the Cost of Software Testing (14/21)

For almost a year now, those who follow this blog have heard me talk about *THE BOOK*. When it will be ready, when it will be available, and who worked on it? This book is special, in that it is an anthology. Each essay could be read by itself, or it could be read in the context of the rest of the book. As a contributor, I think it's a great title and a timely one. The point is, I'm already excited about the book, and I'm excited about the premise and the way it all came together. But outside of all that... what does the book say?


Over the next few weeks, I hope I'll be able to answer that, and to do so I'm going back to the BOOK CLUB format I used last year for "How We Test Software at Microsoft". Note, I'm not going to do a full synopsis of each chapter in depth (hey, that's what the book is for ;) ), but I will give my thoughts as relates to each chapter and area. Each individual chapter will be given its own space and entry.


We are now into Section 3, which is sub-titled "How Do We Do It?". As you might guess, the book's topic mix makes a change yet again. We have defined the problem. We've discussed what we can do about it. Now let's get into the nuts and bolts of things we can do, here and now. This part covers Chapter 13.


Chapter 13: Exploiting the Testing Bottleneck by Markus Gaertner


Markus starts out this chapter by stating that in many organizations testing is seen as the bottleneck for software development projects. That perception is often wrong. Requirements, architecture, and code also play a hand in this, and all of their details fly under the radar until we get to the testing-before-delivery part. That's when lots of inefficiencies come to the fore, and then we have to deal with them all at once. To reduce the cost of testing, we need to explore the whole project and optimize every step we can. This chapter uses a typical Agile project and describes how it could be optimized.


Agile projects use iterations to define the "heartbeat" of the project. After each iteration, the team delivers a "shippable" product. The team plans each iteration uniquely. Business priorities may change, so iterations allow us to adapt to customer needs. The team creates acceptance tests by developing user stories. Testers get involved right from the start: by providing estimates during iteration planning, helping the customer identify acceptance tests, and defining the risks in the software, they deliver an up-front reduction in the overall testing effort.


The product owner maintains a product backlog of prioritized user stories. The team discusses which stories they think they can finish during the iteration. Anytime a new requirement is identified, a new story card is created. To identify priority, the product owner and a programmer will determine how much effort the feature may take. That programmer then checks that estimate with a tester to make sure they understand the time commitments. The point again being, testers are involved in planning and estimating right from the start.


Testers help the customer define basic acceptance tests for user stories. These tests will likely consist of simple happy paths and some corner cases relevant to the story in question. Testers help the customer and the programmer think about critical conditions which the team may not have initially considered. Problems are discussed immediately, rather than waiting until the testing phase of the project. Trade-offs are considered. How thoroughly a case is tested may depend on how critical the functionality is and how much effort should be applied.

As stories are chosen to be implemented, testers contribute their view on the testability of features. By identifying potential issues early on, testing costs can be reduced before any implementation is done. Programmers become aware of testing challenges. Testers can learn about the potential pitfalls of a seemingly easy-to-test story. When a team gets together early in the project, they can build a shared mental model. This helps reduce misunderstandings.


Collaboration is key. Pair programming, daily stand-up meetings and pair testing with another tester, a developer, or a customer are all parts of this collaboration. When the whole team sits together, testers get a more thorough understanding of the problems. Testers contribute greatly just by overhearing the team's talk. The daily stand-up is more than just saying what you have done the day before and will do today. By sharing progress and obstacles, we can build trust among team members. Testers do not hide problems in their progress. They discuss them openly. Because of this, testers are no longer left alone with the problems they encounter. Instead the whole team contributes to help solve the problems.


Testing, of course, occurs during each iteration. Setting some dedicated time aside to help the team learn new things (coding or testing related) helps the team prepare for possible future issues. Test Driven Development helps the testing process by including testing at the core of the development activities. Testers use Acceptance Test Driven Development to help develop tests that integrate with the features being developed. Exploratory Testing methods are used to inquire into feature behavior and follow paths that might not initially be considered.

Getting to Done means that the feature has been Implemented, Tested, and Explored:

  • Implemented means Red - Green - Refactor (the Kent Beck model for Test Driven Development)
  • Tested means Discuss - Develop - Deliver
  • Explored means Discover - Decide - Act


The iteration is finally wrapped-up in a customer demonstration to get feedback about the just-developed features, and a reflection workshop helps the team to improve how they work. During an iteration demo, the just developed features are presented to the stakeholders. Since the features are shown in the working software, the development team receives direct feedback from the customer about progress.

Note, this just scratches the surface of the details provided in this chapter, but we can see already that there are many opportunities where the "testing bottleneck" can be avoided by having testing efforts be part of the project much earlier. Testing should not be done at the end of a project where a lot of scrambling needs to be done to examine issues discovered. There is lots of opportunity for up front testing, both from the developers and testers, and this up front testing can do a lot to help prevent a back up later on.

Acceptance Test-Driven Development allows us to examine requirements for a current iteration. By focusing on business-facing tests and meeting their expectations we can focus on meaningful tests. Outdated or needless tests can be eliminated.
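As a rough sketch of what such a business-facing test might look like, here is a hypothetical user story ("orders of $100 or more get a 10% discount") expressed as Python unittest checks. The story, the `apply_discount` function, and the test names are my own inventions for illustration, not examples from the chapter.

```python
import unittest

# Hypothetical feature under test, standing in for a user story:
# "orders of $100 or more receive a 10% discount."
def apply_discount(order_total):
    """Return the order total after the story's discount rule."""
    if order_total >= 100:
        return round(order_total * 0.90, 2)
    return order_total

class DiscountAcceptanceTests(unittest.TestCase):
    """Business-facing examples agreed with the customer up front."""

    def test_order_at_threshold_gets_discount(self):
        self.assertEqual(apply_discount(100.00), 90.00)

    def test_order_below_threshold_pays_full_price(self):
        self.assertEqual(apply_discount(99.99), 99.99)
```

Because each test encodes an expectation the customer actually stated, a test that no longer matches a current requirement is easy to spot and retire.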

Automated System Tests often flow from acceptance tests defined and delivered during a particular iteration. The team then has a large number of relevant tests that are automated and can be run at the press of a button. Over time the team creates reliable tests, which can be run continuously.

Acceptance tests help spawn other tests. A tester working on a story can come up with additional tests that were previously not considered; those tests can then be automated, allowing more functionality to be covered automatically and freeing the tester to explore additional avenues.

Test-Driven Development's primary mission is to drive the design of the code. The Red-Green-Refactor cycle allows for the development of robust and flexible code. This avoids a big redesign if an issue is discovered late in a project because testing had not been done previously, as is often seen in traditional software projects.
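As a minimal illustration of that Red-Green-Refactor cycle (the `fizzbuzz` example is mine, not from the chapter): the tests are written first and fail because the code doesn't exist yet (red), the simplest implementation makes them pass (green), and the passing suite then makes refactoring safe.

```python
import unittest

# RED: these tests are written first; they fail until fizzbuzz exists.
class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_other_numbers(self):
        self.assertEqual(fizzbuzz(7), "7")

# GREEN: the simplest implementation that makes the tests pass.
# REFACTOR: with the suite green, the body can be reshaped
# (renaming, collapsing conditionals) without changing behavior,
# because any regression turns a test red again.
def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

The design benefit Markus points to comes from that last step: refactoring happens continuously and safely, instead of as one risky late-project redesign.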

Automated microtests are a by-product of TDD. Since every single new line of code is tested even before it is written, lots of microtests are created as the code gets written. This leads to unit tests which are run by the developers before submitting their code. These automated microtests provide nearly instant feedback: when they pass, the developers check in their code. If the build environment differs from the programmer's environment, or there is an incompatibility, Continuous Integration builds will notify the team about the problem.
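A sketch of what that pre-check-in feedback loop might look like in Python's unittest; `slugify` and `safe_to_check_in` are hypothetical stand-ins for a developer's new helper and the local test run described above, not anything from the chapter.

```python
import io
import unittest

def slugify(title):
    """Hypothetical new helper: turn a page title into a URL slug."""
    return "-".join(title.lower().split())

# Microtests written alongside the code; in TDD there would be many of
# these, each exercising one tiny behavior.
class SlugifyMicrotests(unittest.TestCase):
    def test_lowercases(self):
        self.assertEqual(slugify("Hello"), "hello")

    def test_joins_words_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

def safe_to_check_in():
    """Run the microtests locally, the way a developer would before
    submitting; a CI server would run the same suite on every commit."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyMicrotests)
    result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
    return result.wasSuccessful()
```

The same suite running in both places is what catches the environment differences mentioned above: a test that passes locally but fails in the CI build flags the incompatibility immediately.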

Everyone on an Agile team is a tester, not just the dedicated testers. The customer helps define meaningful tests right from the start. Developers use TDD to help make sure the code does what it's supposed to do; CI builds help determine if a change is incompatible with what's been checked in previously. Testers make sure that all working parts are behaving as they are expected to, and utilize an Exploratory approach to determine how the application behaves under a variety of circumstances. Automated tests help to make sure that steps are not forgotten.

The key to exploiting the testing bottleneck is not to make the tester work faster or harder, or get more testers, it's to understand that testing can, should, and must happen at all stages of the project. Agile methodologies are designed with this very idea in mind. By having the test processes start at the very beginning of a project iteration, the testing can be done at all levels of the project, from code creation to final system integration and everything in between. Testing is front loaded, not back ended, and thus the bottleneck, if not completely eradicated, can be greatly reduced.

Monday, April 29, 2013

TESTHEAD REDUX: Aikido and the Role of Certification

When I wrote the original post for this back in 2010, I think it was the first time I was willing to break out and say "I don't even know what I'm hoping to find with this". It was prompted by my looking at a variety of "certification" options out in the testing market at the time. Most of them I had just started to hear about, many of them were somewhat nebulous, all of them made me feel somewhat uneasy. At the time I said the following (note, I used my experiences with the martial art Aikido as an analog to my understanding of the certification landscape at the time):

In my mind, this is the big thing that is missing from most of the certification paths that I have seen to date. There is a lot of emphasis on passing a multiple choice test, but little emphasis on solving real world problems or proving that you are actually able to do the work listed on the exam, or that you genuinely possess the skills required to test effectively.  

The other issue that I have with this is that, just like in an actual real world confrontation, some of the best practitioners of Aikido may not be the best at articulating each and every step, but my goodness they are whirlwinds on the mat and on the street! This is because they are instinctive, and their training has been less on the intellectual explanation and more on the raw “doing”! 

The reason I mention these details is that I still, all these years later, have yet to find a true certification that actually leads to the goals I am after and desire, but I also have found several examples of exactly what I want to see certification become. In short, I want to see a certification that really lives up to the principles of Aikido. I want to see testing as a martial art in its own right (with perhaps a de-emphasizing of the "martial" aspect. Perhaps a better phrase would be a "philosophical art").

Before I get too far into this, I will say up front that the three things I am going to suggest need to be taken with a very large grain of salt. Why? Because I have a vested interest in all of them, but not for the reasons you may be thinking. A disclaimer... I make no money from any of these endeavors. In fact, in some ways, I forego earning money in other ways so that I can champion them. If I wanted to make myself the equivalent of the impoverished warrior monk, or a Zatoichi, I may have found the perfect recipe in these three examples ;). Nevertheless, I do them because of the value that I believe they provide, and from the anecdotal value that others come back to me and say that they offer.


BBST - BBST is the Black Box Software Testing courses offered by the Association for Software Testing and others. Note, you can get very close to 100% of the benefits of BBST without ever taking a class. All of the materials (the lectures, the course notes, the readings, etc) are available online. What's not readily available online is the course quizzes and exams, and the ability to be coached by other testers who help instruct the course. There is a cost associated with it ($125 for the Foundations course, $200 for the Bug Advocacy and Test Design courses), but the costs are used to pay for the hosting of the instances of the class, the servers, and administrative overhead. At this point in time, every Instructor for BBST is a volunteer, i.e. we don't get paid to do what we do. 

Weekend Testing - while BBST is one of the best direct trainings out there, Weekend Testing is, IMO, one of the best organized skills workshops held on a regular basis for testers to sharpen their swords on a variety of topics. Weekend Testing works on a variety of levels. It has much to offer the beginner who wants to learn how to test. It has much to offer the intermediate tester who wants to mentor newer testers, and likewise learn more themselves. It has much to offer advanced testers who can work to develop their skills as leaders by facilitating sessions and designing interesting and unique content to talk about, learn from, and make for a positive influence in the broader testing community. Also, unique to Weekend Testing is the fact that every session is archived. If you would like to show someone just how much you contributed, it's there in black and white, for all the world to see.

Weekend Testing has several chapters that are in various stages of operation. Chapters have been formed in India, Australia/New Zealand, Europe and the Americas. Currently, the India and the Americas chapters are the most active, but all it takes is willing folks to facilitate, with a strong desire to help testers improve their craft, for more chapters to open up where they are needed (hint, I would have no problem seeing a South America chapter develop, and we have yet to see a chapter develop in Africa. So there's plenty of room to grow, as far as I can see :) ).

The Miagi-do School of Software Testing - this is the one that probably means the most to me, and yet it's the one that will, guaranteed, never make me rich. Well, none of them will, but unlike BBST and Weekend Testing, which could be used as a marketable option or product, Miagi-do cannot. Actually, I should say it will not, as long as the founders have anything to say about it. It's not a not-for-profit. It's a ZERO-profit. It's also a ZERO-income enterprise. No money ever changes hands, and likely, no money ever will. People have to seek out the school, have to show they are willing, have to face multiple testing challenges, and actually put in a lot of work that leads to the betterment at large of the software testing community. 

Everyone's path is different, but my path was through signing up with AST and learning about the BBST classes, taking them as a participant, and then offering to teach them over the past three years. It included my producing a podcast dedicated to software testing topics, and frequently researching and presenting my own findings in episodes where I was featured as a guest or a panelist. It involved my getting into Weekend Testing as then offered in Europe, making enough of a commitment to it to be considered knowledgeable enough to facilitate, and then bringing Weekend Testing to the Americas, where I have fostered its growth and development (along with several others, to be fair) for the past two and a half years. It involved writing in many areas, including the book "How to Reduce the Cost of Software Testing", as well as multiple articles for other distribution channels (ST&QA, Techwell, Tea Time for Testers, The Testing Planet, plus guest blogs for numerous companies and outlets).

In between all of this, I have sat for and failed, and then later passed, several testing challenges, each one with the idea that I would demonstrate to my testing peers what I knew how to do, and what I didn't know. The Black Belt/Instructor level that I hold in Miagi-do may be laughed at by some. What does it mean, really? On paper, and in the eyes of HR departments, probably less than nothing. If, however, you and others out there feel that the talks I have given, the sessions I have facilitated, the courses I have taught, the articles I have written, and the podcasts I have recorded have helped, in some small way, to the improvement and betterment of the software testing community, then my black belt speaks volumes. In fact, it's my hope that everything else I have done, and will continue to do, makes the mention of a black belt completely irrelevant. 

In short, I am not my "certification". I am the ideas and the experiences that went into it. The fact that my certification is one that I have made for myself, surrounded with like-minded people that I respect and admire, means more to me than any certification that can be given to me by any "officially sanctioning body", and yes, I'll include my Bachelors Degree in that list of "inferior certifications".

While a "certification" may carry someone into a second round interview, I will much frankly prefer to see my fellow Miag-do ka, BBST instructors and Weekend Testing facilitators on any project I would hope to lead and own. Why? Because I already know what they can do. I've seen it multiple times. In a dark alley situation,  I already know they can fight, and I also know they won't run away :)!