September 30, 2013

Mind Maps as a Tool for Testing

A lot of people have written about mind maps and testing recently because the tools for working with and sharing them have gotten so much better. There are some really smart, excellent testers out there who use mind maps in their daily workflow, and I've been trying to add them to mine since I got back from CAST 2013. This week I'll use them any time I would otherwise have written anything down.

For example:

  • My activity log for the day, complete with bugs and investigations, will be a mind map.
  • Testing plans
  • Exploratory charters and the notes/results from them
  • Feature/bug/product outlines will also be done up this way.

I hope to create four or more mind maps a day for the next five days. It should be an interesting experience.

Read on for some drier text on what mind maps are and how I will be using them.

September 27, 2013

Random Data Tool: Wrap up

A week of using a random data tool has not shown the amazing benefit I hoped it would. This might have to do with the tool that I chose to use, and it might be that I wasn't really using the tool the way it was designed.

I prefer to use self-validating data as often as I can, and thus random data isn't always useful. When it was useful, I used it, but I ran into a few blocks. The service that I was using didn't provide all the types that I was wanting (valid US phone numbers, multiple formats), or didn't provide enough variation to suit me for the data types it did have (like names, where it only used common English names). I was hoping that this tool would help drive me into some of the corners of the data set, but all it really did was keep me in the center of the path with different values that didn't diverge far from the norm.
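To illustrate what I mean by self-validating data, here's a rough sketch of my own (not any particular tool's API): the test value itself carries enough information to check, later, that it landed where it should. The `test+` prefix, the `example.com` domain, and the helper names below are all illustrative assumptions.

```python
import hashlib

def self_validating_email(test_id: str) -> str:
    """Build an email address that embeds the test case ID and a short
    checksum, so any account found later can be traced back and verified."""
    digest = hashlib.sha1(test_id.encode()).hexdigest()[:6]
    return f"test+{test_id}-{digest}@example.com"

def is_valid(email: str) -> bool:
    """Recompute the checksum embedded in the address and compare it."""
    local = email.split("@")[0]
    if not local.startswith("test+"):
        return False
    test_id, _, digest = local[len("test+"):].rpartition("-")
    return hashlib.sha1(test_id.encode()).hexdigest()[:6] == digest
```

With data like this, a record that shows up in the wrong table, or gets mangled along the way, announces itself: the checksum no longer matches.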

September 24, 2013

Random Data tool

I was originally going to write about an API testing heuristic that I stumbled upon recently for this week's blog series, but I'm not planning on doing any API testing this week. So instead I'm going to write about a new tool/website that I recently discovered for generating random data of various types. TestSpicer provides a set of POST/GET API calls that will generate "random" data for you. Need a user's name for a test account? Call the name endpoint. Need an image? There's a call for that. In fact there are currently 19 different endpoints, many of which I won't use, but this week I'll try a few and see if they help me find a bug by creating accounts I wouldn't have created on my own.
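I can't reproduce TestSpicer's actual endpoints here, so here's a rough local sketch of the kind of thing such a service hands back. The name lists and phone formats below are my own stand-ins, not the service's data:

```python
import random

# Illustrative seed data, not what any real service uses.
FIRST = ["Ada", "Grace", "Alan", "Edsger", "Barbara"]
LAST = ["Lovelace", "Hopper", "Turing", "Dijkstra", "Liskov"]

def random_name(rng: random.Random) -> str:
    """Pick a random 'Firstname Lastname' pair for a test account."""
    return f"{rng.choice(FIRST)} {rng.choice(LAST)}"

def random_us_phone(rng: random.Random) -> str:
    """Generate a plausible US phone number in one of several formats.
    NANP rules: area code and exchange can't start with 0 or 1."""
    area = rng.randint(200, 999)
    exchange = rng.randint(200, 999)
    line = rng.randint(0, 9999)
    fmt = rng.choice(["({}) {}-{:04d}", "{}-{}-{:04d}", "{}.{}.{:04d}"])
    return fmt.format(area, exchange, line)
```

Passing in a seeded `random.Random` is deliberate: if a randomly generated account exposes a bug, the seed lets you regenerate exactly the same data to replicate it.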

September 23, 2013

RIMGEA: Wrap-up

Using RIMGEA for the past week has been a great exercise. It certainly improved my bug reports. I didn't get to the point where all my bug reports fully followed RIMGEA, or where the bug fixes that I reviewed were fully explored using RIMGEA, but I did see an improvement in my bug reports and bug exploration.

Let's look at these two sides of how I used RIMGEA and the challenges that caused me to shortcut using it fully.

RIMGEA as a bug reporting Heuristic:
This is the intended use for this heuristic, and it works well here. If you are rushed for time, or have a large number of bugs to report, it's unlikely that you will use all of it. For some bugs I didn't get past the R(eplicate); once I had my replication steps, I could write a convincing bug report. However, this past week I tried to get further with every bug, and my reports were definitely better. Ruling out all the possible variables for the I(solate) was hard, and I often overlooked some, but looking for them certainly helped my testing in other areas. I think I'll do this exercise every few weeks until I can consistently get into the last few letters of the heuristic.

RIMGEA as a bug verification Heuristic:
This was a bit of an off-label use, but I thought that if the bug hadn't had this done to it when it was written, the fix might not cover all of the incarnations of the bug either. It worked for me. Often, replicating the bug in the pre-patched system showed that the bug had been fixed earlier and the new patch wasn't actually needed, or that there were other paths into the bug that had not been examined and fixed.

All in all, this was a productive exercise. I'll do this one again, and share it with my team.

September 18, 2013

RIMGEA: Mid week

This one has been a little easier to do. I'm not trying to use it outside of its scope, and that has helped. Additionally, I know that a well-written bug report helps the developers find and fix the bug, but sometimes I really don't need as much information on a bug as RIMGEA would have me provide. Nor do I have the time to do all of RIMGEA for the six bugs that I need to file before I leave work for the day. That said, having a note on my desk reminding me to use it has increased the quality of my bug reports, just not all the way.

One change that I didn't expect was in how I've been verifying bug fixes. Many of these bugs come in from external sources or non-testers. Many get to the developers before a tester has had a look at them. Thus, when they do get to me, they rarely have replication steps, let alone any of the other items outlined by RIMGEA. Filling in a very quick RIMGEA has increased the depth at which I verify these bug fixes to almost a full exploratory testing session, making the verifications more thorough. It's caught a minor issue that I might not have caught otherwise.

So far, so good with this heuristic. More on Friday!

September 16, 2013

RIMGEA: A Bug reporting Heuristic

Last week I looked at a heuristic that helped me with testing, and planning my testing. It even helped me with isolating a bug and replicating it. If that was all that I did, it might be the only heuristic that I need in my daily work. (Un?)Fortunately, I also find bugs, and bug reports are one of the main forms of written communication that exist for me and my team. Thus, a well-written bug report that has all of the elements can be at least as important as the bug we are reporting.

This week I'm going to concentrate on writing better bug reports by using the RIMGEA heuristic. This heuristic, developed by Cem Kaner, gives the user a series of steps to go through to ensure that a bug report has enough information that others can correctly evaluate and prioritize the bug you are reporting.

R, Replicate: Ideally, all bugs that we report are replicable. We might not understand all the variables involved in the bug yet, but in order to move on in this heuristic the bug needs to have a series of steps that will replicate it. If it doesn't, we need to spend some time coming up with some. If these steps only sometimes replicate the bug, try to refine them so that they consistently replicate it, or at least describe how often they do. Sometimes how often a bug can be replicated using the same steps is an important piece of information.

I, Isolate: Any given set of replication steps will include any number of details that just happen to be that way, but that the bug doesn't actually need. Identify these elements so that we know what is really involved in the bug. This step will help with the next two.

M, Maximize: This is to find out just how bad this could be if everything went exactly wrong. The version of the bug that you have discovered, or have been asked to improve the report for, is likely not the worst-case version. Since you have isolated the bug, you should be able to determine how to make the effect greater.

G, Generalize: This goes part and parcel with Maximize and Isolate. Since we know what the isolated variables are and how to maximize the effect, the other side of this coin is finding the easiest version of the steps, or the most common path a user could follow to trigger this bug.

E, Externalize: This one is important not to the bug, but to your ability to inform others of the effect of the bug. To externalize, you describe the impact of the bug on the users, their data, and their ability to complete the task that they are trying to complete with this application.

And blandize: OK, they stretched it on this one to make it an A, but it is an important aspect of any bug report: finding a way to report your bug without making it personal. They could have used O for "objective" there too, and it might have been clearer to the users.

Interestingly, I filed a bug report for work while writing this blog post. My bug reports will have to change a lot this week. It's not that my bug reports are incomplete; I feel that I write them to the appropriate level of detail for whoever will need to read them. However, I didn't do some of these steps. It should be interesting.

September 13, 2013

SFDIPOT: end of week Review

Well, it's been a week of concentrating on using SFDIPOT in my testing activities. So did it make me a superstar tester? Did it show me every bug? Was my team amazed at the testing plans I wrote? Could it cure cancer and bring about world peace? These are all (well, mostly) important questions that I have to answer when I ask myself the questions that matter most: Will I keep using it? Will I encourage others to use it?

Did it make me a superstar tester? No, but it did have some excellent benefits. It reduced my testing time by giving me better direction when I set up my testing plans. It gave me better insight into bug investigations. It gave me new ideas of areas to test to add to my already too long list of areas to investigate. I wasn't really expecting a single change to make me a superstar; I was looking to improve and learn. That part of my mission was a success.

Did it show me every bug? No, but I was able to find some sooner than I would have otherwise. I don't think there is any tool or technique out there that will find every bug; the search space is impossibly large. So a better metric might be: did it help me find more bugs, or find the bugs faster? In my experience from this past week, yes it did.

Was my team amazed with my new testing plans? Not really; for me, it didn't change much. I can see it creating better testing plans for some, but in my environment, working on an agile development team, I don't often share my testing plans. My testing plans are usually something I write for myself so that I don't forget anything. Being the single tester on a team has led to some bad habits, but I'll deal with those another week.

All in all, it's been a good week. I enjoyed this emphasis on SFDIPOT and the extra thought it prompted in my testing activities. Though I didn't really use it as much as I meant to, I'll keep using and teaching it to the team of testers where I work, and I'll keep you updated if I learn anything new about it on that journey.

September 11, 2013

Mid-Week with SFDIPOT

It's been two days of my renewed attempt at learning SFDIPOT by using it for all of my testing activities. So far I have used it in conjunction with a product tour, a bug verification/investigation, and the test coverage planning for a single feature. I've been surprised to find that it helped for all three. The bug investigation was the one that surprised me; I thought I might have to heavily modify SFDIPOT to use it, but it has started to flow nicely and more comfortably. Here are some quick thoughts from early in the week.

Understanding a technique takes time, and often, trying to explain it to others. I think I learned the most about SFDIPOT while writing Monday's blog post. Michael Bolton and James Bach were both generous with their time and knowledge; they reviewed my post before it went live and discussed with me some of the places where I wasn't truly relaying what they meant. There are still a few places in my post that aren't completely what they mean, but those are in areas where I'm still not sure what they mean.

No item is too small for this technique if it is stumping you. Yesterday I had a bug report come in from the field. The replication steps were simple, except that they only exposed the bug at the customer site. After spending some time trying to figure out what the difference was, I pulled out SFDIPOT and took the time to apply a structured approach to my thinking about the investigation. Very quickly I identified the missing variable and was able to get refined replication steps that reliably reproduced the bug.

The last thing this exercise has shown me is that the structured approach helps me even when I'm working with an old existing feature that I've tested many times. Today I was able to quickly identify a place where testability could be improved while I was doing a code review and using SFDIPOT to guide my thoughts. I haven't even thought about the test coverage yet, but I already know the feature will be easier to test because I was able to fully think it through.

It's going well so far; I've even had some developers asking about it! I'll be back on Friday with the round-up!

September 7, 2013

San Francisco Depot

One of the things that I identified last week in my excursion into PCOs was that while I'm familiar with SFDIPOT, I haven't used it enough for it to come naturally. Thus my plan for this week will be to use it in every testing situation I can, even some where it doesn't make sense at the outset. In order to really know the ins and outs of a technique, using it improperly can show you what its limitations are and why, so that you can get the most out of the technique when you use it.

Reading list:

September 6, 2013

Change of Plans, PCOs and Lean Coffee

I was going to blog today about my week of using PCOs, but it turns out that a 4-day week is not enough time to really get a feel for this technique. I did share it with my agile team; they were confused. I shared it with my testing team and they were very excited. I'll come back and write the third part of the PCO experience next week at some point.

However, that doesn't mean I don't have something to talk about today! Today, let's talk Lean Coffee. I first experienced Lean Coffee at CAST 2013. It was a great experience there, and I've been looking for ways to bring it back to Saskatoon ever since. Yesterday I did that in a small way.

The test team that I lead has a weekly meeting where we get together to discuss various testing-related topics that have come up in the past week. This has taken a few different forms, from learning sessions, where someone teaches about a specific topic, to highs-and-lows sessions, where we talk about the best and worst of the past week. While these meetings were achieving my goal of getting the team together and helping solve their problems, I didn't feel the team was engaged. One member has been consistently asking to skip, and if I was away the meeting didn't happen. I might have a solution, for my team at least.

Yesterday, instead of starting the meeting as I usually would by polling the room for impediments from the past week, I introduced Lean Coffee. When I was done and they set to it, every person had put forward at least two topic ideas. While we didn't have a lot of time, the dot voting did direct us to the important items quickly. The few topic cards that were left when we finished were filed as either: 1) let's do this as a one-on-one later, 2) this can wait until next week, or 3) that other topic card covered what I wanted from this one. It was a great experience, and I felt my team was more engaged in the meeting than it has been in months.

If you haven't been to a Lean Coffee, you should try it. It will change the way you think about small meetings. It works really well for short meetings of people with a related interest, spurring on great conversations that might not otherwise happen because you don't know that others are thinking the same things. I'm hoping to spread Lean Coffee further at my workplace and even around Saskatoon.

For more reading about Lean Coffee, see here and here.

September 4, 2013

PCOs, two days in.....

Well, not really two days in. It's a short week here in Canada, with Monday being a holiday, but I did review some of the reading that I said I would read before starting on this week's subject. It gave me good insights into what I would be trying to add to my existing processes.

So far the exercise has led to some interesting insights into the product that I work with, and it certainly will inform my testing and test planning. I have noted, however, that as I usually work mostly on features and feature integration, a lot of this felt like extra effort. I'm sure it's not, but time will tell.

Additionally, as I don't often use the SFDIPOT heuristic, it felt clumsy. I can see that with practice it will become a very useful tool to help my thought process, but I spent too much time trying to figure out what went under each of the headings.

Come on by again on Friday for my summary of using PCOs for an entire week.

September 2, 2013

Monday: Product Coverage Outlines

A product coverage outline is not my invention, or my term. I picked up the term from Paul Holland at CAST 2013, but the concept is something that most of us are already doing. However, without formalizing the activity we might not be doing it as well as we could be. I currently do something like this as a mental exercise, but I don't think I do it very well that way. I think I miss things because I don't write it down or review it. This week I'll be trying out a more formal written Product Coverage Outline.

A Product Coverage Outline (PCO) is a document (it doesn't matter what kind) that you develop to assist you in thinking about the product that you will test. It is important to note at this point that this document is about the product, not the testing, and it is not the documentation. Paul even suggested that you not read the documentation until late in the process of creating this document so that it doesn't limit your thinking. There are many ways of creating this document, and all of them are OK, as long as it is something you and/or your team developed to help illuminate the product. I've seen great mind maps that expressed a product coverage outline, Excel documents, and Word docs.

Some of these are easier to read and work with than others, but all did the job. This is because the job is not just the document; it's the process of creating the document that stirs our thought process and helps us ensure that we have thought about as much as possible. That way, when we go to create our testing plan, we can consciously choose what is and isn't being covered, instead of some areas going uncovered because we didn't think of them.

That's a lot of talking about PCOs without saying much. What is the goal? The goal is to create a document that you could give to any tester on your team to give them a concise overview of the product, one that will help them decide how and what to test in the time that they have. Combined with a heuristic approach to exploring the product, you should be able to quickly create a document that achieves these goals.

For this week I'll be using the SFDIPOT heuristic to help me ensure that my PCOs are complete. If you've never heard of SFDIPOT, check this out. My PCO will be done with a mind map and have first-level nodes for each of the terms of SFDIPOT. That isn't required for a PCO, but I think it will help.

OK, how about an example? Since I obviously can't show anything from my job, let's take the classic triangle problem. I looked around and found a triangle calculator here. If I use the SFDIPOT heuristic and a mind map, I can quickly create this PCO:

I spent maybe 15 minutes doing this PCO. From it, I should be able to come up with a reasonable testing plan.
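If you haven't met the triangle problem before, the system under test is tiny: take three side lengths and classify the triangle. A minimal sketch of my own (not the calculator I linked) shows where the PCO's nodes come from, since each branch is a test idea the outline should surface:

```python
def classify_triangle(a: float, b: float, c: float) -> str:
    """Classify three side lengths. Each branch below maps onto test ideas
    a PCO should surface: invalid data, boundaries, and each valid class."""
    sides = sorted((a, b, c))
    # Non-positive sides, and "flat" triangles where a + b == c, are invalid.
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"
```

For example, `classify_triangle(3, 4, 5)` is scalene, while `classify_triangle(1, 2, 3)` is degenerate and not a triangle at all.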

See ya on Wednesday!

September 1, 2013

My Plan

OK, I've heard the biggest problem people have with writing and blogging is not finding a topic to write about; more precisely, they run out of things to write about because their topic is either too broad, so they hit writer's block, or too narrow, so they exhaust it. I've heard that the best solution to this is to find a topic that you are passionate about and to have a plan.

Well, since you are reading this, you can tell my subject matter. I'm obviously here to talk about software testing, but it's more than that. I'm going to do a three-part exploration of my subject matter. My regular blog pattern will be a Monday/Wednesday/Friday pattern of experience reports. On Mondays I'll lay out the new technique that I'm going to be exploring that week. On Wednesdays the blog will be shorter, as it will be a progress report on how it's going so far. Fridays will recap the week and explore what I liked about the new technique, what problems I had, and solutions if I found them.

If you'd like to join me on my journey, I'll try to announce the topic for the next week on the Saturday or Sunday so that you have time to do some reading. This week's topic will be Product Coverage Outlines.

Reading List: