October 8, 2013

Off Week

I was planning to blog about API testing this week, but it's turning out to be a really busy one. The local testing user group meets on Tuesday and Saskatoon's first Lean Coffee is this Thursday, and I currently lead both groups. So I'll be back next week with API testing. If you have any heuristics or tools that you use for API testing, please feel free to put them in the comments on this post and I'll include them in next week's series.

October 6, 2013

MindMaps In Testing: Wrap up

So I spent a week without using a notebook, a test management tool or any notes except for mind maps. It was an interesting week. I think I actually took better notes, missed fewer dark corners of the applications that I tested and was able to share my work with others more easily. A lot of this was made easier by the use of colors on the nodes in my mind maps. I used bold colors for things that I needed to come back to later, and light variants once I had come back and done whatever action was needed.


Let's take a look at what my process looked like. Every day I reviewed my maps from earlier in the week to see if there was anything I needed to pull into today's map as an item I planned to do. So before my day even started I had a map with a center node that was the day's date, surrounded by any tasks that carried over from the previous days and the items I planned to do that day. If there was a task that carried over from the previous day, I would copy the node over with everything that was under it so that I had my notes from the earlier work to guide my continuing work. I found that my exploratory testing was very much assisted by this exposure to my previous results. I was able to make links across sessions and generate new charters with better insight than I had when I used a notebook for this same process.

As the day progressed, the first-level nodes that I was working on would move through several states denoted by color: white for upcoming tasks, yellow for the currently active task, red/purple for tasks that needed to be revisited, and green for completed tasks. If that was all that I had done, this would not be as great a tool as it ended up being. I also made notes on the tasks, and did my test planning under the testing tasks. Since I used an electronic MindMap tool, this allowed my test plans to be fluid and dynamic based on my results. Results from the tests were also recorded in the MindMap, as were all of the defects and other issues I noted. Again these were color coded, which let me make notes about them without getting distracted by opening my bug reporting tool and spending time reporting right then; instead I made enough notes that I could come back later to do a RIMGEA analysis and write a better report.


I found this technique very useful. My entire day was in one place, I could break pieces out to share and collaborate with others,  and I missed less. I will continue to use MindMaps in this way, and will continue to look for new ways to use MindMaps in my work.

October 4, 2013

Schedule Change

I've realized that my blog schedule isn't working very well. Writing Monday morning seems OK, but I really seem to be writing this post Sunday night, so I'm going to try to release the Monday post either Sunday night or early Monday. The Wednesday post doesn't seem to have enough exposure to the tool or technique to have a lot of insight, so I'm going to move this post to Wednesday night. Same goes for the wrap-up post: by writing it for Friday morning, I'm losing a day of exposure to the topic before I write, so this will be Friday night/Saturday morning depending on my availability.

Tune in tomorrow for my MindMaps Wrap-up.

October 2, 2013

SFDIPOT.... Huh?

One thing last week's exercise showed me was that while I'm familiar with SFDIPOT, I haven't used it enough to be comfortable with it. So let's dive in. I'll explain what I know (or think I know) about using it and how I plan to use it this week. If I make mistakes, I fully expect the community to help out and show me where I'm using SFDIPOT wrong, and I'll pass that on to you. The original blog post about this is by James Bach, on his blog here. He's mentioned that he developed this heuristic from surveying six or seven computer science textbooks to lay the framework for his thoughts on application modeling.

As I understand it, SFDIPOT is a heuristic to use while doing a software tour to ensure that you have a complete understanding of the application. This is roughly how I use it. I actually do several tours of the application/feature in question, all very quick and high level, while thinking about each of these. I don't do them in the order of the heuristic, simply because I don't really think that way.

So what does it mean? 

S: Structure, how is the application put together/wired up? These are things like: uses MySQL as a database, has a separate authentication server, uses memcache, uses Ajax. Things that can lead you down other vectors that you might not have thought of while doing a simple product tour. You might need to get help/information from the developers and/or documentation for this piece, so I recommend doing it near the end of the heuristic.

F: Function, what are the functions of the feature/application? What are the individual things that it does? This can be an (organized) listing of all of the actions that can be done using the application. These are smaller than the user stories; these are things like "Mark favorite" and "Join mailing list". Stories will come later and will invoke many of these functions.

D: Data, what are the data types involved in the application? Data types can lead to interesting test cases. Have a look at Elisabeth Hendrickson's cheat sheet for some great ideas on how to test your data types once you have identified them (there's a small illustrative sketch of what I mean after the last letter below).

I: Interfaces, one of the newer letters in this heuristic (and thus harder to get information on). These are the interfaces through which the application interacts with the outside world. Does it have a file-based interface anywhere (import/export functions)? Does it store intermediate files to local storage anywhere? Is an API provided? Does one exist that isn't provided? And yes, the GUI is an interface: does it interact in the ways that a user would expect that kind of interface to interact?

P: Platforms, what platforms does this application run on? Is it a Java app? Does the version of Java matter? Is it a web app? Does the browser version or vendor change the performance or interactions?

O: Operations, what are the operations that the application is trying to accomplish? What is the purpose of this application? What mistakes can happen when trying to do this operation? I've been asked if these are your user stories and test cases, and the answer for me is kind of. User stories are definitely things that are thought of here, but not just the stories; also how those stories can go wrong and/or get interrupted. Test cases, however, are part of your test coverage. I'm using this heuristic to create a PCO which will in turn inform the creation of my test coverage.

T: Timings, another recent addition to the heuristic. What are the time-based things you will have to think about with this application? Does your application have to run at a specific time? What will happen if it runs on Feb 29th? Or runs over New Year's Eve? In 1999? What happens when two actions are done too quickly? Too slowly? A recent conversation also pointed out to me that these can be environmental. Where is the application? What is happening at the same time? Is it rush hour at that time at that location?

As you can see, if you run through all of these, you will have a very extensive list of things to think about while designing your test coverage. As I said in the beginning, this heuristic, while seemingly simple, does take some practice to use, and that is my goal for the week: to use this heuristic and get better with it. My plan is to use it in my everyday testing, for every feature and bug report from the field that is passed on to me. I realize that this might not align with its best uses, but I'm game to try; maybe I'll figure out something new about the heuristic, or about myself.
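As promised under D above, here's a rough sketch of the kind of values I might generate once I know a field is a free-text string. The specific values and the size limit are my own illustrative picks, not taken from the cheat sheet:

```python
# Illustrative only: a handful of "interesting" values for a free-text field.
# The values and the 255-character limit are example assumptions, not rules.
def interesting_strings(max_length=255):
    return [
        "",                           # empty input
        "   ",                        # whitespace only
        "a" * max_length,             # exactly at the documented limit
        "a" * (max_length + 1),       # one past the limit
        "O'Brien",                    # embedded quote
        "<script>alert(1)</script>",  # markup/script injection attempt
        "Ωmega Ünïcode 测试",          # non-ASCII characters
        "1; DROP TABLE users;--",     # SQL-ish injection attempt
    ]

for value in interesting_strings():
    print(repr(value))
```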

I'll let you know how it's going on Wednesday!

If I have anything wrong or have misinterpreted anything in this, please let me know in the comments below.

MindMaps in Testing: Mid-week

A few days into increased use of MindMaps in my daily routine, and I have seen some interesting results. Like any new technique, the novelty of doing something a new way can drive higher short-term adoption, and I have been able to consistently use my day-notes mind map to track my day. I would normally try to write this in a log book, but I tend to forget. The ability to color code the nodes, however, has made this more likely to continue after the "new way" interest has worn off.

So what have I done with mind maps so far? Two of my common tasks have been moved over to mind maps: test planning for newly completed features, and my daily activity log. I have in fact combined these two into a single map, one for each day, where testing won't span more than a day. Where testing will be a multi-day or multi-tester effort, I've split it out into a separate map. The daily activity map has helped me track my day and make notes on the activities that I do. I've marked with color finished tasks, blocked tasks, bugs found, future investigations and test results. The color enables me to quickly look and see what I need to remember. I used to do this with my notebook, but it's been hard to find the tasks and future investigations later when I go looking.

The tool that I've been using has a mode where it will auto-color the nodes for you based on a color change at the leaf level, to indicate that you have completed a task or been blocked if you are using your MindMap as a plan/to-do. I don't usually write detailed plans; I write down a list of areas I want to test, and some notes under each of those about the why or how of that testing area. This feature of the MindMap software that I'm using allows me to quickly look back and see what areas I have left to cover in upcoming charters, and update based on the charters that I've completed.


All in all, MindMaps have been a good addition to my work. I haven't had to share the information in them with others yet, but I expect that to be helpful also.

September 30, 2013

Mind Maps as a Tool for Testing

A lot of people have written about mind maps and testing recently because the tools for working with and sharing them have gotten so much better. There are really smart, excellent testers out there who use mind maps in their daily workflow, and I've been trying to add them to mine since I got back from CAST 2013. This week I'll use them any time I would have written anything down.

For example:

  • My activity log for the day, complete with bugs and investigations, will be a mind map.
  • Testing Plans
  • Exploratory Charters and the notes/results from them
  • Feature/Bug/Product outlines will also be done up this way. 
I hope to create 4 or more Mind maps a day for the next 5 days. It should be an interesting experience. 


Read on for some drier text on what mind maps are and how I will be doing them.

September 27, 2013

Random Data Tool: Wrap up

A week of using a random data tool has not shown the amazing benefit I hoped it would. This might have to do with the tool that I chose to use, and it might be that I wasn't really using the tool the way it was designed.

I prefer to use self-validating data as often as I can, and thus random data isn't always useful. When it was useful, I used it, but I ran into a few blocks. The service that I was using didn't provide all the types that I wanted (valid US phone numbers, multiple formats), or didn't provide enough variation to suit me for the data types it did have (like names, where it only used common English names). I was hoping that this tool would help drive me into some of the corners of the data set, but all it really did was keep me in the center of the path, with different values that didn't diverge far from the norm.
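As an aside, here's a rough sketch of what I mean by self-validating data. This is my own toy example, not anything from the tool above: the value describes what it should look like, so a truncation or mangling bug is obvious on sight.

```python
# Toy example of self-validating test data: each value states its own
# intended length, so if a field truncates or pads it, the mismatch is
# visible at a glance (or checkable with a single assert).
def self_validating_string(length):
    prefix = f"len{length}-"
    return prefix + "x" * (length - len(prefix))

value = self_validating_string(40)
print(value)             # len40-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
assert len(value) == 40  # the data carries its own expected length
```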

September 24, 2013

Random Data tool

I was originally going to write about an API testing heuristic that I stumbled upon recently for this week's blog series, but I'm not planning on doing any API testing this week. So instead I'm going to write about a new tool/website that I recently discovered for generating random data of various types. TestSpicer provides a set of POST/GET API calls that will generate "random" data for you. Need a user name for a test account? Call the name endpoint. Need an image? There's a call for that. In fact there are currently 19 different endpoints, many of which I won't use, but this week I'll try a few and see if they help me find a bug by creating an account that I wouldn't have created otherwise.
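To give a flavour of how I plan to wire it into my testing, here's a minimal sketch using Python's requests library. The base URL, endpoint name and response field are placeholders I've made up for illustration; the real calls will come from the TestSpicer documentation.

```python
import requests

# Placeholder base URL, endpoint and response field, for illustration only -
# check the TestSpicer docs for the real API.
BASE_URL = "https://example.invalid/api"

def random_name():
    response = requests.get(f"{BASE_URL}/name", timeout=10)
    response.raise_for_status()
    return response.json()["name"]  # assumed response field

# Use the generated name to create a throwaway test account.
test_user = random_name()
print(f"Creating test account for: {test_user}")
```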

September 23, 2013

RIMGEA: Wrap-up

Using RIMGEA for the past week has been a great exercise. It certainly improved my bug reports. I didn't get to the point of all my bug reports being fully RIMGEA, or the bug fixes that I reviewed being fully explored using RIMGEA, but I did see an improvement in my bug reports and bug exploration.

Let's look at these two sides of how I used RIMGEA and the challenges that caused me to shortcut using it fully.

RIMGEA as a bug reporting Heuristic:
This is the intended use for this heuristic, and it works well here. If you are rushed for time, or have a large number of bugs to report, it's unlikely that you will use all of it. For some bugs I didn't get past the R(eplicate). Once I got my replication steps, I could write a convincing bug report. However, this past week I tried to get further with every bug. My reports were definitely better. Reducing out all the possible variables for the I(solate) step was hard, and often I overlooked some, but looking for them certainly helped my testing in other areas. I think I'll do this exercise every few weeks until I can consistently get into the last few letters of the heuristic.

RIMGEA as a bug verification Heuristic:
This was a bit of an off-label use, but I thought that if the bug hadn't had this done to it when it was written, the fix might not cover all of the incarnations of the bug either. It worked for me. Often, replicating the bug in the pre-patched system showed that the bug had been fixed earlier and the new patch wasn't actually needed, or that there were other paths into the bug that had not been examined and fixed.


All in all, this was a productive exercise. I'll do this one again, and share it with my team.

September 18, 2013

RIMGEA: Mid week

This one has been a little easier to do. I'm not trying to use it outside of its scope, and that has helped. Additionally, I know that a well-written bug report helps the developers find and fix the bug, but sometimes I really don't need as much information on a bug as RIMGEA would have me provide. Nor do I always have the time to do all of RIMGEA for the six bugs that I need to file before I leave work for the day. That said, having a note on my desk reminding me to use it has increased the quality of my bug reports, just not all the way.

One change that I didn't expect was the change in how I've been verifying bug fixes. Many of these bugs come in from external sources or non-testers. Many get to the developers before a tester has had a look at them. Thus when they do get to me, they rarely have replication steps, let alone any of the other items outlined by RIMGEA. Filling in a very quick RIMGEA has increased the depth at which I verify these bug fixes to almost a full exploratory testing session, making the bug fix verifications more thorough. It's caught a minor issue that I might not have caught otherwise.


So far so good with this heuristic. More on Friday!


September 16, 2013

RIMGEA: A Bug reporting Heuristic

Last week I looked at a heuristic that helped me with testing, and with planning my testing. It even helped me with isolating a bug and replicating it. If that were all I did, it might be the only heuristic I need in my daily work. (Un?)Fortunately I also find bugs, and bug reports are one of the main forms of written communication that exist for me and my team. Thus, a well-written bug report that has all of the elements can be important, or at least as important as the bug we are reporting.

This week I'm going to concentrate on writing better bug reports by using the RIMGEA heuristic. This heuristic was developed by Cem Kaner and gives the user a series of steps to go through to ensure that a bug report has enough information that others can correctly evaluate and prioritize the bug you are reporting.

R, Replicate: Ideally all bugs that we report are replicable. We might not understand all the variables involved in the bug yet, but in order to move on in this heuristic the bug needs to have a series of steps that will replicate it. If it doesn't, we need to spend some time coming up with some steps. If these steps only sometimes replicate the bug, try to refine them so that they consistently replicate the bug, or at least describe how often they do. Sometimes how often a bug can be replicated using the same steps is an important piece of information.

I, Isolate: Any given set of replication steps will have any number of things that just happen to be that way but that the bug doesn't actually need. Identify these elements so that we know what is really involved in the bug. This step will help with the next two.

M, Maximize: This is to find out just how bad this could be if everything went exactly wrong.  The version of the bug that you have discovered or have been asked to improve the report for is not likely the worst case version. Since you have isolated the bug you should be able to determine how to make the effect greater.

G, Generalize: This goes part and parcel with Maximize and Isolate. Since we know what the isolation is and how to maximize it, the other side of this coin is: what is the easiest version of the steps, or what is the most common path a user could follow to trigger this bug?

E, Externalize: This one is important, not to the bug, but to your ability to inform others of the effect of the bug. To externalize, you describe the impact of the bug on the users, their data and their ability to complete the task that they are trying to complete with this application.

A, And blandize: OK, they stretched it on this one to make it an A, but it is an important aspect of any bug report: find a way to report your bug without making it personal. They could have used O for Objective instead, and it might have made it clearer to users.

Interestingly, I filed a bug report for work while writing this blog post. My bug reports will have to change a lot this week. It's not that my bug reports are incomplete; I feel that I write them to the appropriate level of detail for whoever will need to read them. However, I didn't do some of these steps. It should be interesting.

September 13, 2013

SFDIPOT: end of week Review

Well, it's been a week of concentrating on using SFDIPOT in my testing activities. So did it make me a superstar tester? Did it show me every bug? Was my team amazed at the testing plans I wrote? Could it cure cancer and bring about world peace? These are all (well, mostly) important questions that I have to answer when I ask myself the most important questions: Will I keep using it? Will I encourage others to use it?

Did it make me a superstar tester? No, but it did have some excellent benefits. It reduced my testing time by giving me better direction when I set up my testing plans. It gave me better insight into bug investigations. It gave me new ideas of areas to test to add to my already too long list of areas to investigate. I wasn't really looking for a single change that would make me a superstar; I was looking to improve and learn. That part of my mission was a success.

Did it show me every bug? No, but I was able to find some sooner than I would have otherwise. I don't think there is any tool or technique out there that will find every bug; the search space is impossibly large. So a better metric might be: did it help me find more bugs, or find the bugs faster? In my experience from this past week, yes it did.

Was my team amazed by my new testing plans? Not really; for me, it didn't change much. I can see it creating better testing plans for some, but in my environment, working in an agile development team, I don't often share my testing plans. My testing plans are usually something I write for myself so that I don't forget anything. Being the single tester on a team has led to some bad habits, but I'll deal with those another week.

In all it's been a good week. I enjoyed this emphasis on SFDIPOT, and the extra thought it prompted in my testing activities. I didn't really use it as much as I meant to, but I'll keep using and teaching it to the team of testers where I work, and I'll keep you updated if I learn anything new about it on that journey.

September 11, 2013

Mid-Week with SFDIPOT

It's been two days of my renewed attempt at learning SFDIPOT by using it for all of my testing activities. So far I have used it in conjunction with a product tour, a bug verification/investigation, and the test coverage planning for a single feature. I've been surprised to find that it helped for all three. The bug investigation was the one that surprised me; I thought I might have to heavily modify SFDIPOT to use it, but it has started to flow nicely and more comfortably. Here are some quick thoughts from early in the week.


Understanding a technique takes time, and often means trying to explain it to others. I think I learned the most about SFDIPOT from writing Monday's blog post. Michael Bolton and James Bach were both generous with their time and knowledge; they reviewed my post before it went live and discussed with me some of the places where I wasn't truly relaying what they meant. There are still a few places in my post that are not completely what they mean, but they are in areas where I'm still not sure what they mean.

There's no item too small to use this technique on if it is stumping you. Yesterday I had a bug report come in from the field. The replication steps were simple, except they only exposed the bug at the customer site. After spending some time trying to figure out what the difference was, I pulled out SFDIPOT and took the time to apply a structured approach to thinking about my investigation. Very quickly I identified the missing variable and was able to get refined replication steps that reliably reproduced the bug.

The last thing this exercise has shown me is that the structured approach helps me even when I'm working with an old existing feature that I've tested many times. Today I was able to quickly identify a place where testability could be improved while I was doing a code review and using SFDIPOT to guide my thoughts. I haven't even thought of the test coverage yet, but I already know it will be easier to test because I was able to fully think about the feature.

It's going well so far; I've even had some developers asking about it! I'll be back on Friday with the round-up!

September 7, 2013

San Francisco Depot

One of the things that I identified last week in my excursion into PCOs was that while I'm familiar with SFDIPOT, I haven't used it enough for it to come naturally. Thus my plan for this week will be to use it in every testing situation I can, even some where it doesn't make sense at the outset. In order to really know the ins and outs of a technique, using it improperly can show you what the limitations are and why, so that you can get the most out of the technique when you use it.

Reading list:


September 6, 2013

Change of Plans, PCOs and Lean Coffee

I was going to blog today about my week of using PCOs, but it turns out that a 4-day week is not enough time to really get a feel for this technique. I did share it with my agile team; they were confused. I shared it with my testing team and they were very excited. I'll come back and write the third part of the PCO experience next week at some point.

However, that doesn't mean I don't have something to talk about today! Today, let's talk Lean Coffee. I first experienced Lean Coffee at CAST 2013. It was a great experience there and I've been looking for ways to bring it back to Saskatoon ever since. Yesterday I did that in a small way.

The test team that I lead has a weekly meeting where we get together to discuss various testing-related topics that have come up in the past week. This has taken a few different forms, from learning sessions, where someone teaches about a specific topic, to Highs and Lows sessions, where we talk about the best and worst of the past week. While these meetings were achieving my goal of getting the team together and helping solve their problems, I didn't feel the team was engaged. One member has been consistently asking to skip, and if I was away the meeting didn't happen. I might have a solution, for my team at least.

Yesterday, instead of starting the meeting as I usually would by polling the room for impediments from the past week, I introduced Lean Coffee. When I was done and they set to it, every person had put forward at least two topic ideas. While we didn't have a lot of time, the dot voting did direct us to the important items quickly. The few topic cards that were left when we finished were filed as either: 1) let's do this as a one-on-one later, 2) this can wait until next week, or 3) that other topic card covered what I wanted from this card. It was a great experience and I felt my team was more engaged in the meeting than it has been in months.

If you haven't been to a Lean Coffee, you should try it. It will change the way you think about small meetings. It works really well for short meetings of people with a shared interest, spurring great conversations that might not otherwise happen because you don't know that others are thinking the same things. I'm hoping to spread Lean Coffee further at my workplace and even around Saskatoon.

For more reading about lean coffee see here and here

September 4, 2013

PCOs, two days in.....

Well, not really two days in. It's a short week here in Canada with Monday being a holiday, but I did review some of the reading that I said I would do before starting on this week's subject. It gave me good insights into what I would be trying to add to my existing processes.

So far the exercise has led to some interesting insights into the product that I work with, and it certainly will inform my testing and test planning. I have noted, however, that as I usually work mostly on features and feature integration, a lot of this felt like extra effort. I'm sure it's not, but time will tell.

Additionally, as I don't often use the SFDIPOT heuristic, it felt clumsy. I can see that with practice it will become a very useful tool to help my thought process, but I spent too much time trying to figure out what went under each of the headings.

Come on by again on Friday for my summary of using PCOs for an entire week.

September 2, 2013

Monday: Product Coverage Outlines

A product coverage outline is not my invention, nor my term. I picked up the term from Paul Holland at CAST 2013, but the concept is something that most of us are already doing. However, without formalizing the activity we might not be doing it as well as we could be. I currently do something like this as a mental exercise, but I don't think I do it very well that way. I think I miss things because I don't write it down or review it. This week I'll be trying out a more formal written Product Coverage Outline.

A Product Coverage Outline (PCO) is a document (it doesn't matter what kind) that you develop to assist you in thinking about the product that you will test. It is important to note at this point that this document is about the product, not the testing, and it is not the documentation. Paul even suggested that you do not read the documentation until late in the process of creating this document so that it doesn't limit your thinking. There are many ways of creating this document and all of them are OK, as long as it is something you and/or your team developed to help illuminate the product. I've seen great mind maps that expressed a product coverage outline, as well as Excel documents and Word docs. Some of these are easier to read and work with than others, but all did the job. This is because the job is not just the document; it's the process of creating the document that stirs our thought process and helps us ensure that we have thought about as much as possible. That way, when we go to create our testing plan, we can consciously choose what's being covered and what's not, instead of some areas not being covered because we didn't think of them.

That's a lot of talking about PCOs without saying much. What is the goal? The goal is to create a document that you could give to any tester on your team to give them a concise overview of the product, one that will help them decide how and what to test in the time that they have. Combined with a heuristic approach to exploring the product, you should be able to quickly create a document that achieves these goals.

For this week I'll be using the SFDIPOT heuristic to help me ensure that my PCOs are complete. If you've never heard of SFDIPOT, check this out. My PCO will be done with a mind map and have first-level nodes for each of the terms of SFDIPOT; that isn't required for a PCO, but I think it will help.

OK, how about an example? Since I obviously can't show anything from my job, let's take the classic triangle problem. I looked around and found a triangle calculator here. If I use the SFDIPOT heuristic and a mind map I can quickly create this PCO:


I spent maybe 15 minutes doing this PCO; from it I should be able to come up with a reasonable testing plan.
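For anyone who hasn't met the triangle problem before, the function under test is roughly the classic exercise sketched below. This is my own minimal version, not the calculator linked above, but it shows how the PCO nodes map onto something concrete: the function is "classify a triangle", the data is three side lengths, and so on.

```python
# The classic triangle exercise, sketched minimally. Inputs are three side
# lengths; the output is the kind of triangle (or "not a triangle").
def classify_triangle(a, b, c):
    sides = sorted([a, b, c])
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

print(classify_triangle(3, 3, 3))  # equilateral
print(classify_triangle(3, 3, 5))  # isosceles
print(classify_triangle(3, 4, 5))  # scalene
print(classify_triangle(1, 2, 3))  # not a triangle (degenerate)
```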

See ya on Wednesday!

September 1, 2013

My Plan

OK, I've heard that the biggest problem people have with writing and blogging is not finding a topic to write about; more precisely, they run out of things to write about because their topic is either too broad and they hit writer's block, or their topic is too narrow and they exhaust it. I've heard that the best solution to this is to find a topic that you are passionate about and to have a plan.

Well, since you are reading this, you can tell my subject matter. I'm obviously here to talk about software testing, but it's more than that. I'm going to do a three-part exploration of my subject matter. My regular blog pattern will be a Monday/Wednesday/Friday pattern of experience reports. On Mondays I'll lay out the new technique that I'm going to be exploring that week. On Wednesdays the blog will be shorter, as it will be a progress report on how it's going so far. Fridays will recap the week and explore what I liked about the new technique, what problems I had, and solutions if I found them.

If you'd like to join me on my journey, I'll try to announce the topic for the next week on the Saturday or Sunday so that you have time to do some reading. This week's topic will be Product Coverage Outlines.

Reading List:



August 30, 2013

Pressing post before reading

Yesterday's post had some shout-outs to two people who influenced me during CAST 2013, but I didn't provide any links to them. Oops, my bad.

Paul Holland can be found on Twitter as @PaulHolland_TWN and is a really cool guy to follow; he always has important things to say, and will tell you when you're wrong. He's not always very PC, but following him is worth it.

Aaron Hodder can be found on Twitter as @AWGHodder. I haven't followed him for very long, but his session on using mind maps as a test reporting tool was excellent. It's a great use of a visual representation of your testing coverage that gives true insight into your testing. I'll be trying to do this in the future and I'll share my experience here.

This is turning into a bit of a Follow Friday, so here are a few more of my influencers from the past week.

@aclairefication is a great tester and blogger who works as and writes about being a context-driven tester on agile teams.

@mattbarcomb is a process guru. While he's not a tester, he's well versed in process. When you're looking to figure out what's wrong with your process or why your process is the way it is, he's the guy to ask.

@PeteWalen introduced me to lean coffee. What can I say, it had an impact.

Who influences you?

August 28, 2013

CAST 2013

I'm sitting here in the airport starting to wind down from a week of conferring, discussing and learning. I took in a lot this week, and this blog will be my journey of how I integrate these learnings into my life both professionally and personally. From Product Coverage Outlines (PCOs, thanks Paul Holland) and mind maps as a test reporting tool (thanks Aaron) to how we communicate with our local communities of testers and others involved in the process of building software, there were many learnings. Some will be harder for me, some are tweaks to what I already do.

I plan to use these new skills; these are my highs and lows. Feedback is more than encouraged, because it is through the community that we as individuals grow.