October 8, 2013
October 6, 2013
Let's take a look at what my process looked like. Every day I reviewed my maps from earlier in the week to see if there was anything I needed to pull into today's map as an item I planned to do. So before my day even started I had a map with a center node that was the day's date, surrounded by any tasks that carried over from the previous days and the items I planned to do that day. If there was a task that carried over from the previous day, I would copy the node over with everything under it so that I had my notes from the earlier work to guide my continuing work. I found that my exploratory testing was greatly assisted by this exposure to my previous results. I was able to make links across sessions and generate new charters with better insight than I had when I used a notebook for this same process.
As the day progressed, the first-level nodes that I was working on would move through several states denoted by color: white for upcoming tasks, yellow for the currently active task, red/purple for tasks that needed to be revisited, and green for completed tasks. If that was all I had done, this would not be as great a tool as it ended up being. I also made notes on the tasks, and did my test planning under the testing tasks. Since I used an electronic MindMap tool, my test plans could be fluid and dynamic based on my results. Results from the tests were also recorded in the MindMap, as were all of the defects and other issues I noted. Again, these were color coded so that I could make notes about them without getting distracted by opening my bug reporting tool and spending time reporting right then; instead I made enough notes that I could come back later to do a RIMGEA analysis and write a better report.
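The color workflow above is essentially a small state machine. Here is a minimal Python sketch of the idea; the class and state names are my own and assume nothing about any particular MindMap tool:

```python
from enum import Enum

class TaskState(Enum):
    """States a first-level task node moves through during the day."""
    UPCOMING = "white"
    ACTIVE = "yellow"
    REVISIT = "red"    # or purple: needs to be revisited
    DONE = "green"

class TaskNode:
    def __init__(self, title):
        self.title = title
        self.state = TaskState.UPCOMING
        self.notes = []  # test planning notes, results, defect notes

    def start(self):
        self.state = TaskState.ACTIVE

    def finish(self, revisit=False):
        self.state = TaskState.REVISIT if revisit else TaskState.DONE

task = TaskNode("Exploratory session: login flow")
task.start()
task.notes.append("Defect: error text truncated at 80 chars")
task.finish(revisit=True)
print(task.state.value)  # red
```

The point isn't the code, it's that the colors carry just enough state to defer bug reporting without losing the thread.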
I found this technique very useful. My entire day was in one place, I could break pieces out to share and collaborate with others, and I missed less. I will continue to use MindMaps in this way, and will continue to look for new ways to use MindMaps in my work.
October 4, 2013
Tune in tomorrow for my MindMaps Wrap-up.
October 2, 2013
As I understand it, SFDIPOT is a heuristic to use while doing a software tour to ensure that you have a complete understanding of the application. This is sort of how I use it. I actually do several tours of the application/feature in question, all very quick and high level, while thinking about each of these. I don't do them in the order of the heuristic simply because I don't really think that way.
So what does it mean?
S: Structure, how is the application put together/wired up? These are things like: uses MySQL as a server, has a separate authentication server, uses memcache, uses Ajax. Things that can lead you down other vectors that you might not have thought of while doing a simple product tour. You might need help/information from the developers and/or documentation for this piece, so I recommend doing it near the end of the heuristic.
F: Function, what are the functions of the feature/application? What are the individual things that it does? This can be an (organized) listing of all of the actions that can be done using the application. These are smaller than the user stories; these are things like "Mark favorite" and "Join mailing list". Stories will come later and will invoke many of these functions.
D: Data, what are the data types involved in the application? Data types can lead to interesting test cases. Have a look at Elisabeth Hendrickson's cheat sheet for some great ideas on how to test your data types once you have identified them.
I: Interfaces, one of the newer letters in this heuristic (and thus harder to get information on). These are the interfaces through which the application interacts with the outside world. Does it have a file-based interface anywhere (import/export functions)? Does it store intermediate files to local storage anywhere? Is an API provided? Does one exist that isn't provided? And yes, the GUI is an interface: does it interact in the ways that a user will expect that kind of interface to interact?
P: Platforms, what platforms does this application run on? Is it a Java app? Does the version of Java matter? Is it a web app? Does the browser version or vendor change the performance or interactions?
O: Operations, what are the operations that the application is trying to accomplish? What is the purpose of this application? What mistakes can happen when trying to do this operation? I've been asked if these are your user stories and test cases, and the answer for me is kind of. User stories are definitely things that are thought of here, but not just the stories; also how those stories can go wrong and/or get interrupted. Test cases, however, are part of your test coverage. I'm using this heuristic to create a PCO which will in turn inform the creation of my test coverage.
T: Timings, another recent addition to the heuristic. Timings are the time-based things you will have to think about with this application. Does your application have to run at a specific time? What will happen if it runs on Feb 29th? Or runs over New Year's Eve? In 1999? What happens when two actions are done too quickly? Too slowly? A recent conversation also pointed out to me that these can be environmental. Where is the application? What is happening at the same time? Is it rush hour at that time at that location?
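Taken together, the seven letters make a reusable checklist. Here is a minimal Python sketch that collects them into a skeleton outline; the prompt wording is my own paraphrase, not official wording from the Heuristic Test Strategy Model:

```python
# SFDIPOT, each letter paired with a paraphrased prompt question
SFDIPOT = {
    "Structure":  "How is the application put together/wired up?",
    "Function":   "What are the individual things it does?",
    "Data":       "What data types are involved?",
    "Interfaces": "How does it interact with the outside world?",
    "Platforms":  "What does it run on, and do versions matter?",
    "Operations": "What is it trying to accomplish, and what can go wrong?",
    "Timings":    "What time-based behavior matters?",
}

def outline(feature):
    """Return a SFDIPOT-shaped skeleton to seed a product coverage outline."""
    lines = [feature]
    for area, prompt in SFDIPOT.items():
        lines.append(f"  {area}: {prompt}")
    return "\n".join(lines)

print(outline("Password reset"))
```

Something this small is enough to seed a mind map with one first-level node per letter.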
As you can see, if you run through all of these, you will have a very extensive list of things to think about while designing your test coverage. As I said in the beginning, this heuristic, while seemingly simple, does take some practice to use, and that is my goal for the week: to use this heuristic and get better with it. My plan is to use this in my everyday testing, for every feature and bug report from the field that is passed on to me. I realize that this might not align with its best uses, but I'm game to try; maybe I'll figure out something new about the heuristic, or about myself.
I'll let you know how it's going on Wednesday!
If I have anything wrong or misinterpreted in this, please let me know in the comments below.
So what have I done with mind maps so far? Two of my common tasks have been moved over to mind maps: test planning for newly completed features, and my daily activity log. I have in fact combined these two into a single map, one for each day, where testing won't span more than a day. Where testing will be a multi-day or multi-tester effort, I've split it out into a separate map. The daily activity map has helped me track my day and make notes on the activities that I do. I've marked finished tasks, blocked tasks, bugs found, future investigations and test results with color. The color enables me to look quickly and see what I need to remember. I used to do this with my notebook, but it was hard to find the tasks and future investigations later when I went looking.
The tool that I've been using has a mode where it will auto color the nodes for you, based on a color change at the leaf level, to indicate that you have completed a task or been blocked if you are using your MindMap as a plan/to-do list. I don't usually write detailed plans; I write down a list of areas I want to test, and some notes under each of those about the why or how of that testing area. This feature of the MindMap software that I'm using allows me to quickly look back and see what areas I have left to cover in upcoming charters, and update based on the charters that I've completed.
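The auto-coloring behavior can be thought of as a roll-up rule: a parent's status is derived from its leaves. The Python sketch below is my guess at that kind of logic, not the tool's actual implementation:

```python
def roll_up(node):
    """node = (status, children). Leaves carry their own status;
    a parent derives its status from its children: blocked if any
    child is blocked, done only if all children are done."""
    status, children = node
    if not children:
        return status
    child_statuses = [roll_up(c) for c in children]
    if any(s == "blocked" for s in child_statuses):
        return "blocked"
    if all(s == "done" for s in child_statuses):
        return "done"
    return "in-progress"

plan = ("", [
    ("done", []),
    ("", [("done", []), ("todo", [])]),
])
print(roll_up(plan))  # in-progress
```

One glance at the root color then tells you whether the day's plan is finished, stuck, or still moving.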
All in all, MindMaps have been a good addition to my work. I haven't had to share the information in them with others yet, but I expect that to be helpful also.
September 30, 2013
- My activity log for the day, complete with bugs and investigations, will be a mind map.
- Testing Plans
- Exploratory Charters and the notes/results from them
- Feature/Bug/Product outlines will also be done up this way.
September 27, 2013
I prefer to use self-validating data as often as I can, and thus random data isn't always useful. When it was useful, I used it, but ran into a few blocks. The service that I was using didn't provide all the types that I wanted (valid US phone numbers, multiple formats), or didn't provide enough variation to suit me for the data types it did have (like names, where it only used common English names). I was hoping that this tool would help drive me into some of the corners of the data set, but all it really did was keep me in the center of the path with different values that didn't diverge far from the norm.
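When a service falls short like this, a small generator of your own can push the data closer to the corners. Here's a hypothetical Python sketch of the kind of thing I wanted, US phone numbers in several formats; the format list and digit rules are simplified assumptions, not a complete NANP implementation:

```python
import random

# A handful of formats real users actually type
FORMATS = [
    "({a}) {b}-{c}",
    "{a}-{b}-{c}",
    "{a}.{b}.{c}",
    "+1 {a} {b} {c}",
    "{a}{b}{c}",
]

def us_phone(rng=random):
    """Generate a plausible US phone number in a random format.
    Area code and exchange start with 2-9 (a simplified NANP rule)."""
    a = str(rng.randint(2, 9)) + f"{rng.randint(0, 99):02d}"
    b = str(rng.randint(2, 9)) + f"{rng.randint(0, 99):02d}"
    c = f"{rng.randint(0, 9999):04d}"
    return rng.choice(FORMATS).format(a=a, b=b, c=c)

for _ in range(3):
    print(us_phone())
```

Extending FORMATS with deliberately odd separators and spacing is an easy way to stop hugging the center of the path.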
September 24, 2013
September 23, 2013
Let's look at these two sides of how I used RIMGEA and the challenges that caused me to shortcut using it fully.
RIMGEA as a bug reporting Heuristic:
This is the intended use for this heuristic, and it works well here. If you are rushed for time, or have a large number of bugs to report, it's unlikely that you will use all of it. For some bugs I didn't get past the R(eplicate). Once I got my replication steps, I could write a convincing bug report. However, this past week I tried to get further with every bug. My reports were definitely better. Ruling out all the possible variables for the I(solate) step was hard, and often I overlooked some, but looking for them certainly helped my testing in other areas. I think I'll do this exercise every few weeks until I can consistently get into the last few letters of the heuristic.
RIMGEA as a bug verification Heuristic:
This was a bit of an off-label use, but I thought that if the bug hadn't had this done to it when it was written, the fix might not cover all of the incarnations of the bug either. It worked for me. Often, replicating the bug in the pre-patched system showed that the bug had been fixed earlier and the new patch wasn't actually needed, or that there were other paths into the bug that had not been examined and fixed.
All in all, this was a productive exercise. I'll do this one again, and share it with my team.
September 18, 2013
One change that I didn't expect was the change in how I've been verifying bug fixes. Many of these bugs come in from external sources or non-testers. Many get to the developers before a tester has had a look at them. Thus when they do get to me, they rarely have replication steps, let alone any of the other items outlined by RIMGEA. Filling in a very quick RIMGEA has increased the depth at which I verify these bug fixes to almost a full exploratory testing session, making the bug fix verifications more thorough. It's caught a minor issue that I might not have caught otherwise.
So far so good with this heuristic. More on Friday!
September 16, 2013
This week I'm going to concentrate on writing better bug reports by using the RIMGEA heuristic. This heuristic, developed by Cem Kaner, gives the user a series of steps to go through to ensure that a bug report has enough information that others can correctly evaluate and prioritize the bug you are reporting.
R, Replicate: Ideally all bugs that we report are replicable. We might not understand all the variables involved in the bug yet, but in order to move on in this heuristic the bug needs to have a series of steps that will replicate it. If it doesn't, we need to spend some time coming up with some steps. If these steps only sometimes replicate the bug, try to refine them so that they consistently replicate the bug, or at least describe how often these steps replicate it. Sometimes how often a bug can be replicated using the same steps is an important piece of information.
I, Isolate: Any given set of replication steps will have any number of things that just happen to be that way, but the bug doesn't need them. Identify these elements so that we know what is really involved in the bug. This step will help with the next two.
M, Maximize: This is to find out just how bad this could be if everything went exactly wrong. The version of the bug that you have discovered or have been asked to improve the report for is not likely the worst case version. Since you have isolated the bug you should be able to determine how to make the effect greater.
G, Generalize: This goes part and parcel with Maximize and Isolate. Since we know what the isolation is and how to maximize it, the other side of this coin is: what is the easiest version of the steps, or what is the most common path a user could follow to trigger this bug?
E, Externalize: This one is important, not to the bug, but to your ability to inform others of the effect of the bug. To externalize, you describe the impact of the bug on the users, their data, and their ability to complete the task that they are trying to complete with this application.
A, And blandize: Ok, they stretched this one to make it an A, but it is an important aspect of any bug report: finding a way to report your bug without making it personal. They could have used O for Objective, which might have made it clearer to the users.
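The six steps map naturally onto a report skeleton. Here's a Python sketch of one way to encode them; the section hints are my own paraphrase, not a template Kaner prescribes:

```python
# RIMGEA steps paired with a paraphrased hint for each section
RIMGEA_SECTIONS = [
    ("Replicate", "Steps that reproduce the bug, and how often they work"),
    ("Isolate", "Variables ruled out; what is actually required"),
    ("Maximize", "Worst-case version of the failure"),
    ("Generalize", "Simplest / most common path that triggers it"),
    ("Externalize", "Impact on users, their data, and their task"),
    ("And blandize", "Keep the tone factual and impersonal"),
]

def report_skeleton(title):
    """Return a fill-in-the-blanks bug report following RIMGEA."""
    parts = [f"# {title}"]
    for name, hint in RIMGEA_SECTIONS:
        parts.append(f"## {name}\n<!-- {hint} -->")
    return "\n".join(parts)

print(report_skeleton("Error text truncated at 80 chars"))
```

Even when rushed, starting from a skeleton like this makes it obvious which letters I skipped.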
Interestingly, I filed a bug report for work while writing this blog post. My bug reports will have to change a lot this week. It's not that my bug reports are incomplete; I feel that I write them to the appropriate level of detail for whoever will need to read them. However, I didn't do some of these steps. It should be interesting.
September 13, 2013
Did it make me a superstar tester? No, but it did have some excellent benefits. It reduced my testing time by giving me better direction when I set up my testing plans. It gave me better insight into bug investigations. It gave me new ideas of areas to test to add to my already too long list of areas to investigate. I wasn't really looking for a single change that would make me a superstar; I was looking to improve and learn. That part of my mission was a success.
Did it show me every bug? No, but I was able to find some sooner than I would have otherwise. I don't think there is any tool or technique out there that will find every bug; the search space is impossibly large. So a better metric might be: did it help me find more bugs, or find the bugs faster? In my experience from this past week, yes it did.
Was my team amazed with my new testing plans? Not really. For me, it didn't change much. I can see it creating better testing plans for some, but in my environment, working in an agile development team, I don't often share my testing plans. My testing plans are usually something I write for myself so that I don't forget anything. Being the single tester on a team has led to some bad habits, but I'll deal with those another week.
In all, it's been a good week. I enjoyed this emphasis on SFDIPOT, and the extra thought it brought to my testing activities. I didn't really use it as much as I meant to, but I'll keep using it and teaching it to the team of testers where I work, and I'll keep you updated if I learn anything new about it in that journey.
September 11, 2013
Understanding a technique takes time, and often requires trying to explain it to others. I think I learned the most about SFDIPOT from writing Monday's blog post. Michael Bolton and James Bach were both generous with their time and knowledge; they reviewed my post before it went live and discussed with me some of the places where I wasn't truly relaying what they meant. There are still a few places in my post that are not completely what they mean, but they are in areas where I'm still not sure what they mean.
There's never too small an item to use this technique on if it is stumping you. Yesterday I had a bug report come in from the field. The replication steps were simple, except they didn't expose the bug anywhere but at the customer site. After spending some time trying to figure out what the difference was, I pulled out SFDIPOT and took the time to apply a structured approach to my thinking about the investigation. Very quickly I identified the missing variable and was able to get refined replication steps that reliably reproduced the bug.
The last thing this exercise has shown me is that the structured approach helps me even when I'm working with an old existing feature that I've tested many times. Today I was able to quickly identify a place where testability could be improved while I was doing a code review and using SFDIPOT to guide my thoughts. I haven't even thought of the test coverage yet, but I already know it will be easier to test because I was able to fully think about the feature.
It's going well so far; I've even had some developers asking about it! I'll be back on Friday with the round-up!
September 7, 2013
September 6, 2013
However, that doesn't mean I don't have something to talk about today! Today, let's talk Lean Coffee. I first experienced Lean Coffee at CAST 2013. It was a great experience there, and I've been looking for ways to bring it back to Saskatoon ever since. Yesterday I did that in a small way.
The test team that I lead has a weekly meeting where we get together to discuss various testing-related topics that have come up in the past week. This has taken a few different forms, from learning sessions, where someone teaches about a specific topic, to Highs and Lows sessions, where we talk about the best and worst of the past week. While these meetings were achieving my goal of getting the team together and helping solve their problems, I didn't feel the team was engaged. One member has been consistently asking to skip, and if I was away the meeting didn't happen. I might have a solution, for my team at least.
Yesterday, instead of starting the meeting as I usually would by polling the room for impediments from the past week, I introduced Lean Coffee. When I was done and they set to it, every person had put forward at least two topic ideas. While we didn't have a lot of time, the dot voting did direct us to the important items quickly. The few topic cards that were left when we finished were filed as: 1) let's do this as a one-on-one later, 2) this can wait until next week, or 3) that other topic card covered what I wanted from this card. It was a great experience, and I felt my team was more engaged in the meeting than it has been in months.
If you haven't been to a Lean Coffee, you should try it. It will change the way you think about small meetings. It works really well for short meetings of people with a related interest, spurring on great conversations that might not happen otherwise because you don't know that others are thinking the same things. I'm hoping to spread Lean Coffee further at my workplace and even around Saskatoon.
For more reading about lean coffee see here and here
September 4, 2013
So far the exercise has led to some interesting insights into the product that I work with, and it certainly will inform my testing and test planning. I have noted, however, that as I usually work mostly on features and feature integration, a lot of this felt like extra effort. I'm sure it's not, but time will tell.
Additionally, as I don't often use the SFDIPOT heuristic, it felt clumsy. I can see that with practice it will become a very useful tool to help my thought process, but I spent too much time trying to figure out what went under each of the headings.
Come on by again on Friday for my summary of using PCOs for an entire week.
September 2, 2013
A Product Coverage Outline (PCO) is a document (it doesn't matter what kind) that you develop to assist you in thinking about the product that you will test. It is important to note at this point that this document is about the product, not the testing, and it is not the documentation. Paul even suggested that you not read the documentation until late in the process of creating this document so that it doesn't limit your thinking. There are many ways of creating this document and all of them are ok, as long as it is something you and/or your team developed to help illuminate the product. I've seen great mind maps that expressed a product coverage outline, Excel documents and Word docs. Some of these are easier to read and work with than others, but all did the job. This is because the job is not just the document; it's the process of creating the document that stirs our thought process and helps us ensure that we have thought about as much as possible. That way, when we go to create our testing plan, we can consciously choose what is and isn't being covered, instead of some areas not being covered because we didn't think of them.
That's a lot of talking about PCOs without saying much. What is the goal? The goal is to create a document that you could give to any tester on your team to give them a concise overview of the product that will help them decide how and what to test in the time that they have. Combined with a heuristic approach to exploring the product, you should be able to quickly create a document that achieves these goals.
For this week I'll be using the SFDIPOT heuristic to help me ensure that my PCOs are complete. If you've never heard of SFDIPOT, check this out. My PCO will be done with a Mind Map and have first-level nodes for each of the terms of SFDIPOT. That isn't required for a PCO, but I think it will help.
I spent maybe 15 minutes doing this PCO; from it I should be able to come up with a reasonable testing plan.
See ya on Wednesday!
September 1, 2013
Well, since you are reading this, you can tell my subject matter. I'm obviously here to talk about software testing, but it's more than that. I'm going to do a three-part exploration of my subject matter. My regular blog pattern will be a Monday/Wednesday/Friday pattern of experience reports. On Mondays I'll lay out the new technique that I'm going to be exploring that week. On Wednesdays the blog will be shorter, as it will be a progress report on how it's going so far. Fridays will recap the week and explore what I liked about the new technique, what problems I had, and solutions if I found them.
If you'd like to join me on my journey, I'll try to announce the topic for the next week on the Saturday or Sunday so that you have time to do some reading. This week's topic will be Product Coverage Outlines.
- Heuristic Test Strategy Model (pdf) - I'll do an entire week on this later
- Got you covered (pdf)
- Cover or Discover (pdf)
- A Map by any other name (pdf)
August 30, 2013
Paul Holland can be found on Twitter @PaulHolland_TWN and is a really cool guy to follow; he always has important things to say, and will tell you when you're wrong. He's not always very PC, but following him is worth it.
Aaron Hodder can be found on Twitter @AWGHodder. I haven't followed him for very long, but his session on using mind maps as a test reporting tool was great. It's a great use of a visual representation of the coverage of your testing that gives true insight into your testing. I'll be trying to do this in the future and I'll share my experience here.
This is turning into a bit of a follow friday, so here's a few more of my influencers from the past week.
@aclairefication is a great tester and blogger who works and writes about being a context driven tester in agile teams.
@mattbarcomb is a process guru. While he's not a tester, he's well versed in process. When you're looking to figure out what's wrong with your process or why your process is the way it is, he's the guy to ask.
@PeteWalen introduced me to lean coffee. What can I say, it had an impact.
Who influences you?
August 28, 2013
I'm sitting here in the airport starting to wind down from a week of conferring, discussing and learning. I took in a lot this week, and this blog will be my journey of how I integrate these learnings into my life both professionally and personally. From Product Coverage Outlines (PCOs, thanks Paul Holland) and mind maps as a test reporting tool (thanks Aaron) to how we communicate with our local communities of testers and others involved in the process of software, there were many learnings. Some will be harder for me; some are tweaks to what I already do.
I plan to use these new skills; these are my highs and lows. Feedback is more than encouraged, because it is through the community that we as individuals grow.