When I saw this image on the internet, I immediately related it to software testing and the times when testers fall for these biases. In this article, let me try to relate a few of the biases to software testing.
“20 Cognitive Biases That Screw Up Your Decisions”
a. Blind-spot bias: Failing to recognise your own cognitive biases.
Are you biased towards a particular testing technique or test tool even if it is slowing you down in your mission? Think about the test data you prepare. Do you have a pattern that you unknowingly follow? Do you have a preference for a particular testing approach even though it is counterproductive? I slowly realised that I have a sweet spot for mind maps and was spending a lot of time on them before realising that an alternative to mind maps would save me a lot of time.
b. Clustering illusion: Tendency to see patterns in random events.
This bias is an interesting one. We testers are discouraged from ignoring random events. We dig deep into our bug-investigation skills to find the pattern in seemingly random events. One of the must-reads on investigating intermittent bugs is here: http://www.satisfice.com/blog/archives/34
So, what do you do? Do you seek patterns in random events, or do you ignore them? A Catch-22 situation with respect to this bias? What are your thoughts?
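To see why random results can feel patterned, here is a small illustrative Python sketch (my own, not from the article) that counts the longest streak of identical outcomes in a simulated run of intermittent pass/fail results. Even when each result is an independent coin flip, long streaks of consecutive failures appear routinely, which is exactly what the clustering illusion tempts us to read as a pattern:

```python
import random

def longest_streak(outcomes):
    """Length of the longest run of identical consecutive outcomes."""
    best = current = 1
    for prev, cur in zip(outcomes, outcomes[1:]):
        current = current + 1 if cur == prev else 1
        best = max(best, current)
    return best

random.seed(42)
# Simulate 100 "intermittent" test results from a fair coin flip.
results = [random.choice(["PASS", "FAIL"]) for _ in range(100)]
print(longest_streak(results))
```

Running this a few times with different seeds shows streaks of five or more identical results are common in 100 purely random outcomes, so a streak alone is weak evidence of a real pattern.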
c. Outcome bias: Judging a decision based on the outcome.
If testers were judged on the number of defects they find and not on how they arrived at those defects, would that fall under outcome bias? Think about it. Many times, we judge a decision based on its outcome: someone finds a security bug, and we announce that the application needs more security testing or that the tester is a good security tester. Do we even ask about the approach that led to the discovery of the bug? Was it accidental, or was there a plan whose end result was the bug? How many times have we taken seemingly big decisions based on a statement or an event without understanding the background?
d. Pro-innovation bias: A proponent of innovation overvalues its usefulness and undervalues its limitations.
What is the first thing that came to your mind when you read the line above? For me, it is the automation hype that companies and teams create. Automation, when understood and delivered well, has given good results. At the same time, we have all experienced the numerous times when someone sold us on the usefulness and we were sucked into believing them. Only when we started working on the project did we realise that the limitations outweighed the usefulness.
e. Selective perception: Allowing our expectations to influence how we perceive the world.
We work with certain developers, and over time we start expecting only certain kinds of bugs from them. Even when a bug is right in front of us, we get fooled and ignore it because, based on our experience (and expectations), we never expected such a bug from those developers. We would in fact not even consider such bugs when preparing our test strategy!
We software testers are required to be aware of biases and to highlight them in the project. People trust us to give an unbiased opinion. It would be counterproductive if we ourselves were as biased as the examples highlighted above. What other biases have you caught yourself falling into?
While you think, here is a list of some more biases: