Friday, January 15, 2016

A Fable: Thinking Out of the Box, part 2

[This is part 2 of my previous blog post, “A Fable: Thinking Out of the Box”; please read part 1 first.]

 

A week has passed since your “out of the box” experience and you haven’t stopped thinking about it.  Every day you have been thinking, pondering, quietly reflecting on the very idea that there is professional wisdom “out there” that you not only don’t understand, but that seems to be in direct contradiction to everything you’ve ever known about QA.  Determined, you decide to explore this a little more.

 

Once again, you rise up above your box, hovering over the landscape.  Floating back to that strange-idea neighborhood, you hope that this time you might be able to meet an owner of one of those boxes.

 

Good news! As you arrive, you see a scholarly-looking bearded man next to the oldest, most worn-out box.

 

“Greetings, Tester!” he warmly welcomes you as he looks up from his smartphone.  “I was just updating my Twitter feed.  You know, to stay relevant in this testing industry you need to keep on top of the latest innovations and controversies.  There are amazing conversations about software testing happening under the #testing hashtag every day.  For instance, this #stop29119 petition I was just reading about.  But enough about me.  I see you came from the other side of town.  How can I help you?”

 

“Well,” you begin, “you see, I was just wondering how it could be that you and your box look so wise and experienced, yet I saw that your box contains some strange-sounding wisdom.  So I wanted to understand what that’s all about.”

 

“Very good!  Please give me an example so I can help you”, he says.

 

“OK, well how about when you say, ‘Regression testing is a waste of time’…how can you say such a thing?  We need to run our regression suite each time the product is updated to make sure that no new defects have been introduced!”

 

The bearded man smiles.  “A lot of people have trouble with that one”, he replies.  After pausing for a few moments, stroking his beard in deep thought, he continues.  “Think of it this way.  If you’re concerned that there may be a problem with the updated build, then of course you should test it.  But that’s not what I mean.  I’m talking about re-running the exact same tests in the exact same way each time.  What new information will you learn about the product if you do that?  Maybe a little, but whatever you learn is hardly worth the effort.”

 

“Yes, but don’t you have to ensure there are no defects?”, you ask.

 

“Look.  Are you familiar with the minefield analogy of testing?  Think about walking through an open field that has explosive mines planted all around it in random spots.  You see the footprints of the person ahead of you, who has already gone through the minefield.  If you want to avoid stepping on the mines and blowing yourself up, you’re best off walking exactly the same path, footprint for footprint, as the person who went before you, without any variation whatsoever.  You would be very careful to do that exactly, so that you make it through without anything exploding around you.  The same thing is true when bug hunting.  If you want to AVOID finding any new bugs in the product, then you should follow exactly, without variation, the same tests that have been run before.  But when testing, we DO want to find new information.  So when we run a new test we will get that new information, but when we run an old test it’s more likely that we won’t.

 

“Pay attention,” he continues, “because this is the key point.  I’m not saying that all regression testing is inefficient.  If you make the same tests a little harsher each time, you will learn something from your experiment.  Remember, testing is scientific experimentation, and if you want to learn how to run great tests, you should learn how to design great experiments.  Just like a scientist, because that’s what expert testers are...scientists.  So, for example, if you use more challenging test data than last time, or try different features in different combinations or sequences each time, you will learn something and you may find new information to report to your stakeholders.  Isn’t that what you’d like to do?  Does this help you understand?”
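(Stepping outside the fable for a moment: the bearded man’s advice translates directly into practice.  Here is a minimal sketch in Python of a regression check that varies its data on every run, logging a random seed so any failure stays reproducible.  The add() function and the value ranges are hypothetical stand-ins for real product code.)

```python
import random
import time

def add(a, b):
    # Hypothetical function under test; stands in for the real product code.
    return a + b

def regression_check(seed=None):
    """Re-run the same regression idea, but never the exact same test twice."""
    if seed is None:
        seed = int(time.time())
    rng = random.Random(seed)
    print(f"regression run, seed={seed}")  # record the seed for reproducibility
    for _ in range(100):
        # Vary the data each run: mix boundary-flavored values with random ones.
        a = rng.choice([0, 1, -1, 10**9, rng.randint(-10**6, 10**6)])
        b = rng.randint(-10**6, 10**6)
        expected = a + b  # independent oracle for this toy example
        actual = add(a, b)
        assert actual == expected, (
            f"add({a}, {b}) gave {actual}, expected {expected} (seed={seed})"
        )

if __name__ == "__main__":
    regression_check()
```

Each run walks a slightly different path through the minefield, so there is always a chance of learning something new, while the logged seed lets you retrace any step that exploded.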

 

You’re overwhelmed.  Somehow this makes sense, but you’re still confused.  This is a completely different way of thinking about testing than you’re used to.  It’s a different perspective…no...a CONTRARY perspective to what you’ve known up until now.  Mixed emotions are flowing through you: confusion, excitement, fear, skepticism, anticipation.  “I guess this makes sense,” you finally say.  “But I need some time to think this through.”

 

“I’m so glad you said that!” the bearded man exclaims with a huge smile on his face. “Never ever believe anything anyone tells you without thinking it through for yourself! That’s one of the first rules of being an expert.  You are well on your way!”

 

“Thank you.  Hey, it’s late, and I need to get back.  Can I come back here sometime?  I have a feeling that there’s a lot more I can learn from you.”

 

“Are you kidding?  Of course you can…I’d be thrilled if you came by again.”

 

“Thanks again, take care.”

 

You head back home.  You’re looking forward to learning more about this interesting new outlook on testing.  You get home to your box and, thoughtfully, open a Twitter account.

 

To be continued….

Monday, January 11, 2016

A Fable: Thinking Out of the Box, part 1

Imagine that you have a big box that contains all of your professional wisdom.  Inside is all your knowledge and understanding, your habits and customs, behaviors and impulses about your QA work.  Inside you’ll find words: “requirements”, “validation”, “functionality”.  There are phrases: “best practices”, “compare actual results with expected results”.  And there are sayings you hold to be universal truths: “regression tests should be automated”, “bugs found earlier in the lifecycle are cheaper to fix”, “tests need to be traceable to requirements.”

 

Now, you transcend your box.  You are hovering in the air, looking down at your box from above.  You look around.  Right next to your box you see other people’s boxes, and they look exactly like yours and have similar contents.  You feel happy and secure, like you’re in the right place at the right time, proud to be a member of this club.  But you keep looking around, a little further away, and you see a few other boxes that don’t quite look like yours.  You float over there to take a closer look.  Some of these boxes are much older and worn out, as if they have been filled and emptied many times over a long period of time, longer than your box has even existed.  Curious, you peek inside.

 

You can’t believe what you see!  Shock! Blasphemy!

 

“There is no such thing as best practices”

 

Huh?

 

“QA is not about finding defects; testing is about searching for information”

 

What!?

 

“Regression testing is usually a waste of time”

 

What on earth is this?!  Whose box am I in?  And how can it be that this is an OLD box, one with more years of experience than my own???

 

You’re scared.  You rush back to the safety of your own box, your own world.  You feel better in your own surroundings, but can’t stop thinking about that other box.  After all, everything you’ve ever known exists right here.  Could it be, possibly, that there exists professional wisdom out there that is different from your own??

 

To be continued…..

 

Wednesday, January 6, 2016

State of Testing Survey 2016

My friend Joel Montvelisky is conducting a survey for software testers.  According to his blog, the survey seeks to identify the existing characteristics, practices, and challenges facing the testing community, in hopes of shedding light and provoking a fruitful discussion about improvement.  The survey goes live tomorrow.  Please check it out!

Wednesday, December 30, 2015

The Inefficiency of Gifts and Software Metrics

The Wall Street Journal (12/24/15, “If You’ve Bought No Presents Yet, These Wise Men Applaud You”) reports that from a purely economic point of view, holiday gift giving is a wasteful practice because it reallocates resources inefficiently.  On average, gift receivers would have been willing to spend on themselves only about 70% of what the gift giver paid for the present, leaving an inefficiency ratio near 30%.  It’s more cost-effective to give cash or gift cards, they argue, because the value of a cash gift is undeniable: to the receiver, the value of x dollars given is exactly x. (Usually.)

 

Yet, the WSJ continues, not a single economist of the 54 they interviewed for the article heeded their own advice.  Every one of them both received and purchased gifts for their loved ones this holiday season.  It seems that despite the hard data weighing against presents, the warm feelings of the holiday season take over.

 

I see this a little differently.  Perhaps the 30% economic “inefficiency” can be considered an emotional surcharge built into the cost of the physical present.  How much more a person is willing to pay for a present, above what the receiver would have paid for it themselves, isn’t necessarily a real inefficiency that requires correction; instead, it’s a measurable attribute of the giver’s current emotional state.  Fluffy emotional stuff like love and guilt sometimes merges with hard data like economic efficiency ratios.

 

Which brings us to the tricky world of software metrics.

 

The traditional approach to measuring performance is heavily dependent on quantitative, numbers-and-formula-based assessments.  Questions like “How many test cases did you write today?  How many bugs did you report?  How many tests did you run?  How long was the environment down?” typify the hard-data approach to software metrics.  However, there is a hidden inefficiency here, too.

 

Students of software testing will recognize that quantitative software metrics like the questions above are almost always subject to measurement dysfunction: the idea that when you measure particular aspects with the intention of improving them, they will improve, but only to the detriment of other important aspects that you may not be measuring.  Adding context-driven qualitative measures to a traditional metrics program may help.  Instead of depending only on the numbers, a qualitative system looks for an assessment based on a fuller story.  Having a fuller conversation with the test team may provide a deeper understanding of the project’s progress and danger points.

 

Like gift giving, there is an emotional aspect to software metrics as well.  Pride, fear, anger, despair, overconfidence; the list goes on.  These aren’t inefficiencies; they are an expected and natural part of human endeavor.

 

 

Tuesday, December 29, 2015

Result Variables

Over lunch with a few colleagues a few weeks ago, someone mentioned that he had recently interviewed a QA candidate who gave the best answers to his questions of anyone he had ever interviewed.  Why?  Because, he said, she was “results-driven” in her method of testing.  It turns out that what this colleague was describing is a particularly useful, fundamental, yet woefully underutilized approach to domain testing.  (If you don’t remember what domain testing is, see my previous post.)

 

Simply put, the idea is to make the result variable a primary objective of the test, not just the input variables.  Say you’re testing a program that adds two numbers.  Typically, you would want to know something about the numbers you input, right?  What are their valid ranges?  What should happen if you entered max+1 or min-1?  Is zero OK?  What about negative numbers?  Or decimals?  And if decimals are allowed, then to what precision?

 

These are all legitimate questions if your primary objective is to test the inputting of the variables, but they don’t reveal much about the result of the calculation.  Even though checking for a buffer overflow on an input filter may be important, it may be more interesting to construct your input data to force values of interest onto the output variable.  In the Domain Testing Workbook, Cem Kaner calls this testing for the consequence: “If a program does something or cannot do something as a result of data that you entered, that’s a consequence of the entry of the data.”

 

Consider the same set of questions above, except focused instead on the result variable.  If you’re concerned about boundary testing, and the maximum allowable value for the result variable is 100, for example, you have many interesting ways to try to cross that boundary.  (For example, enter 100 + 1; 0 + 101; -1 + 102; 99.99 + 0.02.)  Testing with the primary focus on the result variable can generate a much richer set of tests to work with, and may lead you to discover more interesting information about your program than focusing on input boundaries alone.  This subtle distinction in the way you think about your tests could reap large benefits in the long run.
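To make the work-backwards idea concrete, here is a minimal sketch in Python.  It assumes a hypothetical spec in which the result may not exceed 100; the RESULT_MAX constant and the candidate addends are illustrative choices of mine, not anything from the book.

```python
# Hypothetical spec: the sum (the result variable) must not exceed 100.
RESULT_MAX = 100.0

def pairs_for_result(target, addends=(0, 1, -1, 0.02, 99.99, 101)):
    """Work backwards: build (a, b) input pairs whose sum equals the target result."""
    return [(a, round(target - a, 2)) for a in addends]

def result_boundary_tests():
    tests = []
    # Probe just below, at, and just above the result-variable boundary.
    for target in (RESULT_MAX - 0.01, RESULT_MAX, RESULT_MAX + 0.01, RESULT_MAX + 1):
        tests.extend(pairs_for_result(target))
    return tests

if __name__ == "__main__":
    for a, b in result_boundary_tests():
        print(f"try input ({a}, {b}) -> expected result {round(a + b, 2)}")
```

Each generated pair is the same style of test as the hand-picked examples above (100 + 1, 99.99 + 0.02), produced systematically by choosing the result first and deriving the inputs from it.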

Monday, December 28, 2015

Ethics of Software Testing

As best I can remember, every company I have ever worked for had an official Code of Conduct.  In these codes, workers are reminded that integrity and ethical conduct are expected in the course of performing their jobs and acting in the best interest of their clients.

 

There are numerous ways in which this applies to us as testers.  Put succinctly: Don’t Do Fake Testing.  Be honest, don’t plagiarize, act congruently.

 

James Bach has a good list of ethical principles; among them are:

 

·         Report everything that I believe, in good faith, to be a threat to the product or to the user thereof, according to my understanding of the best interests of my client and the public good.

·         Apply test methods that are appropriate to the level of risk in the product and the context of the project.

·         Alert my clients to anything that may impair my ability to test.

·         Recuse myself from any project if I feel unable to give reasonable and workman-like effort.

·         Make my clients aware, with alacrity, of any mistake I have made which may require expensive or disruptive correction.

·         Do not deceive my clients about my work, nor help others to perpetrate deception.

 

It’s not just about managing conflicts of interest and reporting outside business interests (although that’s important too).  For testers, being ethical means we study the relationship between our products and the world in which they run.  We don’t cheat, and we don’t take credit for work we haven’t done.  We allow ourselves to grow by working through our assignments for ourselves.  We put our clients first by reporting findings that we suspect may be problems for our users.  Expert testers are, by definition, ethical.