Saturday, July 14, 2012

Testing Definitions for testing terms

Here are some testing definitions for testing terms. Whenever you are asked these in interviews, feel free to use them (but don't blame me if you don't land the job). I have come up with these definitions myself, based on the testing that I have done over time.


Testing Definition ---- Here's a definition for the term "testing definition" itself: any testing terminology that is used as the basis for a hiring decision in an interview is defined as a "testing definition" :).
Author's note:- Decided to have a definition for "testing definition" itself :).

Edge Case or Corner Case --- Cases that the tester comes up with; when everyone else in the project realizes that they have missed those test cases, they tend to call them "corner cases". They also give this term to use cases that would be executed by only a very small set of users. That also helps their management understand why they missed these test cases.

Smoke Testing --- Imagine that you are testing an Android app on an Android phone; now charge the phone for some time. If you see smoke coming from the phone while you are testing, then the tests that you executed to cause the smoke to come out are defined as smoke testing.

Sanity Testing --- Imagine running a set of test cases again and again; there comes a point where you think you would lose your sanity by the end of a testing session. Such testing, which makes you question your sanity, is defined as sanity testing.

BVT --- Most popularly known as Build Verification Testing. (Now, don't start questioning why it's called "Build Verification Testing" instead of "Build Verification". The experts have decided that it's BVT and not BV.....). The term "BVT" itself has many flavors --- it can be expanded as "Build Validation Testing", "Bugs Validation Testing", "Bugs Verification Testing", "Blind Verification Testing" (where you try to explore the product blindly), "Bug Validation Testing" (wherein you say that you are trying to validate bugs from a previous build), etc. etc...... When someone asks you the meaning of "BVT", try asking them what it expands into.

In case the interviewer is hell-bent on stating that BVT stands only for Build Verification Testing and nothing else, try to pick out 1 of the above definitions.

BAT --- Now, this is defined as the equipment that most people want to batter me with; but in most of today's companies, it is defined as "Build Acceptance Testing". It means that you are accepting a build in its current form, along with its list of bugs, so that the build can be ready to be tested. You can also define it as "Bug Acceptance Testing", wherein you accept the bug since you happen to belong to the department of "bug-savers".

Stress Testing --- This is the kind of test that causes you to get "pretty stressed" when asked to speak about it; more often than not, people ask you about "stress" testing and "load" testing together. "Stress testing is defined as the mental state of your mind when someone asks you about stress testing"; "load testing is defined as any definition that cleverly defines load testing as a form of testing that has nothing to do with stress testing".

Exploratory Testing --- This is very important to remember; "exploratory testing" is defined as "any kind of testing that has the words 'uncharted', 'unexplored', 'no requirements', 'no test cases' embedded into the definition". Now, that's the most popular definition. What the world seems to have forgotten is the fact that Cem Kaner has actually defined exploratory testing.

Rapid Exploratory Testing --- "Rapid exploratory testing" is defined as "any kind of testing that has the words 'speed', 'uncharted', 'unexplored', 'no requirements', 'no test cases' embedded into the definition itself". Again, that kind of definition seems most popular these days.

Usability Testing --- This is simple. When you try to define "usability testing", make use of words like "end-user", "customer", "last user", etc. being involved in some kind of testing.

Now, most of the above, as you know, are fake definitions. So what's a true definition? A true definition is one that states the intent of a type of testing and clearly clarifies its objective. A false definition is one that tries to differentiate the "defined type of testing" from another and tries to call out how it is an advantage. In my humble opinion, any time you ask for the difference between 2 forms of testing, you plant the seed for such "fake definitions". Every definition is true as long as it clarifies the intent. As a great person said, any verb prefixed to the word "testing" results in some form of testing; and there begins a "world of definitions".....

Monday, February 6, 2012

Automation/Requirements Document/Process/Certification cannot find bugs

Product being tested --- Breath analyzer to detect alcohol!!!

Objective of product --- Analyze the air to identify whether the person blowing into it has had alcohol or not.

What the product did not do --- Analyze whether the person being tested has actually blown air into it or not.

And the test case --- Get drunk. Totally drunk. Get analyzed by the breath analyzer, but don't blow air into the equipment.

And the test case result --- Failed, since the breath analyzer does not detect whether you actually blew air into the equipment or not.

And what's the bug? --- Expected behavior is that the system should detect if the person is blowing air into the equipment or not. Actual behavior is that it does not detect this.

And you won't find this test case in the requirements document; not in boundary value analysis, equivalence partitioning or any such method; no testing certification can help you detect this flaw; no Six Sigma or CMMi process can help you find this test; and no automation suite can help you prevent it.

In spite of all of the above, this bug has been around in breath-analyzing equipment for a long, long time. That proves the theory that there are more fake testers than me around :). Anyway, the point I was trying to make is that testing is best left to humans and not to automation suites, processes, or methodologies. The best tester is still the man, and not the machine!!!

Wednesday, January 18, 2012

SOPA, wikipedia and black days...

SOPA --- This term is doing the rounds these days and a lot has been written about it already. Today, Wikipedia has termed it a black day for itself.

I interviewed myself today; the objective was to execute only 1 test case to test the implementation of the SOPA act when it gets implemented, and try to break it on the 1st try. My test case is listed below:-

Test Case --- Search for a wiki page that has blacklisted material and has been in existence for a few years; confirm that the material is blacklisted on the wiki; do a Google search, visit the Google cache and check if that information is available. My guess is that it will be available. (I had posted a blog post 2 years back and deleted it a year and a half back, and that post is still visible in the Google cache.)

Does that mean that there will be a Google Black Day too with Google users protesting to protect their data, if SOPA were to be implemented? :)

Sunday, January 8, 2012

Corporate Lies and Timesheets

All testers have filled timesheets; most people fill out timesheets stating that they work 8 hrs in a day. That is today's biggest corporate lie. We all know that it is never possible to work for exactly 8 hours, 0 mins and 0 seconds; obviously, it would be somewhat more or less than that. When questioned, the project manager would cleverly counter that claim, stating that he did not work for 8 hours, but that he did 8 hours' worth of work that day. The argument claims that he might have taken somewhat more or less time, but the work that he did was worth 8 hours. That becomes the 2nd biggest lie.

If he had the ability to do 8 hrs of work in less than that time, then how could it be 8 hours' worth of work? To answer this, the senior project manager would claim the development of components that reduce working time and improve productivity. And then he would bring in the magic word "automation" to claim that they were able to automate away that much time and improve productivity.

That's the 3rd biggest lie; most of the automation that's been developed would be screen-capture components. The 3rd question is: if automation reduces the working hours, then why does it not reduce the billing time given to the client? To answer that, the manager would most probably say that they would reduce billing time, but the tool being used is intellectual property, and the client has to pay for its usage.

And the conversation goes on... The conversation, which started with a focus on quality, ends with money. In the end, money wins and quality loses!!!

Saturday, December 31, 2011

The new year bug. Are you aware of it?

Happy new year, everyone. But do you realize that there's a high-severity bug, and a workaround for it, in the new year?


Here's the Requirement --- That the people of the world have a point in time at which they can gather together to celebrate the completion of 1 full cycle of the earth around the sun.

Expected Behavior --- That the earth completes 1 full cycle around the sun at midnight on Dec 31.

Actual Behavior --- It takes some more time for the earth to complete 1 cycle; that's why we have the leap year.

End result --- All of us accept this bug; we have changed our lifestyle to have the leap year so that we can live around this bug; and life does not stop, it goes on.

Most high-severity bugs are like this; what we fail to realize is that there's a workaround for every bug. You just need to tweak the design to make the bug extra-special (like Feb 29) and change life around it. Not every bug needs fixing; some just need workarounds.
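The Feb 29 workaround itself can be written down as a tiny, testable rule. A Python sketch, using the standard Gregorian calendar rules:

```python
# A year gets the "extra-special" Feb 29 when it is divisible by 4,
# except century years, which must also be divisible by 400.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(2012), is_leap(1900), is_leap(2000))  # prints: True False True
```

Even the workaround needed its own workaround: the plain divisible-by-4 rule drifted, so the century exceptions were bolted on later.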

But don't be misguided by me; more often than not, this would be a failing argument when you have vice-presidents and directors on the other side of the table.

And my Lesson Learnt --- Every bug has a workaround; it depends on how you make it sellable as a special feature so that the world accepts the workaround. Else, you'd better fix it :)!!! Happy new year and happy "testing times" to all of you in 2012!!!

Monday, October 24, 2011

Testing team --- Thank you for the family time that you sacrificed for us!!!

Have you ever heard anything like that? It's a very nice thought that every member of the management should thank the test team for the extra time they spend at the workplace.

Doesn't matter if you are the program manager, project manager or whoever... Please take some time to thank the test teams for all the extra hours they spent ensuring quality!!! Might have been hours pored over a requirements document clarifying a requirement, might have been hours investigating the usage of automation, might have been hours spent when called into work during a son's birthday or a wedding anniversary, and it can even be a few hours lost testing the wrong build... The truth is that all of us spend extra time at the workplace; not for personal whims and fancies, but to ensure product quality!!! Please take a few extra mins to thank the testing team members for spending the extra time on the project... That small bug they raised in the extra time might have saved your product from disaster, indirectly saving your job!!!

Thursday, September 29, 2011

Blocking a Release - Happy or Sad ?

Test teams block a production release. You are the tester who found that bug.

Is it a good feeling to block the release? Or do you cry your lungs out for not getting out a release in time for your customers?

Do you get some sadistic happiness since your work blocked someone's release? Or do you feel sad that someone's work could not get out?

Do you feel happy that the entire org is finally appreciative of your work? Or do you feel bad for your development colleagues who slogged to meet this date and could not make it?

Do you feel happy that you did not deliver a half-baked product to your customers? Or do you feel bad that you could not have done it earlier?

Do you feel happy that your bosses praise you? Or do you feel bad for your dev counterpart who gets yelled at?

End of the day... blocking a release results in feelings... some happy, some sad!!! Yes, even a tester feels sad that a release could not get out in time... (after all, blocking a release means "no launch party", right? :))

But when a launch or release happens in time? Life becomes happy for all... at least till the next launch :) !!!

Friday, August 26, 2011

A tester all life... 7 Questions and no answers!!!

Question 1 - If you ever say the words, "I want to be a tester all my life" to your management, what do you think would happen?

Question 2 - Would they be happy with your above decision because you have long term career focus?

Question 3 - Would they treat you as a visionary because you have a very clear idea of what you want to do in life?

Questions 4 & 5 - Or, would they term you a loser since you have no ambition to grow up the career chain? And advise you to become a test architect or manager?

And if you don't have a blog, you don't participate in newsgroups and online forums, you don't market yourself, you are "uncertified", you don't read blogs and magazines, you refuse to use test tools because you don't trust them... and you have been a tester for the last 12 years, and an expert one at that.....

Question 6 - Would your management recognise your potential?

Question 7 - Or would you be branded as a resource who's not grown up the ladder, and get replaced?

Well, as the title says, there are only questions... no answers. Only "test cases"... no "expected behavior". At least I don't have them. If you do, please share.....

Saturday, August 20, 2011

Hello World Again!!! Getting rid of Procrastination...

Hello world again!!! I'd disappeared for a long time. I don't think I've updated my blog since last November. What was I doing? Where was I? Well, most of you know... but for those who don't, I've been here all along. I'd started writing for Testing Circus in Jan and they've been kind enough to publish my writings as a regular feature. Other than that, let's just say that I've been trying to change the world. And succeeded in parts... and failed in parts.

Why did I not write? Because I was busy perfecting the art of procrastination. Every day, I told myself that I'd write "tomorrow"... "next week"... "after a few days"... etc. Having mastered the art of procrastination, I've decided to try and master "how to de-procrastinate". I decided to help myself by not reading a single page of those 1000+ self-help books that are available. I decided to fight the battle myself and came up with this idea ----- "When I see myself procrastinating, I won't feed myself till I complete whatever it was that I'd procrastinated on". And when I get rid of all my other bad habits, I think I'll have enough material to write a book about :)!!!

Does starving yourself until you de-procrastinate work? Well, it did for me... I wrote this blog post :)!!! It helped me with other things in life too...

Does it work for you? If it does, do let me know... then I can become a "de-procrastination guru" of sorts, quit my job, and travel the world offering advice on correcting yourself :)!!!

NOTE:- By the way, I never said "don't drink coffee"; just don't eat till you complete what you set out to do.... that's all!!!

Sunday, March 6, 2011

Mis-adventures of the fake tester - Part 1...

In the past few months, I have been writing a series of articles for Testing Circus. They have been kind enough to publish them in a column named "Fake Tester's Diary".

The 1st article was an introduction to the fake tester and the process of induction. You can read this article and much more in the Jan issue of Testing Circus, available at http://testingcircus.com/January2011.aspx

Monday, December 6, 2010

Brakes and Defect Prediction...

Brake prediction and defect prediction are related. Defect prediction, as I define it, is the ability to predict the number of defects that would occur in the next development cycle. Is defect prediction possible? Read on for my thoughts.

I did the following exercise. My workplace is an hour’s drive from my home. I challenged myself to predict the number of times I’d brake on a day, while driving from home to office.

What I did was as follows:-

1) From Day 1 to Day 5, I counted the number of times I braked each day to create a data repository.

2) From Day 6 to Day 15, I used the information gathered from the previous days to try and predict the number of times I'd brake each day. I also updated the data repository at the end of every day.
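The two steps above can be sketched in a few lines of Python. The daily brake counts below are invented for illustration, and the running-mean predictor is my assumption of the simplest model one might try:

```python
# Days 1-5 seed the data repository; from day 6 onwards, each day's brake
# count is predicted as the mean of all previously recorded days, and the
# repository is updated at the end of the day.
observed = [12, 18, 9, 15, 21,      # days 1-5: seed data (invented numbers)
            14, 30, 8, 22, 11,      # days 6-15: predict first, then record
            19, 7, 25, 16, 13]

repository = observed[:5]
errors = []
for actual in observed[5:]:
    predicted = sum(repository) / len(repository)
    errors.append(abs(actual - predicted) / predicted)  # relative error
    repository.append(actual)                           # update the repository

# Even with stable "traffic", a naive prediction can be off by well over 50%.
print(max(errors) > 0.5)  # prints: True
```

A fancier model would not change the moral of the post: the day-to-day variance swamps the predictor.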

My top 7 observations are listed below, each followed by its corresponding learning for software testing.

1) Every day, I applied the brake at least once on my way to work.

All of us can predict at least 1 defect in the system. (I guess that's the closest we can get to predicting defects.)

2) Though the traffic conditions and road crowd were the same on all 15 days, my predictions were wrong. They had a variance of ±50%.

Number of lines of code cannot be used as a factor for predicting defects

3) Regardless of how many times I braked, I always reached office in 60-70 mins. The average time to reach the destination was not affected by the number of brakes.

Your project schedule will not be determined by your number of defects. Rather, it is determined by your ability to detect and fix them quickly.

4) Sometimes, when I braked, I had to remain stopped for long periods of time, due to different reasons: traffic, crowds, signals, etc.

Severity of defects can never be predicted at all

5) I thought I'd mostly hit the brakes at the same spots every day. Wrong. More than 50% of the time, I hit the brakes at different locations.

Defect prediction becomes a highly misleading factor. It also gives you the false notion that the total number of defects will not exceed a given number. That may be proved false after the application is launched.

6) On at least 2 days a week, I had to take a different path to work. My prediction algorithm went haywire on those days.

Along the way, teams need to display a lot of agility. Many a time, you have to take a different direction to reach your goal. Your defect prediction algorithm may not factor in these changes of direction.

7) And False Alarms!!! Sometimes, I expected an obstruction and hit the brake, but it turned out to be a false alarm.

You cannot predict false alarms. Sometimes, you might spend a few hours on a trivial or "difficult to reproduce" defect.

Well, I certainly was not accurate in predicting my brakes or defects, but I at least learnt enough to make it worth a blog post :)!!!

And last, the fake tester's gyan

1) Don't bother about the future. Worry about the present.

2) Ask yourself this question --- If you had 5 mins at your disposal, would you use the time to test the product, or to predict the total no. of defects the next development cycle would throw up?

3) And lastly, when we say that it's not possible to fully test a product, how can we predict the number of defects in the future?


If the number of defects can vary by 50%, then how do you estimate the work for dev teams during testing? Well, more on that another day!!!

Thursday, October 28, 2010

Re-defined Definitions...

Someone said 1 billion dollars is spent annually on software testing. I believe more than half of that goes into fake testing :)!!!

Following are some Software Testing Definitions. Please take a minute to read and enrich your knowledge.

Sandbox testing - Software testing that happens when we have received the build, but the dev teams have not yet completed coding some of the functionality.

Waterbox testing - Software testing that happens when the test team gets the build, but for some features, the design is not yet complete.

Gasbox testing - Software testing that happens when the test team receives a build, but the dev team has not yet started coding.

Casual testing - Software testing on Fridays, when the test team is dressed in casuals. The casual attitude to testing results in what are called "Casual Defects".

Formal testing - Software Testing that happens on Mondays when the test team is fully dressed in formals. This testing type results in "Formal Defects".

Gamma testing - Software testing that happens after alpha testing and beta testing.

Lateral Testing - Software Testing done by a team member who does not belong to your team, but to a different team.

Serial Testing - Software Testing done by testers in serial. Teams test only 1 functionality at a time.

Parallel Testing - Software Testing done by testers in parallel. All functionalities are tested simultaneously.

And by now, if you have started thinking that the above definitions are true, then what's also true is that YOU ARE A FAKE TESTER LIKE ME!!!

Honestly, a year back, I really did think that stuff like the above was true. What I did not realize was the following.

Definitions... are man-made. They are created for the convenience of the author. As a reader, it would be good if you spent time trying to understand the concept presented by the writer and the intent of the author. It would be much more beneficial for you to understand the concept, rather than mug up the definition and become a subscriber to definitions.

Fake Tester's Gyan

Stop being a believer of Definitions.

Start being a believer of Concepts.

It's about time you stopped believing definitions and started believing concepts!!!

Thursday, October 14, 2010

Interviews – Stop thinking aloud. Start thinking, Channelize your thoughts and Reply…

In the past few years, I have conducted quite a number of testing interviews. Some of the questions that I've asked are:-

"How do you test a pen?”, “How do you test a mobile?”, “How do you test a remote controller?", "How do you test a random number generator?", "How do you test an application that generates the fibonacci series?" etc.
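As a sketch of how one of these might be approached, the Fibonacci question can be answered with a property-style check: instead of comparing against a hard-coded list of values, verify the defining recurrence. The `fib()` below is only a hypothetical stand-in for the application under test:

```python
# Stand-in implementation of the application under test (hypothetical).
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Property-style check: the base cases hold, and every later term
# is the sum of the previous two.
assert fib(0) == 0 and fib(1) == 1
for n in range(2, 30):
    assert fib(n) == fib(n - 1) + fib(n - 2)
print("all checks passed")
```

The property survives even if the implementation changes, which is exactly the kind of structured answer an interviewer hopes to hear.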

When I asked candidates "How do you plan your testing for a website selling mobile phones, interacting with 3 suppliers?", none of them paused to think. The answer came immediately, like the one below.

“I'd plan for sanity testing. Will plan for testing the site against XYZ interfaces. L&P Testing needs to be a part of the test plan. I'll have daily stand-ups. I will talk about cost and variances to think about estimation. I'll have a risk plan for managing risks proactively. I will do a requirements-traceability-matrix..."... and he'd go on and on and on.

After talking to many candidates, it struck me that most of them, when answering the above questions, did not pause to think, or ask for time to think. Though it seemed that they were answering the question, they were only thinking their thoughts out aloud.

Thinking about it a bit more, I guess the best way to answer such questions in an interview would be the following 4 steps:

STEP 1:- Think.
Ponder the question for a minute, think about the answers and various possibilities for the next couple of mins, and speak up when you are prepared to answer. If you want more time, ask the interviewer for it.

STEP 2:- Channelize your thoughts.
Think about the solution and channelize your thoughts to ensure that your answer is structured correctly, or the way you want it structured. A structured answer will definitely earn you a lot of brownie points with your future employer. If you want, write down short points on paper before you start talking.

STEP 3:- Prioritize the reply and speak it out accordingly.
Go ahead and start answering the question. If required, refer to your short points while you answer. If you need more time to think, ask for it.

STEP 4:- Invite the interviewer to discuss your reply.
Ask if the interviewer has any questions, or invite him to discuss the finer points of your answer. Try to give logical reasons for your prioritization decisions.

FST Gyan section -
If you are being interviewed, then
Start ---> asking for time, if you think you need it.
Stop ---> thinking your thoughts out aloud. Interviews are not a forum to think aloud. Secondly, interviewers will definitely be impressed if you ask them for time to think.

If you are the interviewer, then
Start ---> asking the candidate to think and then reply.
Stop ---> moving on when the candidate immediately answers your questions. Most probably, he's not answering the question, but "thinking out aloud"!!!

And yes, as always, have a happy interview!!!

Tuesday, September 28, 2010

“Non-Reproducible defect” – “Non-occurring” or “Unable to recreate”?…

Non-reproducible defects are sometimes even more lethal than reproducible ones. If ignored, you can never predict when a non-reproducible defect will rear its ugly head and kill the product. Here I try to present a personal perspective on the "top 5 myths" around non-repro defects.


Myth 1: That non-reproducible defects are harmless

Not really. If it's happened once, it's likely that it can happen again. Non-occurrence in a protected environment does not mean permanent non-occurrence in production. Secondly, can the defect be ignored since it cannot be reproduced? Definitely not. Test around the areas of the defect to explore it further. You never know what you can unearth unless you dig further!!!

Myth 2: That the development team have fixed the non-reproducible defect

A personal favorite. I have heard many stories of dev fixing non-repro defects. How do you verify a bug-fix for a bug that's not reproducible in the first place? I don't know. I have never figured it out. Maybe they know something that you don't. If you have good interpersonal relationships with the developers, you should be able to talk to them, understand the root cause, try to re-create the error conditions to reproduce the defect, and confirm if it's fixed.

Myth 3: A large number of non-reproducible defects? No, it doesn’t mean much

Maybe a large number of non-reproducible defects means one of the following:
a) Test conditions are varying due to test environment instability
b) Testers have not understood the product design
c) Dev and testers don’t talk to each other
Whatever it is, it means chaos!!! And that's an ill omen and bad health for the product.

Myth 4: That the non-reproducible defect is a scenario that's never happened before

If it’s an application in production (a maintenance application), maybe it’s happened before. Look around.
a) Talk to the support teams to check if it's happened in the past.
b) Talk to veteran project members to see if similar defects have been unearthed in the past.
c) Look into the bug database.
d) Check the support log files (if you have access to support calls).
e) Network with earlier project testers who you are in touch with (even if they have moved to other projects or companies).
f) Talk to people who have tested the application earlier to see if it's happened before.
Remember, history is a great teacher!!!

Myth 5: That the sole owner of a non-reproducible defect is the tester

Non-repro defects are often mentioned as errors of the tester. I disagree. It's more like a case of a person being punished for trying to reach objectives beyond boundaries. When a tester invests time in trying to showcase a rarely-occurring problem, we actually punish him by thrusting ownership on him. Metrics-crazy companies have assigned metrics to each developer and tester, and I am pretty sure that this results in the tester being blamed if a defect’s non-reproducible.

My take --> I feel that this defect should be owned by the entire project team. No single-person ownership for non-reproducible defects.

Fake Tester's Gyan

1) Don't ignore them

Most of us tend to ignore the defects which cannot be reproduced. That's akin to committing suicide. Please ensure that you address all non-repro defects before going live. Ensure that the right people get to view and discuss them before deciding on the next steps for these defects.

2) A non-repro defect session

Why is it non-reproducible? Have a session with the entire project team to brainstorm regularly on all non-reproducible defects in the project. Maybe we would then understand patterns in why a defect is non-reproducible!!!

3) Business Severity

And look at business severity, as always. If it's high priority, then please spend some more time looking into the surrounding areas. Try defining the test conditions. Look for cases outside the requirements. Maybe it's a slow system, a slow connection speed, a memory leak, etc. that's resulting in this behavior. Classify high-priority non-reproducible defects into a bucket and spend 1 hr a week analyzing them. It's better to spend time on bug research now than after the system goes live.
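The weekly triage suggested above can be sketched as a simple filter; the bug records here are hypothetical stand-ins for whatever your bug database exports:

```python
# Hypothetical bug list; in practice this would come from the bug database.
bugs = [
    {"id": 101, "severity": "high", "reproducible": False},
    {"id": 102, "severity": "low",  "reproducible": False},
    {"id": 103, "severity": "high", "reproducible": True},
]

# Bucket of high-severity, non-reproducible defects to analyze for 1 hr a week.
review_bucket = [b for b in bugs
                 if b["severity"] == "high" and not b["reproducible"]]
print([b["id"] for b in review_bucket])  # prints: [101]
```

The point of the bucket is only to make the weekly session concrete: a short, stable list to walk through, instead of re-scanning the whole database each time.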

That's all!!!

As always, happy testing!!!

Monday, September 13, 2010

Happy Programmer's day!!!

Hi Programmers,

Best wishes from the fake software tester for a wonderful and bug-free "Programmer's Day" :)!!! Yes, for those who don't know what I am talking about, Sep 13 (or Sep 12 in a leap year) is celebrated as Programmer's Day, at least in Russia!!! And I am not surprised.

The logic is to celebrate the 256th day of the year as Programmer's Day. As you know, 256 was chosen since it is the number of distinct values representable in 8 bits (2^8). But I have the following question about celebrating the 256th day.

Don't you guys think that your counting should start from Day 0, and that you should celebrate Day 256 (which'd fall on Sep 14, or Sep 13 in leap years) as Programmer's Day? Your guess is as good as mine. Maybe we can log a defect against the above logic :)!!!
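The day-counting argument is easy to verify with a few lines of Python (`day_of_year` is a helper name made up for this sketch):

```python
from datetime import date, timedelta

# Day n of a year, counting either from Day 1 (the official convention)
# or from Day 0 (the programmer-friendly convention suggested above).
def day_of_year(year, n, zero_indexed=False):
    offset = n if zero_indexed else n - 1
    return date(year, 1, 1) + timedelta(days=offset)

print(day_of_year(2010, 256))                     # prints: 2010-09-13 (common year)
print(day_of_year(2012, 256))                     # prints: 2012-09-12 (leap year)
print(day_of_year(2010, 256, zero_indexed=True))  # prints: 2010-09-14
```

So the official convention lands on Sep 13 (Sep 12 in leap years), while zero-indexed counting shifts everything one day later, exactly as argued.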

And what about a Tester's Day? Maybe it can be celebrated the following day, for the simple reason that testing follows development. Happy Tester's Day too!!!

Friday, September 3, 2010

Look Closer…He's a Manager, not a mentor!!!

Today's world expects managers to play a mentoring role for all of their team members. But this expectation seems fundamentally flawed. It is actually very difficult for a manager to be an "effective" mentor to his team members. Why? If you are really interested in my opinion, please read on.

1) Difference in Priorities
A manager's priority would be the company's good health. A mentor's priority would be your good health. An example: your manager would never ever advise you to quit your job to reach your goal; your mentor would!!! This basic difference in priorities is the 1st hurdle for a manager to be a mentor.

2) Interests Vs Loyalties & Following your Dreams
When your interests and corporate interests clash, a manager would advise you to do what's beneficial to the company. A mentor would nudge you towards what'd be beneficial to yourself.

For example, if you are a very good black box tester in a manual testing company, with an aspiration towards networking, a manager would show you a nice career path in the direction of black box testing, while your mentor would ask you to follow your dreams. A mentor would most probably say --- "Use this job for sustenance, do a course in networking and join Cisco"!!!

3) Favoritism towards top performers
Managers are bound by loyalties towards old-timers, loyalists & top performers. A manager does not invest time in a weak performer. But mentors don't let down a weak performer. You would want a mentor who would not let you down, wouldn't you?

4) Are you his Competitor?
When your mentor becomes your manager, the following question might creep into your mind --- "Is my career not progressing because he thinks I am his competition?" Somewhere down the line, your mentor, now your manager, becomes your dreaded enemy and, sadly, a rival!!!

5) And the Credit Crunch...
When your mentor becomes your manager, who do you think gets credit for a successful launch, and who do you think gets blamed for failures? Work becomes more difficult when this question takes root in your head. As we all know, a mentor would always take the blame for failure and credit you for success.

6) Confessing your faults...
It's very difficult to confess your weaknesses and faults to your manager. You would believe that it'd come back to haunt you at "appraisal time". But you would never have this fear confessing them to your mentor!!!

Those are some blockers that I could think of... as to why managers should not try their hand at mentoring... They are called Managers... and should strictly stick to Managing!!!

And no, I am not even hinting at asking you to get through life without a mentor... That borders on insanity!!! You always need a mentor to guide you in the right direction.... The results can potentially be hazardous when your manager becomes your mentor... or vice-versa!!! And below is my suggestion to solve this.

The Internet has shrunk the size of our world. That means it's possible for you to reach out to anyone you want, even someone at another corner of the world. So, please reach out... scan the entire world to identify your mentor. If you are looking at your manager to do it, then... keep in mind, he's a Manager first... then a mentor!!!

Monday, August 16, 2010

Fake Software Tester’s Guide to attend meetings…

The title of this post tells what it is all about. A common scenario as you climb the perceived corporate ladder is the need to attend a lot of meetings. Once you grow beyond the managerial level, plan to spend at least 4 hours a day in non-productive meetings. And some of these meetings you would have to attend even though you have no idea what they are about. This post is about surviving such meetings.

Listed below are a few survival tips for such meetings – the ones where you have no idea of the topic being discussed.

1) The sandwich Blackberry method
A large sandwich and a BlackBerry can be your saviors. Walk into the meeting with a large sandwich and keep fiddling with your BlackBerry for the entire duration of the meeting. Walk out of the room every 10 minutes to attend to that very important phone call and keep walking back in. You can rest assured you won't be troubled by anyone for the duration of the meeting.

2) The copy-Paste method
Before the meeting, grab your team member who's attending it and ask him what he's going to talk about. Ask him a few questions and make sure you remember all his answers. Then send him off on an errand so that he doesn't attend the meeting. Now, re-state whatever he told you and answer the same questions that you asked him. Nobody will disturb you after you've made your point.

3) The Magic words
Remember these words - Bottom Line, Bird's eye view, Revisit, Thinking Hat, out of the box, Paradigm, Game Plan, win-win, Leverage, etc.
The trick is to pick 2 words from the list above and form a sentence, at any time during the meeting. Some examples:

a) Guys, it's time to put on the thinking hats to think of a win-win proposal
b) Folks, let's come up with an out-of-the-box solution, which is pro-active and leverages our strengths.
c) The Bottom Line is that we need to revisit if we are doing things the right way. Take a step back to have a Bird's eye view of the solution.

You can try this exercise yourself, and within a few practice tries you'd become a master at it. Create such statements from these words at random and you'd sound like a veteran of senior management.
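To stay in the spirit of the post, the trick above is mechanical enough to automate. Below is a playful sketch, assuming a hypothetical `meeting_line` helper and a few made-up sentence templates; only the buzzword list comes from the post itself.

```python
import random

# The buzzword list from the post.
MAGIC_WORDS = [
    "Bottom Line", "Bird's eye view", "Revisit", "Thinking Hat",
    "out of the box", "Paradigm", "Game Plan", "win-win", "Leverage",
]

# Hypothetical sentence templates; each consumes two magic words.
TEMPLATES = [
    "Folks, the {0} is that we need a {1} view of the problem.",
    "Guys, let's {0} our {1} and take it from there.",
    "I feel we should take a step back, {0}, and come up with a {1} solution.",
]

def meeting_line(rng=random):
    """Pick two distinct magic words and drop them into a random template."""
    first, second = rng.sample(MAGIC_WORDS, 2)
    return rng.choice(TEMPLATES).format(first, second)

print(meeting_line())
```

The grammar of the output is deliberately left to chance; in a buzzword-salad meeting, nobody will notice.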

4) The Blame-it-on-others Method
Start off a conversation at the beginning of the meeting to talk about bad food in the canteen, rising fuel bills, lack of facilities in the company, bad candidates lined up by HR for interview, unrealistic client demands, etc...

5) Google will help you
If it's a meeting about a product evaluation, do a vigorous Google search an hour before the meeting. Scribble notes from all over the internet and present them as your viewpoint. Mostly, nobody will know and you can escape unscathed.

And my pick goes to the method described below... the best thing to do is....
6) Being honest
Yes. The last way is honesty. Stand up and tell everyone that you cannot do the meeting justice by attending, since you have no idea what's being discussed. Tell everyone that you'd be better off doing something productive than being a part of it, and walk out. That's not something that happens very often, but I guess it's the most honest way.

If you can think of more such practices, I'd be happy to hear them and it would be great if you can post them in the comments section!!!

Well, what about those telephonic meetings that you need to dial in to attend??? Well, save that thought.... more on those another day!!!

Thursday, August 5, 2010

The Defect I Leaked

Earlier in life, in the late 90s, I was hired to test a software application's installation. After the usual late hours, fights with Development, etc., I certified that the product worked fine and okayed the release.

What Happened then?
Within a day of going live, the support team was flooded with a zillion calls saying that the application would not install. All the installations aborted with the error message "INVALID OS". The products had to be recalled, the defect fixed and the product re-shipped.

Root Cause on Investigation
The installation software had logic to determine the version of the client machine's OS. Both Dev & Test assumed this entry lived at one particular location, while most of the live systems had the info stored in a different location in the registry. As a member of the test team, I did not do sufficient research before testing and missed this important detail, which at the time seemed insignificant to me.
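With hindsight, the defensive fix is a fallback lookup: probe every known candidate location instead of assuming one. A minimal sketch of that idea, using an in-memory dict standing in for the registry; the paths, the `detect_os_version` helper and the stored value are all hypothetical, not the real product's.

```python
# Hypothetical stand-in for a registry: path -> value.
# The real registry paths differ; these are illustrative only.
FAKE_REGISTRY = {
    r"HKLM\SOFTWARE\Vendor\CurrentVersion": None,    # missing on live systems
    r"HKLM\SOFTWARE\Vendor\Setup\OSVersion": "4.0",  # where the info actually lived
}

# Candidate locations, in order of preference.
CANDIDATE_PATHS = [
    r"HKLM\SOFTWARE\Vendor\CurrentVersion",
    r"HKLM\SOFTWARE\Vendor\Setup\OSVersion",
]

def detect_os_version(registry):
    """Return the first OS version found, instead of assuming one path."""
    for path in CANDIDATE_PATHS:
        value = registry.get(path)
        if value is not None:
            return value
    # Only after exhausting every known location do we give up.
    raise RuntimeError("INVALID OS")

print(detect_os_version(FAKE_REGISTRY))
```

The test-design lesson is the same either way: a test case per candidate location, plus one for "found nowhere", would have caught the leak.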

Who's to be blamed for the Fiasco?
3 People who were responsible --- I, Me & Myself.

Defect Leakage --- A 1st for me
It was the first defect I leaked that had a very high business impact. The fault was mine, mine & mine. Like so many other people in the project, I assumed that the operating system version was stored at one particular location only. I assumed that it could not be stored elsewhere and so, I DID NOT TEST FOR IT. BY ASSUMING, I LEAKED A DEFECT!!!

Till today, the bug haunts me every time I take up a testing assignment. The fact that I can still leak a defect makes me more determined to ensure that I don't leak defects, at least now.

A Fake Software tester...
--- Never admits his fault to a defect that he's leaked
--- Always thinks someone else is to blame for a defect that he's leaked
--- forgets the leaked defect too soon in life and invariably, leaks many more
--- Tries to pass off the root cause of the leakage as something else --- bad environment, bad specification, bad coding, time pressure, no requirements, etc. etc. etc....

In every interview of mine, I ask the test engineers applying for the job about any defect they have leaked earlier. And to date, most of them have said that they have never leaked a defect in their entire life!!! This is coming from testers with 6-10 years of experience in the Software Testing Industry.

Do I see an entry here for the files of Ripley's Believe It or Not? (Personally speaking, a tester who's never leaked a defect can be equated with a developer who has never had a single bug filed against his name.) Unbelievable!!! Truly, Unbelievable!!!

The truth is that even experienced testers leak defects. And only we know about the defects we leaked. We have to remember a leaked defect all the time, to remind us that we are also vulnerable, so that we will at least try to avoid leaking defects in the future!!!

Fake Tester's Gyan (of course, the gyan giving is very important too)

1) Admit Blame --- Yes. Admit blame at least to yourself, if not to the entire world!!!
2) Shameful, it is not --- Though you might feel some shame in admitting to a leaked defect, trust me, it is not shameful. What's actually shameful is NOT ADMITTING IT EVEN TO YOURSELF!!!
3) If I leaked defects, am I a failing tester? --- A common misconception is that you'd seem a failed tester to your peers if you admit it. You are not failing. You are the only one speaking the truth, and you are sub-consciously scaling the success peak with your honest admissions.
4) It takes Guts. Do you have them? --- It takes guts to admit to a defect you leaked. I am very sure that you do have a leaked defect. But do you have the guts to admit it?

If you are mentally strong enough to admit to a defect that you leaked, please feel free to share it in the comments section below. If you are not bold enough to share it with the world, spend 10 minutes in retrospection, admitting it to yourself!!! Owning up to yourself about a defect you leaked makes you a better person; and a much better tester in the long run!!!

Happy Admissions, if you dare to admit!!!

Friday, July 9, 2010

To Trust, or Not to Trust - Part 1 of 7...

"To be or not to be --- That remains the question..." - started Hamlet in his soliloquy. But, what he did not know was that he was being watched by a few other people when he was delivering it.....

Similarly, a software tester also has to, over time, build credibility with other team members – Be it peers, or bosses, or bigger bosses, or sub-ordinates. Having said that, a tester should also know a list of people he can trust.

Trees have been felled in this go-green world writing millions of zillions of articles on why test teams need to build credibility with other team members. But the question posed here is -- have the other teams built their credibility with the testers?

Need for the Question
Is there a need for the question posed above? Why does the test team need to know which other teams have built their credibility with the test team?

Well, to cut a long story short, let’s take a quick look and ask the tester the following questions, before he starts testing.

1) What’s your confidence level on the fact that there will not be scope creep?

2) What’s your confidence level on coverage of non-functional requirements?

3) How sure are you that the defect is fixed, when the developer checks-in code at 11 PM?

4) Do you believe that the Dev teams have completed the code review? Or are they under pressure from senior management to start testing at the earliest and have turned a blind eye to the code being reviewed?

5) A problem has occurred in production and you have been asked to call the support engineer. How confident are you that the production support engineer has given you all logs? How confident are you that he’d support you?

6) You have goofed up big time. How confident are you that, when you confide this in your boss, your boss won’t blow it out of proportion, but would correct you?

7) You have just conveyed the biggest risks to your project managers, whose objective is on-time delivery. How confident are you that they would have conveyed the risks to the clients?

And the answer is…
The answer to each would be a confidence %. Or a confidence quotient. And that will be higher or lower, depending on how much credibility the person concerned has built with you over time.

For the software tester, life is tough, since people start noticing him only at the tail end of the test chain. He is the last line of defense and has to handle all the pressure.

During this time, the tester will need to make decisions on who can and cannot be trusted. How is this decision made? Only on the basis of your relationship with the person and how they have reacted to similar situations in the past. And that's why you need to know who has built their credibility with you.

I will try to pen my thoughts in future posts on what can potentially happen when there's a lack of trust between the tester and a peer. Will try to post them in another 7 or 8 short boring posts.....

Till then … to Trust, or not to trust... remains your question to be answered!!!

Wednesday, June 16, 2010

Software Testing and the "Hand of Clod"!!!

England played the USA in their first FIFA 2010 World Cup match, and most of the world saw Rob Green's (the England goalkeeper's) mistake that let the USA escape with a draw.

The next day, the entire world started blaming Rob Green for his mistake, attributing the goal to his mistake alone. People contemplated dropping him. We even saw the "Hand of Clod" headline in many newspapers.

I am not trying to say that the England Goalkeeper is not at fault, but let's try to answer a few questions below...

1) Who gave away possession of the ball to the USA?
2) Where were the mid-fielders and why did they let the USA Offense inside?
3) Where were the defenders and why did they let the USA Offense so close?
4) Why could the defenders not intercept the shot to the goal?
5) Who was assigned to mark the scorer? Where was he when the scorer scored?
6) Who selected Rob Green to play in the match?
7) Who decided to buy the gloves that the Goalkeeper was wearing? Was it tested for circumstances wherein the ball meets the gloves when the gloves are wet? What were the test results and who tested it?

And so many many many more questions.....

If the answer is not "England Goalkeeper" for any of the above questions, then I don't think that it is right to blame him for the goal.

What is important is - a part of the blame is his; even more important is ONLY A PART OF THE BLAME IS HIS. The rest of the blame has to be owned by other members of England World Cup Soccer team.

Drawing a parallel between this and today's software tester, now think about a scenario wherein a tester has leaked a very easy-to-detect showstopper defect.

Writer’s Irritating Personal Note:- My opinion is that all of us (the readers and the writer of this post) have leaked defects in the past; but only a few honest testers would admit it. Whoever claims otherwise is a "fake tester"!!!!

Coming back to the story, the entire team blames the tester for leaking a very easy-to-detect showstopper defect.

Nobody questions the following which could have resulted in the defect...

1) Wrong Requirements sent by the Client --- We cannot question the client, can we?
2) Design problems which would have resulted in the defect
3) Lack of Unit Testing or Test Coverage
4) Waking hours of the tester before he detected the defect (What if he's been testing continuously for 12 hours and let the defect slip in the 13th hour of testing?)
5) Scope Creep in form of Change Requests
6) The boss who was breathing down the tester's neck when the tester was testing

and a million zillion other questions!!!

As part of the "most convenient for all" corrective action, the test team or tester gets fired and the world is happy!!!

Question yourself. Is that a corrective action or a "convenient" action?

My take on the incident
Now, I am not saying that Rob Green should not be blamed at all. He is definitely to be blamed. It was his job to stop a goal and he let in the goal.

But, what I am saying is that he is not the only person to be blamed. Likewise, the software tester is not an individual to be blamed when a defect is leaked.

Until and unless the entire engineering team stands up and takes responsibility, such defect leaks will continue and recur in the not-so-distant future!!! I think that's what teamwork is all about... taking someone along with you when the going's not so good... If that does not happen, then I think we need another blog on "fake teams" ;) !!!

Defects should start and stop with the entire team. If you are a part of the team, then the defect owner should be you, YOU and YOU!!!!! Until and unless you try to own a defect that you have come in contact with, defect leakage (and goal leakage) will continue!!!!!