Automation Saves the Day

We have a new developer working on fixing some bugs, and she got assigned a tough part of the system. The actual code is not too difficult; the testing is the real chore. The processes require a lot of preconditions to be met before any of the data gets processed. I have gone through this pain in the past while trying to test large changes to this subsystem.

Luckily, I had already run a lot of tests and set up a data generation system. This system creates just the right data to exercise all parts of the subsystem. The data generator also came in handy when the test team could not figure out how to test any of this stuff.
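I can't share the real generator here, but in spirit it works something like the sketch below (all the names and fields are made up for illustration): it builds records that already satisfy the subsystem's preconditions, so a test can start at the interesting part instead of hand-crafting setup data.

    // Hypothetical sketch of a test data generator (none of these names
    // or fields are the real ones). It builds a record that already meets
    // the subsystem's preconditions, and a test overrides only what it
    // cares about.
    function makeTestRecord(overrides = {}) {
      const base = {
        id: Math.floor(Math.random() * 100000),
        status: "APPROVED",                        // must be approved before processing
        createdBy: "test-user",
        lineItems: [{ sku: "TEST-SKU", qty: 1 }],  // must have at least one line item
      };
      return { ...base, ...overrides };
    }

    // Example: a record that exercises the empty-line-items error path.
    const badRecord = makeTestRecord({ lineItems: [] });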

The new developer was running a little behind schedule. She was finding, as anyone would, that it was next to impossible to test anything in this subsystem. I pointed her to the test data generator and her problems were solved. Now she is teaching some of the testers how to use the tool to build their own test data sets. It is going a long way toward making people's lives easier. Mine included.

Independent Verification

Our project has an independent testing team. Their goal is to verify that our software meets the requirements and has minimal bugs. The lead of this team recently left the company. Her replacement seems like she has management potential.

We just got through a big release of our software. It was to be delivered on a Friday. I was taking it easy that Friday since it had been a long week. Things were supposed to go out smoothly; checking our production install was just a formality.

One of the testers noticed some errors in the application. It turned out the changes for the release were not in order. The disturbing part was that those changes had passed testing; now it looked like the pass had been achieved in error. The delivery was late and we had an emergency on our hands.

They needed some testing done. The guy who was supposed to have completed the testing had already left the company. Darn. The remaining testers had little clue how they might test the system. What did we do? The developers were brought in to rerun the tests. Sounds like a massive fail to me.

Shopping Cart Validation

Recently I was tasked with making sure a mock shopping cart web app was working correctly. I decided to play with the darn thing to get a feel for everything I needed to check. At first glance all seemed well. I added items to my cart, and the total at the bottom incremented correctly. Then I found some disturbing behavior.

When I removed an item from my cart, the total price did not always decrease by the right amount. It worked sometimes, but most of the time it did not. There were also some weird errors when I removed the last item in the cart. I wanted to dig further and help the developer figure out what was going wrong.

Initially the incorrect total price seemed random. Then I started calculating the amount it would decrement. Although it was not the price of the item I had removed from the cart, the total would decrement by the sale price of some other item in the store. I found that the error was predictable when I removed items from certain positions in my cart list. Those were the details that let the developer home in on the exact problem. The cart was implemented in JavaScript, and the code used multiple variables where some of them were supposed to be the same variable. The real lesson here is that a tester can shine some great light on the nature of a bug, and that helps get bugs resolved faster.
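To illustrate the kind of mix-up involved, here is a minimal JavaScript sketch with made-up names (this is not the actual cart code): two variables that were supposed to refer to the same row drift apart, so the total drops by some other item's price even though the right item gets removed.

    // Hypothetical sketch of the bug class described above: the price
    // lookup uses selectedRow where it should have used rowToRemove.
    const cart = [
      { name: "mug",  price: 8.5 },
      { name: "book", price: 20.0 },
      { name: "lamp", price: 35.0 },
    ];

    let total = cart.reduce((sum, item) => sum + item.price, 0);

    // Tracks the row last highlighted in the UI.
    let selectedRow = 0;

    function removeItemBuggy(rowToRemove) {
      total -= cart[selectedRow].price; // wrong variable, wrong price
      cart.splice(rowToRemove, 1);      // the correct item is still removed
    }

    // Fixed version: the same variable is used for the lookup and the removal.
    function removeItemFixed(rowToRemove) {
      total -= cart[rowToRemove].price;
      cart.splice(rowToRemove, 1);
    }

    removeItemBuggy(2);  // removes the lamp but subtracts the mug's price
    console.log(total);  // 55, instead of the expected 28.5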