Scripting

I read a stimulating blog post by Michiel de Mare entitled “Is Your Program Perfect”. The post was essentially a series of questions that made you consider whether there were improvements you could make to your program. One section on testing asked whether your program could be scripted from the command line. It also asked whether your program exposes an API. I thought long and hard about these questions as I considered my project's own suite of applications. And I determined they were very weak in this area. The question now is how we change the programs to improve in these areas.

In the past I had played with WinRunner by Mercury Interactive. This tool allowed you to automate testing of just about any Windows application. Tests could be scripted, and created easily by recording and playing back user actions. However, this required you to purchase an expensive WinRunner license. Furthermore, the tool is going to be retired in a few years. It would be more economical and beneficial for our applications to provide better test and scripting support themselves.

Currently one of our applications has an API that allows you to instantiate at least one of its screens programmatically. However, it is a very cumbersome API. You have to create data on the file system or in shared memory. It also requires clunky Windows messages to initiate the actions. This support exists in only one of the five applications in our suite, and only for one very specialized part of the app. It would be great if a more comprehensive and simpler API could be produced to control the application for test purposes. I envision this being of great use during the unit test phase.
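To make the idea concrete, here is a rough sketch in Python of the kind of scripting interface I have in mind. Every name here is hypothetical; the point is that a test should call ordinary methods instead of dropping files and posting window messages.

    # A minimal sketch (all names made up) of a friendlier scripting API.
    # One driver object hides the file drops, shared memory, and window
    # messages behind plain method calls.

    class AppDriver:
        """Drives one running instance of the application under test."""

        def __init__(self):
            self._open_screens = []

        def open_screen(self, name, **fields):
            # Today this would mean writing a data file and posting a
            # Windows message; the API should just take the values directly.
            self._open_screens.append((name, dict(fields)))

        def screen_is_open(self, name):
            return any(n == name for n, _ in self._open_screens)

    # A test could then be a few plain lines instead of IPC plumbing:
    driver = AppDriver()
    driver.open_screen("CustomerDetail", customer_id="12345")
    assert driver.screen_is_open("CustomerDetail")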

Obviously it will take a while to add the hooks to our applications. But we could start small and work our way up from there. Perhaps the best place to start would be to automate access to the new feature we are currently coding into the application. We should be very familiar with the design and implementation of the new code. And we already have time in the schedule to work on these pieces. During the unit test phase of these features we could add in the automated support and actually use it for unit testing.
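Assuming a scripting hook along the lines of the AppDriver sketch above, the unit tests for the new feature could then drive the application directly. Again, the screen and field names here are invented for illustration.

    import unittest

    # Assumes the hypothetical AppDriver class from the earlier sketch.

    class NewFeatureTests(unittest.TestCase):
        """Exercise the new feature through the scripting hooks."""

        def setUp(self):
            self.driver = AppDriver()

        def test_new_feature_screen_opens(self):
            self.driver.open_screen("NewFeature", record_id="TEST-001")
            self.assertTrue(self.driver.screen_is_open("NewFeature"))

    if __name__ == "__main__":
        unittest.main()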

All of this introspection came from one small part of Michiel's blog post questioning whether your software is at its best. Opening our apps up to automation may go a long way toward simplifying our unit test strategy.

White Space

Our client has a system acceptance test team. Recently they approached me regarding a problem our load software had last year. They knew it had something to do with the input data containing spaces. And they were trying to determine the impact on the system, and whether they needed to do any additional testing of it this year. I gave this problem some thought and told the test team what I knew.

There is a certain field in one of our input files that is 4 characters in length. We are required to read this field, trim off any white space, and store the result in the database. This had worked properly for a number of years. Then a developer rewrote the load routine using a new tool. They did not correctly follow the requirements and trim the white space from the field. As a result, post-load processing could not match the loaded value up with other values that had been trimmed. The application that queries the data could not make the match either.
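The bug is easy to show in miniature. Here is the difference in Python; the field value is made up, but the comparison failure is exactly what post-load processing ran into.

    raw_field = "AB  "                 # 4-character field as read from the file

    stored_correctly = raw_field.strip()   # "AB"   -- what the requirements say
    stored_by_rewrite = raw_field          # "AB  " -- what the rewritten loader did

    reference_value = "AB"             # trimmed value elsewhere in the system

    print(stored_correctly == reference_value)    # True: match succeeds
    print(stored_by_rewrite == reference_value)   # False: the match fails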

It took quite a bit of effort to resolve this problem. We had to correct the loaded data for millions of records. Then we had to run the data back through post-load processing. Finally, the original load software was corrected to deal properly with future files. It was this last part that the test team wanted to verify. It is good that they are paying attention. Since we only load this particular file once a year, nobody other than the developer tested the fix to the load code.
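The data correction itself was conceptually simple, even though it touched millions of rows. Roughly, it came down to something like this (table and column names are made up, and SQLite stands in for the real database just to have something runnable):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE loaded_records (code TEXT)")
    conn.executemany("INSERT INTO loaded_records VALUES (?)",
                     [("AB  ",), ("CD  ",)])

    # The real fix was the same shape of statement over millions of rows.
    conn.execute("UPDATE loaded_records SET code = TRIM(code)")

    print(conn.execute("SELECT code FROM loaded_records").fetchall())
    # [('AB',), ('CD',)]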

The test team was happy to hear the information I provided. They immediately ran off to identify test cases and plan the generation of some input file data to check out our code. We have an internal test team at our company as well. For some reason I doubt they have this kind of insight and foresight to verify that development follows through on the fixes we produce. It's always easier to deal with problems during test, before the software goes out into a production environment.

Access Levels

Our application suite supports a number of access levels. Each user is assigned one of five access levels. Each level has a uniquely defined behavior in the application. In general, the higher the level, the more you can do. But higher levels do not always have the ability to do everything a lower level user can do.
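A quick way to picture this is a permission table per level. The levels and actions below are invented, but they show how a higher level can still lack something a lower level has.

    # Illustrative only -- the real levels and permissions differ.
    PERMISSIONS = {
        1: {"view_records", "enter_data"},
        2: {"view_records", "enter_data", "edit_own_records"},
        3: {"view_records", "enter_data", "edit_own_records", "edit_any_record"},
        4: {"view_records", "edit_any_record", "approve_changes"},
        5: {"view_records", "edit_any_record", "approve_changes", "manage_users"},
    }

    def allowed(level, action):
        return action in PERMISSIONS.get(level, set())

    print(allowed(1, "enter_data"))   # True
    print(allowed(5, "enter_data"))   # False: higher level, but not a superset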

There are a number of times when people believe the application is not behaving correctly. Frequently this is due to ignorance of what each access level is allowed to do. Normally this happens with newer developers or testers. It usually takes a senior developer to sort out the problem and trace it back to an access level issue.

I have been working with a new tester recently. He is testing the application that I am responsible for, and he has found some weird problems in it. There is a lot of interaction between us, since I often need to know exactly how to reproduce a problem, or what specific data the tester is using to validate the application.

The other day I asked the tester to reproduce the problem he had documented. He had to do a lot of work to reset the application access levels to the ones used when he encountered the problem. He told me that he was doing a lot of testing in the application at each of the access levels. This was recommended to him by a senior tester. This encouraged me. I have to admit that, while conducting my own unit tests, I did not test all the functionality at each of the access levels.
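If our own unit tests grew scripting hooks, repeating a check at every level would be cheap. Here is a sketch with unittest, using a stub login helper since the real fixture does not exist yet; everything named here is hypothetical.

    import unittest
    from types import SimpleNamespace

    ACCESS_LEVELS = [1, 2, 3, 4, 5]

    def login_as(level):
        """Stand-in for a real fixture that logs in at the given level."""
        def search(term):
            return SimpleNamespace(completed=True, term=term, level=level)
        return SimpleNamespace(search=search)

    class SearchScreenTests(unittest.TestCase):
        """Repeat the same functional check once per access level."""

        def test_search_at_every_access_level(self):
            for level in ACCESS_LEVELS:
                with self.subTest(level=level):
                    session = login_as(level)
                    result = session.search("SMITH")
                    self.assertTrue(result.completed)

    if __name__ == "__main__":
        unittest.main()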

This emphasizes the differences between developers and testers conducting verification. A developer may be quick to declare success once one test passes. Or sometimes a developer will assume that some tests will pass based on the results of another test. A tester may share this assumption, but they will actually perform the tests to verify the hypothesis. A developer may be quick to skip steps that seem repetitive and boring.

It is a good thing that we have an internal test team to keep development honest. Even their absence would not be a total loss, though. Our customer has their own system acceptance test team. And since we are migrating to all new development tools this year, the users themselves have scheduled a functionality test of our upgraded application. I have a feeling I am going to be researching a lot of trouble tickets this year. Since I like fixing bugs, that is a good thing.