I've moved...

This blog now has a new home - please update your shortcuts and readers to: www.jeffkemponoracle.com. Sorry for any inconvenience!

Tuesday, February 24, 2009

Bias in Testing

I've been trying a number of strategies to improve the performance of a very complex form (Oracle Forms 6i) currently in development. We've already done a fair amount of work making the code as efficient as possible while keeping it reasonably maintainable, so there doesn't seem to be any more low-hanging fruit we can pick off easily.

[Full Article]


  1. Jeff

    As well as wall-clock timing of the form startup, you should be considering resource usage (e.g. number of LIOs) and possible scaling impact. If 100s or 1000s of users are going to open and re-open this form during the day, it may be that solutions which shave a couple of seconds off the opening response time for one user actually reduce throughput for many users.

    Regards Nigel
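
    Nigel's point about LIOs can be checked directly from the dynamic performance views. A minimal sketch, assuming you can identify the SID of the Forms session under test (the bind variable :forms_sid is hypothetical); sample the counters before and after the form opens, and the delta is the startup cost:

    ```sql
    -- Resource usage for one session (SID supplied by the tester).
    -- Sample before and after form startup; compare the deltas.
    SELECT sn.name, ss.value
    FROM   v$sesstat ss
           JOIN v$statname sn ON sn.statistic# = ss.statistic#
    WHERE  ss.sid = :forms_sid
    AND    sn.name IN ('session logical reads', 'CPU used by this session');
    ```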

  2. Jeff - you might want to look at the CPU consumption of the f60webm processes, because no matter what you do, you're always gonna be at the whim of what's happening on pedro at the time of the test. v$session.process will give you the process id on pedro.
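
    A sketch of the lookup Connor describes, assuming the Forms sessions connect with a distinguishable program name (the 'f60%' filter is a guess; adjust to your environment). v$session.process holds the client-side OS process id, i.e. the f60webm process on pedro:

    ```sql
    -- Map each Forms session to its client process id on the app server.
    SELECT s.sid, s.serial#, s.process AS client_pid, s.program
    FROM   v$session s
    WHERE  s.program LIKE 'f60%';
    ```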

  3. Nigel - good point. While I'm not worried about the database in this instance, I am wondering whether the memory usage will be an issue - I'm not sure how much memory the form will use on the client or on the app server. Thanks!

    Connor - thanks for the reminder, I'll use that next time - might be able to get more meaningful comparisons by factoring out CPU load at the time of each test.

  4. Just a thought, too.
    Given the tiny amount of improvement you expect to see (1 second...10% or less), I'm not sure the number of tests you are running is statistically significant.
    I'm primarily a PL/SQL developer and I tend to see more significant changes in performance (though, don't get me wrong, sometimes 10% is plenty) so maybe this is a little outside my experience.
    But I would simply not count on discerning such a small change in the small number of tests you are running, given the variances you are seeing between the results of each set of tests.

    ANYHOO, I've also been in the position you describe where outside forces are mucking with my test results. I'll tell you one thing though, it got my arrival time at work set in stone.
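
    One way to put numbers on moleboy's concern is to log each timed run and compare the spread to the effect you hope to detect. A minimal sketch, assuming a hypothetical STARTUP_TIMINGS(run_label, seconds) table populated by the tester; if the standard deviation is of the same order as the expected one-second saving, the sample is too small to call a result:

    ```sql
    -- Mean and spread per test configuration; a 1-second improvement is
    -- only credible when it is large relative to STDDEV for the sample size.
    SELECT run_label,
           COUNT(*)        AS n,
           AVG(seconds)    AS mean_secs,
           STDDEV(seconds) AS stddev_secs
    FROM   startup_timings
    GROUP  BY run_label;
    ```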

  5. moleboy - yes, I'm sure you're right - what I need to see is a more marked difference (e.g. 2-3 seconds or more, consistently) before I can report success.

    I'm a PL/SQL programmer too, and this one's a challenge because all the normal places I'm used to optimising (queries and DML) are an order of magnitude quicker than everything else, so they don't contribute any significant time to the startup time.

  6. I started out as a forms developer and had many of the same issues you are having (well, at least the inconsistent performance ones).
    I think the only way I was able to test code changes consistently was by setting up my own environment.
    Of course, just because something got better in my environment didn't mean it was going to be good in production, but it was a step in the right direction.

