The software testing conundrum

When Internet software giant Google released a seminal book on how it tests its own software - aptly titled "How Google Tests Software" - it set the cat among the proverbial pigeons of software developers and testers alike.
    (1888PressRelease) July 16, 2016 - Until then, the two disciplines coexisted rather peacefully side by side, each knowing its place in the software development universe, and each quite comfortable with its own roles and responsibilities.

    But, then (Google book co-author) James Whittaker blogged about his book thus: "Testing and development go hand in hand. Code a little and test what you built. Then code some more and test some more. Better yet, plan the tests while you code or even before. Test isn't a separate practice; it's part and parcel of the development process itself."

    Picture thousands of developers and testers around the world exclaiming a collective: "Say what!?"

    Testing 1, 2, 3

    The truth is this testing conundrum has been brewing for a while, especially in South Africa (SA), where both development and testing skills are in extremely short supply to begin with. So, asking developers to suddenly take on a tester's mind-set (and skill set), and vice versa, seems a stretch too far.

    Much of this developer-tester thinking stems from a concerted effort by many large companies - not only software development companies, and not only Google - to automate as much of the testing process as possible. Achieving this means testing while developing (as opposed to developing, then testing, then developing some more), so it follows that developers themselves would be doing most of the testing.
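
    To make the idea concrete, here is a minimal sketch - my illustration, not Google's or the article's - of "code a little, test a little": a small, hypothetical pricing function and the unit tests written alongside it in the same change, using Python's standard unittest module.

        # A hypothetical example of testing while developing: the developer
        # writes the function and its tests together, in the same change.
        import unittest


        def apply_discount(price: float, percent: float) -> float:
            """Return the price after applying a percentage discount."""
            if not 0 <= percent <= 100:
                raise ValueError("percent must be between 0 and 100")
            return round(price * (1 - percent / 100), 2)


        class ApplyDiscountTest(unittest.TestCase):
            def test_typical_discount(self):
                self.assertEqual(apply_discount(200.0, 10), 180.0)

            def test_zero_discount_returns_original_price(self):
                self.assertEqual(apply_discount(99.99, 0), 99.99)

            def test_invalid_percent_is_rejected(self):
                with self.assertRaises(ValueError):
                    apply_discount(100.0, 150)


        if __name__ == "__main__":
            unittest.main()

    Run with "python -m unittest" and the tests execute in milliseconds, which is what makes it practical to test every small increment of code as it is written.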

    The net result of this not-so-insignificant strategy shift was a number of big companies questioning the value they get from manual testing, which is all well and good, until they realise how difficult it is to actually find people with the skills to do both development and testing.

    Again, this is particularly evident in SA (and other developing economies) where the skills base is already low. Local testers tend to be graduates and professionals who either didn't make the grade as developers, or didn't fancy development in the first place. At the same time, SA's developers tend to regard testing as something someone else does once they're done writing their code.

    Scrambled or fried?

    The mistake I see many companies make in trying to unscramble this developer-tester omelette is shifting test automation responsibilities onto manual testers. In other words, they reason that if developers who can test are impossible to find, they will get manual testers to add more value by automating instead.

    This doesn't work. For all the romance of the perfect developer-tester ideal, manual testing remains a highly skilled and focused discipline, which will continue to play an important role in the software development life cycle, even if there is suddenly a glut of developer-tester skills flooding the market.

    The same goes for test automation. An important factor often lost in the whole developer-tester debate is the value of regression testing. Because so many local companies outsource their development to offshore development centres in places like India and China, they're often left to deal with new code on a regular basis, which may or may not negatively impact their existing code.

    This is where the real value of regression testing comes in: to ensure new code doesn't break old code. That said, companies can't expect their manual testers to work back through the legacy functionality, re-checking every feature the newly minted code might have broken.

    Even if they could, with code being updated almost daily in some cases - especially in the mobile application space - and companies increasingly favouring the agile model of frequent, incremental releases, manual regression testing becomes as costly as it is impractical.
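
    Automated regression checks are the usual answer here. As a minimal sketch - a hypothetical illustration, not the article's method - the suite below pins the known-good outputs of an existing calculation so that every new code drop can be re-verified automatically, again using Python's standard unittest module.

        # A hypothetical automated regression check: known inputs and their
        # expected outputs are captured once, then re-verified on every
        # delivery of new code to make sure old behaviour still holds.
        import unittest


        def monthly_interest(balance: float, annual_rate: float) -> float:
            """Existing production logic that new code must not break."""
            return round(balance * annual_rate / 12, 2)


        # Baseline cases recorded from behaviour the business already relies on.
        REGRESSION_CASES = [
            ((12000.00, 0.06), 60.00),
            ((2500.50, 0.105), 21.88),
            ((0.00, 0.06), 0.00),
        ]


        class MonthlyInterestRegressionTest(unittest.TestCase):
            def test_known_good_outputs_are_unchanged(self):
                for args, expected in REGRESSION_CASES:
                    with self.subTest(args=args):
                        self.assertEqual(monthly_interest(*args), expected)


        if __name__ == "__main__":
            unittest.main()

    Because a suite like this runs in seconds, it can be attached to every build, which is what makes frequent code drops - including those from offshore centres - manageable.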

    The answer lies somewhere in the middle. As always, it pays to strike the best possible balance between what is possible today, what already works, and what can be achieved in the shortest time at the highest quality.

    For example, it makes more sense for companies that outsource their development to also outsource their testing - but not all their testing, because that would be prohibitively expensive.

    In environments like banking and insurance, where any break in existing software can lead to substantial downtime and lost revenue, the more thorough and frequent the testing, the lower the risk and cost. So, rather outsource the part of the company's testing that carries the highest risk - regression testing - and stick with the tried, tested and trusted manual testing processes already in place for everything else.

    That way, companies are not compromising on quality while reducing risk and managing costs. In the meantime, they can work towards a more modern and robust testing culture by developing and advancing the skills of their existing staff and new hires, which benefits everyone in the long run.

    Resource: http://www.itweb.co.za/index.php?option=com_content&view=article&id=153630

    ###