Sunday, 17 February 2013

Surprising Facts in "How Google Tests Software"



There are some quite surprising facts in "How Google Tests Software". Several of the surprising, thought-provoking ones relate to the "Fatal Flaws in Google's Process".

What are the things that Google finds fatal in its process that others don't, and why does Google find them fatal?

  • The first fatal flaw, which none of us would find fatal when looking through the lens of traditional testing, is that if we ask a developer what he is doing about quality, his answer is often "testing".
But the book says that quality does not lie in testing; it has to be embedded into the product itself. When developers are free to think that the tests are going to be done by testers and are not a burden on them, what happens is that they start to reduce their own testing. This should be avoided.

  • The second fatal flaw is that developers and testers are separated, walled off into different organizations at Google, which is also a prominent feature of many other organizations.
Mostly, the testers who work at Google identify themselves with their job title and not with the product they are working on. But it is said that a sign of a healthy organization is when employees say they are working on a particular product rather than stating their designation. Development and testing should not be separated. If they are, a role-based association is created, and testers find it really difficult to attach themselves to a particular product.
  • The third fatal flaw is that testers often embrace the test artifacts more than the software or product being created.
A test engineer counting his bug reports and being happy about them is not what testing should be; that is focusing more on the process than on the product. The things being done need to be directly associated with a value.

  • The fourth fatal flaw is that users almost always find problems after products are released, even though lots of tests have been carried out before releasing.

What needs to be understood here is that ensuring quality is everyone's responsibility. It is not confined to the people who are assigned to testing.

The other surprising facts relate to the future of employees at Google, which will probably foreshadow the future of SETs, TEs, and Test Directors/Managers in other IT organizations in the near future.

It is hoped that there will be no SETs in the future. How is that going to be managed?

They can be considered software engineers as a whole. So what ultimately happens is that the 'testing' task is distributed equally among all the developers of the company. Then both development and testing are done on the developers' side, and ultimately everyone can work as one team.



TEs' tasks will change in the future. How will their current role be transformed?


In the future, the TE's job will be quite different from what it is at present. Dogfooders, early adopters, crowd testers, and so on will be involved in testing and will give feedback about the system. What TEs will then have to do is evaluate whether all of that feedback covers the project from a testing perspective. They will be involved in things like calculating risk impacts and adjusting test activities rather than in test creation and execution. So ultimately they will become specialists or managers of testing.


What happens to Test Directors and Managers in the future?

There will be fewer Test Directors and Managers involved with Google in the near future.   


So ultimately, these things would be very surprising without the explanations given for them.

Different types of tests done in the 'Clam AntiVirus' project

Compatibility Testing

Why compatibility tests?

Compatibility testing makes sure that software applications and hardware devices function correctly with all relevant operating systems, platforms, architectures, and computing environments.

1. A compatibility test is a kind of Environment Test.
2. 'Buildbot' is a good tool that can be used for compatibility testing.

For antivirus software, compatibility testing is needed for both the software and its signature database, as both are expected to run reliably on different computer platforms. Therefore, compatibility testing is an unavoidable test for anti-virus software; a sketch of a Buildbot setup for this kind of testing is given below.
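As a minimal sketch (not ClamAV's actual build configuration; the worker names, the password, and the repository URL below are all placeholders), a Buildbot master.cfg could define one build worker per target platform and run the same build-and-test steps on each:

```python
# master.cfg -- a minimal Buildbot sketch for multi-platform builds.
# Worker names, the password, and the repository URL are placeholders.
from buildbot.plugins import schedulers, steps, util, worker

c = BuildmasterConfig = {}

# One build worker per target platform.
platforms = ["linux-x86_64", "freebsd-amd64"]
c['workers'] = [worker.Worker(p, "changeme") for p in platforms]
c['protocols'] = {'pb': {'port': 9989}}

# The same checkout/configure/compile/test steps run on every platform.
factory = util.BuildFactory()
factory.addStep(steps.Git(repourl="https://example.org/clamav.git"))
factory.addStep(steps.Configure())
factory.addStep(steps.Compile())
factory.addStep(steps.ShellCommand(command=["make", "check"]))

c['builders'] = [
    util.BuilderConfig(name="clamav-" + p, workernames=[p], factory=factory)
    for p in platforms
]
c['schedulers'] = [
    schedulers.ForceScheduler(name="force",
                              builderNames=["clamav-" + p for p in platforms])
]
```

A failure on any single builder then points directly at the platform where the incompatibility lies.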

Performance Testing

In simple terms, performance testing is a type of testing intended to determine the responsiveness, throughput, reliability, and scalability (if necessary) of a system under a given workload.

Antivirus software needs to handle hundreds of thousands of virus signatures and hundreds of file formats, and at the same time it must perform all of these tasks fast enough not to bog down the computer system. Therefore performance testing is a must for antivirus software; a simple throughput measurement is sketched below.
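As a minimal sketch of such a measurement (assuming clamscan is on the PATH and that a testdir/ directory of sample files exists; both names are placeholders):

```python
# A rough throughput measurement for clamscan -- a sketch, not ClamAV's
# own test suite. Assumes clamscan is on the PATH and that testdir/
# holds sample files.
import subprocess
import time
from pathlib import Path

testdir = Path("testdir")
files = [p for p in testdir.rglob("*") if p.is_file()]

start = time.perf_counter()
# One recursive scan over the whole corpus. Exit code 1 only means a
# signature matched, so the return code is deliberately not checked.
subprocess.run(["clamscan", "-r", "--no-summary", str(testdir)],
               stdout=subprocess.DEVNULL)
elapsed = time.perf_counter() - start

print(f"Scanned {len(files)} files in {elapsed:.2f} s "
      f"({len(files) / elapsed:.1f} files/s)")
```

Running this against corpora of different file formats gives a simple baseline to compare releases against.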

Profilers are special tools designed for code-execution performance analysis. It is said that even with profilers it is not an easy task to identify the parts of the code that can cause slowdowns. One reason for this is that problem code can be hidden in routines that are not called frequently.
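ClamAV itself is C code, so its developers would reach for a C profiler such as gprof, but the workflow can be illustrated with Python's built-in cProfile: run the program under the profiler and sort the report by cumulative time to surface hot spots, including ones hidden behind rarely called routines.

```python
# A generic profiling illustration using Python's built-in cProfile.
# The expensive work is hidden in a routine that is called only once,
# mimicking the "problem code in an infrequently called routine" case.
import cProfile
import pstats

def rarely_called():
    # Expensive work behind an infrequent call site.
    return sum(i * i for i in range(2_000_000))

def scan(n):
    total = 0
    for i in range(n):
        total += i
        if i == n - 1:      # the slow path runs exactly once
            total += rarely_called()
    return total

cProfile.run("scan(1_000_000)", "scan.prof")
stats = pstats.Stats("scan.prof")
stats.sort_stats("cumulative").print_stats(5)  # top 5 by cumulative time
```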

Stress testing can be done as part of performance testing. It helps to discover potential problems with the stability, robustness, and general efficiency of the software.
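As a minimal sketch of such a stress test (again assuming clamscan on the PATH and a placeholder samples/ directory), many scans can be launched concurrently and any scan that errors out under load reported:

```python
# A minimal stress-test sketch: hammer the scanner with concurrent jobs.
# clamscan and the samples/ directory are placeholder assumptions.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

samples = [p for p in Path("samples").rglob("*") if p.is_file()]

def scan(path):
    # clamscan exit codes: 0 = clean, 1 = infected, 2 = error.
    # Under stress we only care about errors.
    result = subprocess.run(["clamscan", "--no-summary", str(path)],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return path, result.returncode

# 32 concurrent scans is an arbitrary load level chosen for illustration.
with ThreadPoolExecutor(max_workers=32) as pool:
    for path, code in pool.map(scan, samples):
        if code == 2:
            print(f"scan error under load: {path}")
```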

User Acceptance Testing


User Acceptance Testing (UAT) is performed by the users of the system to certify that the system meets the agreed-upon requirements, or to certify that it is what they wanted.

Usability and user acceptance testing are the last steps before shipping the final product.

Advantages of UAT

1. It can provide many useful suggestions for the system being developed.
2. It can help to verify the company's development ideas.
3. The people who contribute act as the project's best black-box testers.

Methods of User Acceptance Testing

1. Opinion polling 

Discussions can be held and opinions gathered from the public about major changes, so that complaints can be reduced.

2. Candidate releasing

As a good practice, release candidates are not issued for mission-critical applications. Otherwise, alpha or beta releases of the product can be put out for use.