Monday, October 21, 2013

Mobility Testing: Challenges in Testing Mobile Applications

In today's world, more and more applications are available on mobile devices.   Many of these applications are business critical, and some allow users to perform financial transactions over the Internet.   In such circumstances, it is imperative that a mobile application be stable and bug free.

However, testing mobile applications is very challenging, and one has to work within several constraints.   There is a large variety of mobile devices on which the application has to be tested, operating system versions vary, and the application also has to be tested with the different network carriers it may run on.   Covering this matrix of devices, OS versions, and carriers is a big challenge.   It can be addressed to some extent by using device emulators.   Emulators save the cost of procuring physical devices, but because their computing resources and network environment differ from real handsets, we cannot rely on emulator testing alone.   It is recommended to perform a combination of real device testing and emulator testing.
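
Whichever mix of emulators and real devices is chosen, it helps if the same test can run unchanged against every entry in the device matrix.   As a hedged illustration, the sketch below parameterizes a smoke test over a list of device capabilities using Appium, one freeware option for driving mobile apps (the device names, app path, and element id are assumptions for illustration only):

    # A minimal sketch, assuming an Appium server at http://localhost:4723 and
    # the Python client (pip install Appium-Python-Client). Device names,
    # platform versions, the .apk path, and the element id are hypothetical
    # placeholders.
    from appium import webdriver
    from appium.options.android import UiAutomator2Options
    from appium.webdriver.common.appiumby import AppiumBy

    # One entry per device/OS combination to cover; extend as the matrix grows.
    DEVICE_MATRIX = [
        {"deviceName": "Pixel_4_Emulator", "platformVersion": "11"},
        {"deviceName": "Galaxy_S10_Real_Device", "platformVersion": "12"},
    ]

    def run_smoke_test(device):
        options = UiAutomator2Options()
        options.device_name = device["deviceName"]
        options.platform_version = device["platformVersion"]
        options.app = "/path/to/app-under-test.apk"  # placeholder path

        driver = webdriver.Remote("http://localhost:4723", options=options)
        try:
            # "login_button" is a hypothetical accessibility id.
            driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button").click()
        finally:
            driver.quit()

    for device in DEVICE_MATRIX:
        run_smoke_test(device)

Running the same script against both an emulator entry and a real device entry gives the combination of testing recommended above.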


Another option is to use a mobile testing platform service such as DeviceAnywhere.   This platform provides on-demand access to real devices for testing mobile applications on a pay-per-use model, which helps in managing the infrastructure cost of mobile testing.   Moreover, the devices can be accessed from anywhere in the world.   You can get more details from http://www.keynotedeviceanywhere.com/

Monday, April 8, 2013

Agile Model: The advent of the friendly tester

Testers and developers were known to lock horns in the conventional development model.   The developer would proudly defend his code, while the tester would proudly showcase the bug.   This behavior had little to do with the personalities of the individuals; they were expected to behave in this manner.  It led to a healthy competition, in which the developer would try to write bug-free code and challenge the tester to find scenarios that could break it.   This may appear to be a conflict between tester and developer, but it led to software of excellent quality: the coder had tried his best to make it bug free, and the tester had tried his best to break the code in every possible scenario.

Now, as more and more organizations follow Agile methodologies, the relationship between testers and developers is changing.   Taking the example of Scrum-based Agile, the testers and developers are part of one cohesive Scrum team.   The developer writes the code for a particular user story, and the tester starts testing it even before it is formally delivered to QA.   In many instances, the tester tests the code in the developer's environment (sometimes on the developer's machine) and verbally tells the developer about the bugs encountered during this testing.   The developer fixes them and again requests the tester to verify.   It is difficult to imagine this type of informal and friendly interaction between tester and developer in the conventional model.

Before the demo, the tester deploys the build in the QA environment and does a final round of testing.   All this is done in a friendly manner, as the small Scrum team (comprising developers and testers) has a common objective of rolling out all the stories planned for the sprint.   In the hardening sprint, the developers even step in to test the application.

This friendly and informal interaction works well in the Agile model, because in Agile the team and its commitment come first.  The role (tester or developer) is not important.   Long live the friendship between tester and developer.  Thanks to the Agile model!

Tuesday, November 1, 2011

What makes a Test Manager highly regarded and respected?

The test manager is normally seen as an individual who is constantly complaining about problems and bugs. It is the test manager who informs management about the pathetic quality of the application or a slippage in the release date. This makes him the bad guy; the perception exists because the test manager is the carrier of bad news.

It is important to ensure that the test manager gets over this negative image. This does not mean that he stops reporting bad news or reports incorrect information. What is important is how the test manager reports the findings of the test team. The test manager should stick to the following basic rules:

  • Stick to the facts
  • Give recommendations for resolving the challenge
  • Avoid being a custodian of quality
  • Make everyone feel equally responsible for quality
  • Do not pass judgement on people or product

This will help him earn respect from management and the client without compromising on the test findings. Apart from that, he will be viewed as a person with a positive attitude who is not only finding new issues but also proactively helping to resolve them.

- Aashu Chandra


Wednesday, December 8, 2010

Steps Required For Performance Testing

Many people think that knowledge of scripting in a performance testing tool is enough to do performance testing. Performance testing involves much more. Given below are the steps for a performance testing project:

  • Performance test planning – This is the most important stage. In this stage, we identify the performance testing environment and the performance testing tool, and we compare the performance testing environment with the production environment. Normally, we require admin access to the servers, as we may want to bounce them after every performance run.
  • Identify scenarios and the corresponding load – Based on discussions with the business users, identify the core transactions that are performed most frequently.
  • Prepare performance test data – Create performance test data based on the identified test scenarios.
  • Create scripts for the identified scenarios – Create performance testing scripts using the performance testing tool (a sketch follows this list).
  • Execute the scripts in the development environment – Validate the performance testing scripts by running them in the development environment.
  • Execute the scripts in the performance testing environment – Run the scripts in the performance testing environment with the required number of virtual users (VUsers).
  • Analyze results and identify the bottleneck transactions – Identify the business transactions that are performing slower than expected, then analyze the reasons for the slow response.
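
As one hedged illustration of the scripting and execution steps, here is a minimal load test sketch using Locust, a freeware Python load testing tool (the original post does not name a specific tool; the endpoints, payload, task weights, and command line below are assumptions, not values from a real project):

    # A minimal sketch using Locust (pip install locust). The endpoints,
    # payload, and task weights are illustrative placeholders for the core
    # transactions identified with the business users.
    from locust import HttpUser, task, between

    class CoreTransactionUser(HttpUser):
        # Each simulated VUser waits 1-5 seconds between transactions.
        wait_time = between(1, 5)

        @task(3)  # weight 3: browsing is assumed three times as frequent
        def browse_catalog(self):
            self.client.get("/catalog")

        @task(1)
        def checkout(self):
            self.client.post("/checkout", json={"item_id": 42, "qty": 1})

    # Example run against the performance environment with 200 VUsers:
    #   locust -f perf_test.py --host https://perf-env.example.com \
    #          --users 200 --spawn-rate 10 --headless --run-time 30m

The same script can first be validated in the development environment with a handful of VUsers, then scaled up in the performance testing environment.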

Performance testing is a specialized testing domain, where the testers should not only have knowledge of the tool but also good analytical skills to resolve performance bottlenecks.

- Aashu Chandra

Friday, November 21, 2008

Managing Change in Requirements During the Testing Stage

In practice, functional requirements often change while test cases are being executed. It is recommended that we do an impact analysis of the effect on testing activities. This impact analysis is different from the impact analysis performed by the development team.

After performing the impact analysis, the test manager should inform all stakeholders about the impact of the change in requirements, in terms of both schedule and effort. Even if the changes are minor and the testing team is ready to absorb them (i.e. no change in schedule), a log of these impact analysis forms should still be maintained. In practice, many small changes add up to a substantial impact.

In a dynamic world, business requirements will change. The testing manager/lead has to ensure that all stakeholders are informed of the impact on testing effort due to changes in requirements. This helps in reducing the pressure on the testing team members.

Wednesday, October 22, 2008

What to do if a bug has leaked into production?

The application your team tested has been released into production. A day after the release, you get a midnight call: a priority 1 issue has been reported in the production environment. What do you do in such a scenario?

First of all, do not panic; remain calm and composed. There could be several reasons why the issue is occurring:
  • The application is not deployed or configured correctly
  • Incorrect data in the production database is causing it
  • Due to incorrect understanding by end users, a feature is being reported as a bug
  • If none of the above, it could be a genuine bug in the application.

Broadly, your response to such a situation falls into two categories: immediate corrective action to minimize the impact of the issue, and root cause analysis to identify why it happened and how to ensure that a similar issue does not happen again.

a) Immediate corrective action:
Try to collect all the required information regarding the issue found in production:

- In which particular scenario/condition is the problem happening?
- Is the problem happening consistently in production, or is it intermittent?
- What percentage of end users is impacted by the issue?
- Is the problem corrupting or losing data that is not recoverable? If yes, what can be done to minimize the data loss?
- Can the issue be replicated in the QA environment?

b) Root cause analysis and fixing the process:
Once the issue is identified, take the following steps so that a similar issue does not happen again:
- The QA team tries to simulate the problematic scenario in the QA environment, working closely with the business, development, and production support teams to recreate it.
- If resolving the issue requires a patch release, start planning to test the patch release.
- Perform causal analysis to identify the root cause of the issue.
- Identify the corrective actions required so that a similar issue does not recur.
- Implement the corrective actions.
- Be proactive and keep all stakeholders informed at each stage.

The above steps will help in making sure that you do not lose the confidence of the stakeholders.

Wednesday, June 4, 2008

Value addition using Freeware automated tools

Many testing consultants are proud of using and recommending expensive licensed automated testing tools from big software vendors, and IT companies may be impressed by them. However, with this approach a major chunk of a company's testing budget is exhausted in procuring the expensive tool, and very little remains for the actual automation of test cases.

As testing consultants, we need to ensure that we recommend a testing strategy in which the total cost of the automation solution (including the cost of the testing tool) is minimal. This can be achieved by evaluating and recommending freeware testing tools. Several freeware tools are available for functional, performance, and security testing.

Many testers are reluctant to use freeware tools, feeling that they carry risk because of their limited features. A good way to mitigate this risk is to create a Proof of Concept (POC) by automating a few test cases with the freeware tool, and to demonstrate the POC to the stakeholders to get the required approval.
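
As one hedged example of such a POC, the sketch below automates a single login smoke test with Selenium WebDriver, one well-known freeware option (the URL, element ids, and expected page title are illustrative assumptions, not values from any real application):

    # A minimal POC sketch using Selenium WebDriver (pip install selenium).
    # The URL, element ids, and expected title are hypothetical placeholders;
    # replace them with values from your Application Under Test.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # assumes Chrome and its driver are available
    try:
        driver.get("https://app-under-test.example.com/login")

        # Fill in and submit the login form (ids are assumptions).
        driver.find_element(By.ID, "username").send_keys("poc_user")
        driver.find_element(By.ID, "password").send_keys("poc_password")
        driver.find_element(By.ID, "submit").click()

        # A simple check that login landed on the expected page.
        assert "Dashboard" in driver.title, "Login smoke test failed"
        print("POC smoke test passed")
    finally:
        driver.quit()

A handful of such scripts covering the application's core flows is usually enough to demonstrate to stakeholders that the freeware tool can do the job.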

Another observation is that freeware tools often have almost the same feature set as licensed tools, but support a more limited range of platforms or protocols. So, depending on the requirements of the Application Under Test (AUT), we should evaluate and select an appropriate freeware tool.

By using this approach, you can deliver direct savings in your client's IT budget.