TESTING WEB APPLICATIONS
4.1 INTRODUCTION:
Web applications create new challenges for quality assurance and testing. Web
applications consist of various software components, possibly supplied by different
manufacturers. The quality of a Web application is essentially determined by the quality of
each software component involved and the quality of their interrelations. Testing is one of the
most important instruments in the development of Web applications to achieve high-quality
products that meet users’ expectations. Methodical and systematic testing of Web applications
is an important measure, which should be given special importance within quality assurance.
It is a measure aimed at finding errors and limitations in the software under test, while
observing economic, temporal, and technical constraints. Many methods and techniques to
test software systems are currently available. However, they cannot be directly applied to Web
applications, which means that they have to be reconsidered and perhaps adapted and
enhanced.
Testing Web applications goes beyond the testing of traditional software systems.
Though similar requirements apply to the technical correctness of an application, the use of a
Web application by heterogeneous user groups on a large number of platforms leads to special
testing requirements. It is often hard to predict the future number of users for a Web
application. Response times are among the decisive success factors on the Internet, and have
to be tested early, despite the fact that the production-grade hardware is generally available
only much later. Other important factors for the success of a Web application, e.g., usability,
availability, browser compatibility, security, actuality, and efficiency, also have to be taken
into account in early tests.


4.2 FUNDAMENTALS:

4.2.1 Terminology

Testing is an activity conducted to evaluate the quality of a product and to improve it by identifying defects and problems. If we run a program with the intent to find errors, then we talk about testing. Figure 4-1 shows that testing is part of analytical quality assurance measures. By discovering existing errors, the quality state of the program under test is determined, creating a basis for quality improvement, most simply by removing the errors found.

Figure 4-1 Structuring software quality assurance.

We say that an error is present if the actual result from a test run does not agree with the expected result. The expected result is specified. This means that each deviation from the requirements definition is an error; more generally speaking, an error is “the difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition”. This definition implies that the requirements definition used as a basis for testing is complete and available before implementation and test. A common phenomenon in the development of Web applications is that the requirements are often incomplete, fuzzy, and subject to frequent changes. Typically, there is an initial vision of the basic functionality. This vision is implemented for the initial release. As a result, the initial development lifecycle is followed by smaller cycles of functionality additions. Agile approaches focus on this iterative and evolutionary nature of the development lifecycle without an extensive written requirements definition. Consequently, the goals, concerns, and expectations of the stakeholders have to form the basis for testing. This means that, for example, each deviation from the value typically expected by users is also considered an error.

Now, different stakeholders generally have different expectations, and some of these expectations may even be competing and fuzzy. For this reason, stakeholder expectations won’t be a useful guideline to decide whether a result is erroneous unless agreement on a set of expectations has been reached and made available in testable form. To support the tester in gaining insight into the users’ world and to better understand users’ expectations, the tester


information about problems and the status of the application is acquired. Unsuccessful tests, i.e., tests that do not find errors, are “a waste of time”. This is particularly true in Web application development, where testing is necessarily limited to a minimum due to restricted resources and the extreme time pressure under which Web applications are developed.

This situation also requires that serious errors should be discovered as early as possible to avoid unnecessary investments, as the cost of finding and removing errors increases dramatically with each development phase. Errors introduced in early development phases are hard to localize in later phases, and their removal normally causes extensive changes and the need to deal with consequential errors. Therefore, we have to start testing as early as possible at the beginning of a project. In addition, short time-to-market cycles lead to situations where “time has to be made up for” in the test phase to compensate for delays incurred in the course of the project. Testing effectiveness and the efficiency of tests are extremely important. In summary, we can say that testing in general, and for Web projects in particular, has to detect as many errors as possible, ideally as many serious errors as possible, at the lowest cost possible, within as short a period of time as possible, and as early as possible.

4.2.4 Test Levels

According to the distinct development phases in which we can produce testable results, we identify test levels to facilitate testing of these results.

Unit tests: test the smallest testable units (classes, Web pages, etc.), independently of one another. Unit testing is done by the developer during implementation.

Integration tests: evaluate the interaction between distinct and separately tested units once they have been integrated. Integration tests are performed by a tester, a developer, or both jointly.

System tests: test the complete, integrated system. System tests are typically performed by a specialized test team.

Acceptance tests: evaluate the system in cooperation with or under the auspices of the client in an environment that comes closest to the production environment. Acceptance tests use real conditions and real data.

Beta tests: let friendly users work with early versions of a product with the goal to provide early feedback. Beta tests are informal tests (without test plans and test cases) which rely on the number and creativity of potential users.
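The unit-test level can be illustrated with Python's built-in unittest framework. The `ShoppingCart` class below is a hypothetical unit from a Web shop, invented for this sketch; it is not taken from the chapter.

```python
import unittest

class ShoppingCart:
    """Hypothetical unit under test: a minimal cart for a Web shop."""
    def __init__(self):
        self._items = {}  # maps product id -> quantity

    def add(self, product_id, quantity=1):
        if quantity < 1:
            raise ValueError("quantity must be positive")
        self._items[product_id] = self._items.get(product_id, 0) + quantity

    def total_quantity(self):
        return sum(self._items.values())

class ShoppingCartTest(unittest.TestCase):
    """Unit tests, written and run by the developer during implementation."""
    def test_adding_accumulates_quantities(self):
        cart = ShoppingCart()
        cart.add("sku-1", 2)
        cart.add("sku-1", 3)
        self.assertEqual(cart.total_quantity(), 5)

    def test_rejects_non_positive_quantity(self):
        cart = ShoppingCart()
        with self.assertRaises(ValueError):
            cart.add("sku-1", 0)

# run with: python -m unittest <this file>
```

Each unit is tested in isolation like this before integration tests check its interaction with other, separately tested units.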

As development progresses, one proceeds from a verification against the technical specification as in unit tests, integration tests, and system tests to a validation against user expectations as in acceptance tests and beta tests. An inherent risk when performing the test levels sequentially according to the project’s phases is that errors due to misunderstood user expectations may be found only at a late stage, which makes their removal very costly. To minimize this risk, testing has to be an integrated part of the product construction, which should encompass the whole development process. Hence, quality-assurance measures like reviews or prototyping are used even before running unit tests. A strongly iterative and evolutionary development process reduces this risk since smaller system parts are frequently tested on all test levels (including those with validation against user expectations), so that errors can be found before they can have an impact on other parts of the system. This means that the sequence of test levels described above does not always dictate the temporal sequence for Web project testing; the levels may be performed several times.

4.2.5 Role of the Tester

The intention to find as many errors as possible requires testers to have a “destructive” attitude towards testing. In contrast, such an attitude is normally difficult for a developer to have towards his or her own piece of software, the more so as he or she normally doesn’t have sufficient distance to his or her own work after the “constructive” development and problem-solving activity. The same perspective often makes developers prone to the same faults and misunderstandings during testing that have led to errors during the implementation in the first place. For this reason, Myers suggests that developers shouldn’t test their own products. In Web projects, we have an increased focus on unit tests, which are naturally written by the developers. While this is a violation of Myers’ suggestion, additional tests are typically performed by someone different from the original developer (e.g. by functional testers recruited from the client’s business departments).

Since quality is always a team issue, a strict separation of testing and development is not advisable and carries an inherent risk of hindering the close cooperation between developers and testers. After all, the objective pursued in detecting errors is that they will be removed by the developers. To this end, a clearly regulated, positive communication basis and mutual understanding are prerequisites. This means for the tester: “The best tester isn’t the one who finds the most bugs or who embarrasses the most programmers. The best tester is the one who gets the most bugs fixed”. Since Web project teams are normally multidisciplinary, and the team cooperation is usually of short duration, it can be difficult for team members to establish the necessary trust for close collaboration between developers and testers.

4.3 Test Specifics in Web Engineering

The basics explained in the previous section apply both to conventional software testing and Web application testing. What makes Web application testing different from conventional software testing? The following points outline the most important specifics and challenges in Web application testing based on the application’s characteristics.

 Errors in the “content” can often be found only by costly manual or organizational measures, e.g., by proofreading. Simple forms of automated checks (e.g., by a spell checker) are a valuable aid but are restricted to a limited range of potential defects. Meta-information about the content’s structuring and semantics or a reference system that supplies comparative values are often a prerequisite to be able to perform in-depth tests. If these prerequisites are not available, other approaches have to be found. For example, if frequently changing data about the snow situation in a tourist information


testing – too much testing can be just as counterproductive as too little. Testers are often tempted to test everything completely, especially at the beginning.

 Web applications consist of a number of different software components (e.g., Web servers, databases, middleware) and integrated systems (e.g., ERP systems, content management systems), which are frequently supplied by different vendors, and implemented with different technologies. These components form the technical infrastructure of the Web application. The quality of a Web application is essentially determined by the quality of all the single software components and the quality of the interfaces between them. This means that, in addition to the components developed in a project, we will have to test software components provided by third parties, and the integration and configuration of these components. Many errors in Web applications result from the “immaturity” of single software components, “incompatibility” between software components, or faulty configuration of correct software components.

 The “immaturity” of many test methods and tools represents additional challenges for the tester. If a Web application is implemented with a new technology, then there are often no suitable test methods and tools yet. And if initial test tools do become available, most of them are immature, faulty, and difficult to use.

 The “dominance of change” makes Web application testing more complex than conventional software testing. User requirements and expectations, platforms, operating systems, Internet technologies and configurations, business models and customer expectations, development and testing budgets are subject to frequent changes throughout the lifecycle of a Web application. Adapting to new or changed requirements is difficult because existing functionality must be retested whenever a change is made. This means that one single piece of functionality has to be tested many times, speaking heavily in favor of automated and repeatable tests. This places particular emphasis on regression tests, which verify that everything that has worked still works after a change. Upgrades and migrations of Web applications caused by ever-changing platforms, operating systems, or hardware should first run and prove successful in the test environment to ensure that there will be no unexpected problems in the production environment. A second attempt and an orderly fallback should be prepared and included in the migration plan – and all of this in the small time window that remains for system maintenance, in addition to 24x7 operation (“availability”).

4.4 Test Approaches

Agile approaches have increasingly been used in Web projects. While agile approaches focus on collaboration, conventional approaches focus on planning and project management. Depending on the characteristics of a Web project, it may be necessary to perform test activities from agile and conventional approaches during the course of the project.


4.4.1 Conventional Approaches

From the perspective of a conventional approach, testing activities in a project include planning, preparing, performing, and reporting:

Planning: The planning step defines the quality goals, the general testing strategy, the test plans for all test levels, the metrics and measuring methods, and the test environment.

Preparing: This step involves selecting the testing techniques and tools and specifying the test cases (including the test data).

Performing: This step prepares the test infrastructure, runs the test cases, and then documents and evaluates the results.

Reporting: This final step summarizes the test results and produces the test reports.

On the one hand, conventional approaches define work results (e.g., quality plan, test strategy, test plans, test cases, test measurements, test environment, test reports) and roles (e.g., test manager, test consultant, test specialist, tool specialist) as well as detailed steps to create the work results (e.g., analyze available test data or prepare/supply test data). Agile approaches, on the other hand, define the quality goal and then rely on the team to self-organize to create software that meets (or exceeds) the quality goal.

Figure 4-2 Critical path of activities.

Due to the short time-to-market cycles under which Web applications are developed, it is typical to select only the most important work results, to pool roles, and to remove unnecessary work steps. It is also often the case that “time has to be made up for” in the test phase to compensate for delays incurred in the course of the project. Therefore, test activities should be started as early as possible to shorten the critical path – the sequence of activities determining the project duration to delivery. For example, planning and design activities can be completed before development begins, and work results can be verified statically as soon as they become available. Figure 4-2 shows that this helps shorten the time to delivery, which fits nicely with the short development cycles of Web applications.


systematic, comprehensive, and risk-aware testing approach. In the form introduced here, the scheme can be used to visualize the aspects involved in testing, structure all tests, and serve as a communication vehicle for the team.

4.5.1 Three Test Dimensions

Every test has a defined goal, e.g., to check the correctness of an algorithm, to reveal security violations in a transaction, or to find style mismatches in a graphical representation. The goals are described by the required quality characteristics on the one hand – e.g., correctness, security, compatibility – and by the test objects on the other hand – e.g., algorithms, transactions, representations. Thus, quality characteristics and test objects are mutually orthogonal and can be seen as two separate dimensions: the first dimension focuses on the quality characteristics relevant for the system under test, while the second, orthogonal dimension focuses on the features of the system under test. This viewpoint implies that the test objects are executed and analyzed during test runs, while the quality characteristics determine the objectives of the tests.

Both dimensions are needed to specify a test and can be used to organize a set of related tests. For a systematic testing approach, it is useful to distinguish between these two dimensions so it will be possible to identify all the test objects affecting a certain quality characteristic or, vice versa, all the quality characteristics affecting a certain test object. This is important since not all quality characteristics are equally – or at all – relevant for all test objects. For example, a user of an online shop should be free to look around and browse through the product offer, without being bothered by security precautions such as authentication or encryption, unless the user is going to purchase an item. Hence, while the quality characteristic “security” plays a subordinate role for the browsing functionality of the shop, it is of major importance for payment transactions.

Distinguishing between these two dimensions allows us to include the relevance of different quality characteristics for each single test object. In addition, a third dimension specifies when or in what phase of the software lifecycle a combination of test object and quality characteristic should be tested. This dimension is necessary to describe the timeframe within which the testing activities take place: from early phases such as requirements definition over design, implementation, and installation to operation and maintenance. As a result, testing can profit from valuable synergies when taking the activities over the whole lifecycle into account, e.g., by designing system tests that can be reused for regression testing or system monitoring.
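The three dimensions can be pictured as a sparse cube of planned tests. The sketch below models it as a dictionary keyed by (quality characteristic, test object, phase) triples; the entries are illustrative examples, not taken from the chapter's figure.

```python
# A sparse "test cube": each planned test sits at the intersection of a
# quality characteristic, a test object, and a lifecycle phase.
test_cube = {}

def plan_test(characteristic, test_object, phase, description):
    """Register a test at one node of the cube."""
    test_cube[(characteristic, test_object, phase)] = description

def characteristics_for_object(test_object):
    """All quality characteristics planned for one test object."""
    return {c for (c, o, p) in test_cube if o == test_object}

def objects_for_characteristic(characteristic):
    """All test objects affected by one quality characteristic."""
    return {o for (c, o, p) in test_cube if c == characteristic}

# Illustrative entries: security matters for payment, not for browsing.
plan_test("security", "payment transaction", "implementation",
          "verify encryption of credit-card data")
plan_test("performance", "product search", "operation",
          "measure response time under peak load")
plan_test("usability", "product search", "design",
          "heuristic review of the search form")
```

The two query functions correspond to the two views described above: all quality characteristics affecting a test object, and all test objects affected by a quality characteristic.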


Figure 4-3 Test scheme for Web applications

Furthermore, the time dimension helps to establish a general view of testing over all phases and allows a better understanding of what affects quality characteristics and test objects over time. It makes it easier to justify investments in testing in early phases since the possible payoff in later phases becomes clear. If we join these three dimensions – quality characteristics, test objects, and phases – the result can be visualized as a three-dimensional cube as shown in Figure 4-3. The cube contains all tests as nodes at the intersection of a specific quality characteristic, test object, and phase. The figure shows a possible structuring for the three dimensions, as suggested for Web application testing.

4.5.2 Applying the Scheme to Web Applications

This section describes how the dimensions of the generic scheme introduced in the previous section can be structured to accommodate the special characteristics of Web applications and Web projects. In practice the structuring depends on the requirements of the system under test. Therefore, it is necessary to customize and detail the generic scheme according to the specific situation of the project.

Quality Characteristics

The quality characteristics dimension is determined by the quality characteristics that are relevant for the Web application under test. Thus, the quality characteristics relevant for testing originate in the objectives and expectations of the stakeholders and should have been described as nonfunctional requirements in the requirements definition. Additional information from other quality assurance measures and testing experience (e.g., typical risk


compatibility tests to assure that users can access a Web application with any Web browser as soon as a new version becomes available, although no changes were made to the Web application itself.

4.6 Test Methods and Techniques

When testing Web applications, we can basically apply all methods and techniques commonly used in traditional software testing. To take the specifics of Web applications into account, some of these test methods and techniques will have to be thought over, or adapted and expanded. In addition, we will most likely need new test methods and techniques to cover all those characteristics that have no correspondence in traditional software testing.

The summary shown in Table 4-1 corresponds to the test scheme and is structured by the test objects dimension and the quality characteristics dimension. The table gives an exemplary overview of the methods, techniques, and tool classes for Web application testing described in the literature. It shows typical representatives of test methods and techniques as a basis for arranging a corporate or project-specific method and tool box.


Table 4-1 Methods, techniques, and tool classes for Web Application testing

The following subsections briefly describe typical methods and techniques for Web application testing.

4.6.1 Link Testing

Links within a hypertext navigation structure that point to a non-existing node (pages, images, etc.) or anchor are called broken links and represent well-known and frequently


4.6.3 Usability Testing

Usability testing evaluates the ease-of-use issues of different Web designs, overall layout, and navigation of a Web application by a set of representative users. The focus is on the appearance and usability. A formal usability test is usually conducted in a laboratory setting, using workrooms fitted with one-way glass, video cameras, and a recording station. Both quantitative and qualitative data are gathered. The second type of usability evaluation is a heuristic review. A heuristic review involves one or more human-interface specialists applying a set of guidelines to gauge the solution’s usability, pinpoint areas for remediation, and provide recommendations for design change. This systematic evaluation employs usability principles that should be followed by all user interface designers, such as error prevention, provision of feedback, and consistency. In the context of usability testing, the issue of making the Web accessible for users with disabilities has to be addressed. Accessibility means that people with disabilities (e.g., visual, auditory, or cognitive) can perceive, understand, navigate, and interact with the Web. The Web Accessibility Initiative (WAI) of the W3C has developed approaches for evaluating Web sites for accessibility, which are also relevant for testing Web applications. In addition to evaluation guidelines, the W3C provides a validation service to be used in combination with manual and user testing of accessibility features.

4.6.4 Load, Stress, and Continuous Testing

Load tests, stress tests, and continuous testing are based on similar procedures. Several requests are sent to the Web application under test concurrently by simulated users to measure response times and throughput. The requests used in these tests are generated by one or several “load generators”. A control application distributes the test scripts across the load generators; it also synchronizes the test run, and collects the test results. However, load tests, stress tests, and continuous testing have different test objectives:

 A load test verifies whether or not the system meets the required response times and the required throughput. To this end, we first determine load profiles (what access types, how many visits per day, at what peak times, how many visits per session, how many transactions per session, etc.) and the transaction mix (which functions shall be executed with which percentage). Next, we determine the target values for response times and throughput (in normal operation and at peak times, for simple or complex accesses, with minimum, maximum, and average values). Subsequently, we run the tests, generating the workload with the transaction mix defined in the load profile, and measure the response times and the throughput. The results are evaluated, and potential bottlenecks are identified.

 A stress test verifies whether or not the system reacts in a controlled way in “stress situations”. Stress situations are simulated by applying extreme conditions, such as unrealistic overload, or heavily fluctuating load. The test is aimed at finding out whether or not the system reaches the required response times and the required throughput under stress at any given time, and whether it responds appropriately by generating an error message (e.g., by rejecting all further requests as soon as a predefined “flooding threshold” is reached). The application should not crash under stress due to additional requests. Once a stress situation is over, the system should recover as fast as possible and resume normal behavior.

 Continuous testing means that the system is exercised over a lengthy period of time to discover “insidious” errors. Problems in resource management such as unreleased database connections or “memory leaks” are a typical example. They occur when an operation allocates resources (e.g., main memory, file handles, or database connections) but doesn’t release them when it ends. If we call the faulty operation in a “normal” test a few times, we won’t detect the error. Only continuous testing can ensure that the operation is executed repeatedly over long periods of time to eventually reproduce the resource bottleneck caused by this error, e.g., running out of memory.
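The load-generator procedure described above can be sketched with concurrently simulated users. In this sketch the request function is a stand-in that sleeps briefly instead of calling a real server; a real load test would issue HTTP requests against the system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request():
    """Stand-in for an HTTP request to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for real network/server latency
    return time.perf_counter() - start

def load_test(num_users, requests_per_user, request=simulated_request):
    """Run concurrent simulated users and collect response times."""
    def user_session():
        return [request() for _ in range(requests_per_user)]

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        sessions = list(pool.map(lambda _: user_session(), range(num_users)))
    elapsed = time.perf_counter() - start

    times = [t for session in sessions for t in session]
    return {
        "avg_response": sum(times) / len(times),
        "max_response": max(times),
        "throughput": len(times) / elapsed,  # requests per second
    }

results = load_test(num_users=5, requests_per_user=4)
```

The measured values are then compared against the target values for response times and throughput defined from the load profile, e.g. requiring the average response to stay below 100 ms at peak load.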

4.6.5 Testing Security

Probably the most critical criterion for a Web application is that of security. The need to regulate access to information, to verify user identities, and to encrypt confidential information is of paramount importance. Security testing is a wide field, and will be discussed in this section only briefly; it does not represent a testing technique in the literal sense. It concerns issues in relation to the quality characteristic “security”:

 Confidentiality: Who may access which data? Who may modify and delete data?

 Authorization: How and where are access rights managed? Are data encrypted at all? How are data encrypted?

 Authentication: How do users or servers authenticate themselves?

 Accountability: How are accesses logged?

 Integrity: How is information protected from being changed during transmission?

When testing in the field of security, it is important to proceed according to a systematic test scheme. All functions have to be tested with regard to the security quality characteristic, i.e., we have to test each function as to whether or not it meets each of the requirements listed above. Testing security mechanisms (e.g., encryption) for correctness only is not sufficient. Despite a correctly implemented encryption algorithm, a search function, for example, could display confidential data on the result page. This is an error that test runs should detect, too. Typically, security testing must not only find defects due to intended but incomplete or incorrect functionality, but also defects due to additional yet unwanted behavior that may have unforeseen side-effects or even contain malicious code. Unwanted, additional behavior is often exposed by passing input data unexpectedly to an application, e.g., by circumventing client-side input validation.
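A security test of the kind just described can be sketched as follows. The `search_handler` function and its data are hypothetical; the test feeds it input that client-side validation would never produce and checks that no confidential data reaches the result page.

```python
# Hypothetical server-side handler for a search form. The browser form
# restricts input client-side, but a security test posts raw input directly.
CONFIDENTIAL_FIELDS = {"password", "credit_card"}

def search_handler(query, records):
    """Return matching records with confidential fields stripped."""
    results = []
    for record in records:
        if query.lower() in record.get("name", "").lower():
            # Filter server-side: never rely on the client to hide data.
            results.append({k: v for k, v in record.items()
                            if k not in CONFIDENTIAL_FIELDS})
    return results

records = [{"name": "Alice", "password": "s3cret", "credit_card": "4111..."}]

# Security test: bypass client-side validation by calling the handler with
# unexpected input, and verify no confidential field appears in any result.
for malicious_query in ["Alice", "ALICE", "' OR '1'='1"]:
    for result in search_handler(malicious_query, records):
        assert "password" not in result
        assert "credit_card" not in result
```

If the handler returned records unfiltered, the test would fail even though any encryption in the transport layer worked correctly, which is exactly the class of error described above.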

4.6.6 Test-driven Development

Test-driven development emerged from the test-first approach used in Extreme Programming, but it does not necessarily dictate an agile project approach. This means that


prerequisite for test-driven development, as developers run the tests for every bit of code they implement to successively grow the application.
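The test-first cycle can be sketched with a hypothetical price-formatting function for a Web shop: the test is written first, run to see it fail, and only then is just enough code written to make it pass.

```python
import unittest

# Step 1 (test first): this test is written before format_price exists.
# Running it at that point fails, which is the expected starting state.
class FormatPriceTest(unittest.TestCase):
    def test_formats_cents_as_euros(self):
        self.assertEqual(format_price(1999), "19.99 EUR")

    def test_pads_small_amounts(self):
        self.assertEqual(format_price(5), "0.05 EUR")

# Step 2: the minimal implementation grown to satisfy the tests above.
def format_price(cents):
    return f"{cents // 100}.{cents % 100:02d} EUR"

# Step 3: rerun the tests (python -m unittest <this file>) until they pass,
# then refactor with the tests as a safety net.
```

Each such cycle leaves behind an automated test that is rerun for every later change, which is what makes the growing application safe to modify.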

 Also, the ability to quickly rerun an automated set of tests can help to shorten test execution time and to reduce the time-to-market when the bottleneck is repeating existing tests.

However, despite the potential efficiency gain that automated tests may provide, expectations about test automation are often unrealistically high. Test automation does not improve the effectiveness of testing (i.e. the total number of defects detected). Automating a test does not make it any more effective than running the same test manually. Usually, manual tests find even more defects than automated tests, since a defect is most likely to be found the first time a test is run. If a test has been passed once, it is unlikely that a new defect will be detected when the same test is run again, unless the tested code is affected by a change. Furthermore, if testing is poorly organized, with ineffective tests that have a low capability of finding defects, automating these tests does not provide any benefits. Rather, the automation of a chaotic testing process only results in more and faster chaos.

Test automation is a significant investment. Although tools provide convenient features to automate testing, there is still a considerable amount of effort involved in planning, preparing, performing, and reporting on automated tests. And there is still a considerable amount of overhead involved in running the tests, including the deployment of the tests, the verification of the results, the handling of false alarms, and the maintenance of the test execution infrastructure. Automated tests have to be maintained too, as tests become obsolete or break because of changes that concern the user interface, output formats, APIs or protocols. In addition, the total cost of ownership of test tools involves not only the license fees but also additional costs such as training or dealing with technical problems since test tools are typically large and complex products. The costs usually exceed the potential savings from faster and cheaper (automated) test execution. Thus, while it is sometimes argued that test automation pays off due to the reduced test execution cycles, in Web application testing the main benefit of automation comes from the advantages listed above that lead to improved quality and shorter time-to-market cycles. Even if the costs incurred by test automation may be higher compared with manual testing, the resulting benefits in quality and time call for this investment.

Thus, a sensible investment strategy uses tools to enhance manual testing, but does not aim to replace manual testing with automated testing. Manual tests are best to explore new functionality, driven by creativity, understanding, experience, and the gut feelings of a human tester. Automated tests secure existing functionality, find side-effects and defects that have been re-introduced, and enhance the range and accuracy of manual tests. Therefore, not all testing has to be automated. Partial automation can be very useful and various test tools are available to support the different kinds of testing activities.


4.7.2 Test Tools

Commonly used test tools support the following tasks:

 Test planning and management: These tools facilitate the management of test cases and test data, the selection of suitable test cases, and the collection of test results and bug tracking.

 Test case design: Tools available to design test cases support the developer in deriving test cases from the requirements definition or in generating test data.

 Static and dynamic analyses: Tools available to analyze Web applications, e.g., HTML validators or link checkers, try to discover deviations from standards.

 Automating test runs: Tools can automate test runs by simulating or logging as well as capturing and replaying the behavior of components or users.

 System monitoring: Tools available to monitor systems support us in detecting errors, e.g., by capturing system properties, such as memory consumption or database access.

 General tasks: Tools like editors or report generators are helpful and mentioned here for the sake of completeness.
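A static analysis tool of the link-checker kind can be sketched with the standard-library HTML parser. This simplified version flags links whose targets are not in a known set of pages; a real link checker would instead issue an HTTP request per target and report non-2xx responses.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href targets of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def find_broken_links(html, existing_pages):
    """Flag links whose target is not among the known pages.

    Simplification: a real checker sends an HTTP request per target
    instead of comparing against a precomputed set of pages.
    """
    parser = LinkExtractor()
    parser.feed(html)
    return [link for link in parser.links if link not in existing_pages]

page = '<a href="/home">Home</a> <a href="/old-offer">Offer</a>'
broken = find_broken_links(page, existing_pages={"/home", "/products"})
assert broken == ["/old-offer"]
```

Run periodically over all pages, such a check catches the broken links that arise whenever content is restructured, which is why link checkers are among the most widely used Web test tools.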

4.7.3 Selecting Test Tools

The current trend in test tools for Web applications is closely coupled with the continual evolution of Web technologies and modern development processes. A large number of different tools are available today. When selecting suitable tools for Web application testing, we therefore need to research the current tool landscape and re-evaluate our selection regularly. The test scheme introduced in this chapter can support us in selecting tools and building a well-structured and complete tool box.