Appendix C. Glossary

The terms in this appendix are adapted from the "Standard Glossary of Terms Used in Software Testing", Version 2.0 (dated December 2, 2007), produced by the Glossary Working Party of the International Software Testing Qualifications Board (ISTQB). Only those terms related to test automation are included here.

actual result: The behavior produced/observed when a component or system is tested.

ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.

automated testware: Testware used in automated testing, such as tool scripts.

availability: The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage.

basis test set: A set of test cases derived from the internal structure of a component or specification to ensure that 100% of a specified coverage criterion will be achieved.

behavior: The response of a component or system to a set of input values and preconditions.

benchmark test: (1) A standard against which measurements or comparisons can be made. (2) A test that is being used to compare components or systems to each other or to a standard as in (1).

boundary value: An input value or output value that is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

boundary value analysis: A black-box test design technique, in which test cases are designed based on boundary values.

boundary value coverage: The percentage of boundary values that have been exercised by a test suite.
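
To make the three boundary-value entries above concrete, the following sketch (plain Java; the isValidQuantity method and the 1 to 100 range are invented for illustration) derives the test inputs that boundary value analysis would select for a field whose specified valid range is 1 to 100.

```java
// Hypothetical example: boundary values for an order-quantity field
// whose specified valid range is 1..100 (inclusive).
public class BoundaryValueExample {

    // Component under test (invented for illustration).
    static boolean isValidQuantity(int quantity) {
        return quantity >= 1 && quantity <= 100;
    }

    public static void main(String[] args) {
        // Boundary values: the edges of the valid partition and the
        // values at the smallest incremental distance on either side.
        int[] inputs       = {0,     1,    100,  101};
        boolean[] expected = {false, true, true, false};

        for (int i = 0; i < inputs.length; i++) {
            boolean actual = isValidQuantity(inputs[i]);
            String verdict = (actual == expected[i]) ? "pass" : "fail";
            System.out.printf("quantity=%d expected=%b actual=%b -> %s%n",
                    inputs[i], expected[i], actual, verdict);
        }
    }
}
```

Exercising all four inputs gives 100% boundary value coverage for this single input range.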

branch: A basic block that can be selected for execution, based on a program construct in which one of two or more alternative program paths is available, e.g. case, jump, go to, if-then-else.

business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

capture/playback/replay tool: A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.

CAST: Acronym for Computer Aided Software Testing.

cause-effect graph: A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

cause-effect graphing: A black-box test design technique in which test cases are designed from cause-effect graphs.

changeability: The capability of the software product to enable specified modifications to be implemented.

component: A minimal software item that can be tested in isolation.

component integration testing: Testing performed to expose defects in the interfaces and interaction between integrated components.

component specification: A description of a component's function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource-utilization).

component testing: The testing of individual software components.

concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system.

condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition.

condition coverage: The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.

condition determination coverage: The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test suite. 100% condition determination coverage implies 100% decision condition coverage.

condition determination testing: A white-box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.

condition outcome: The evaluation of a condition to True or False.

condition testing: A white-box test design technique in which test cases are designed to execute condition outcomes.
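
The sketch below (plain Java; the allowDiscount method and its thresholds are invented) illustrates condition coverage on a decision with two conditions, and notes why full condition coverage does not by itself give full decision or condition determination coverage.

```java
// Hypothetical sketch illustrating condition coverage on a decision
// with two conditions A and B. Method and values are invented.
public class ConditionCoverageExample {

    // Decision under test: discount allowed when the customer is a
    // member (condition A) and the order total exceeds 50 (condition B).
    static boolean allowDiscount(boolean isMember, double total) {
        boolean a = isMember;       // condition A
        boolean b = total > 50.0;   // condition B
        return a && b;              // decision outcome
    }

    public static void main(String[] args) {
        // 100% condition coverage: A and B each take both outcomes.
        //   (true, 10.0)  -> A=true,  B=false, decision=false
        //   (false, 60.0) -> A=false, B=true,  decision=false
        // Note the decision itself is never true, so these two cases do
        // NOT achieve 100% decision coverage; condition determination
        // coverage would additionally require a case such as (true, 60.0).
        System.out.println(allowDiscount(true, 10.0));
        System.out.println(allowDiscount(false, 60.0));
        System.out.println(allowDiscount(true, 60.0));  // added for condition determination
    }
}
```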

cost of quality: The total costs incurred on quality activities and issues, and often split into prevention costs, appraisal costs, internal failure costs, and external failure costs.

data-driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools.
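
A minimal sketch of the technique follows, assuming an invented square method as the component under test; the table would normally be read from a CSV file or spreadsheet (in JMeter, elements such as CSV Data Set Config serve a similar purpose).

```java
// Hypothetical control script for data-driven testing: a single loop
// drives many test cases from a table of inputs and expected results.
public class DataDrivenExample {

    // Component under test (invented for illustration): squares its input.
    static int square(int x) {
        return x * x;
    }

    public static void main(String[] args) {
        // Each row: { input, expected result }. In practice the table
        // would be read from a CSV file or spreadsheet.
        int[][] table = { {0, 0}, {3, 9}, {-4, 16} };

        for (int[] row : table) {
            int actual = square(row[0]);
            System.out.printf("input=%d expected=%d actual=%d -> %s%n",
                    row[0], row[1], actual, actual == row[1] ? "pass" : "fail");
        }
    }
}
```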

database integrity testing: Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes, and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated, or created.

defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

defect-based test design technique: A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category.

development testing: Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers.

domain: The set from which valid input and/or output values can be selected.

dynamic comparison: Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.

dynamic testing: Testing that involves the execution of the software of a component or system.

efficiency: The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions.

efficiency testing: The process of testing to determine the efficiency of a software product.

equivalence partition/class: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

equivalence-partition coverage: The percentage of equivalence partitions that have been exercised by a test suite.
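
The following sketch (plain Java; the isEligible method and the 18 to 65 range are invented) shows one representative test value chosen from each equivalence partition of an age field.

```java
// Hypothetical sketch of equivalence partitioning for an "age" input
// specified as valid in the range 18..65.
public class EquivalencePartitionExample {

    // Component under test (invented): accepts ages 18..65 inclusive.
    static boolean isEligible(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        // Three partitions, one representative value from each:
        //   age < 18   (invalid, too low)   -> 10
        //   18..65     (valid)              -> 40
        //   age > 65   (invalid, too high)  -> 70
        int[] representatives = {10, 40, 70};
        boolean[] expected    = {false, true, false};

        for (int i = 0; i < representatives.length; i++) {
            boolean actual = isEligible(representatives[i]);
            System.out.printf("age=%d expected=%b actual=%b%n",
                    representatives[i], expected[i], actual);
        }
        // Executing one value per partition gives 100%
        // equivalence-partition coverage for this input.
    }
}
```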

exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.

expected result: The behavior predicted by the specification, or another source, of the component or system under specified conditions.

exploratory testing: An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

fail: A test is deemed to fail if its actual result does not match its expected result.

failure: Deviation of the component or system from its expected delivery, service, or result.

failure rate: The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs.

functional testing: Testing based on an analysis of the specification of the functionality of a component or system.

functionality testing: The process of testing to determine the functionality of a software product.

keyword-driven testing: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
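
A minimal keyword-driven sketch is shown below; the keywords, steps, and execute interpreter are invented for illustration, and in practice the step table would come from a data file rather than being inlined.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical keyword-driven sketch: the test data supplies keywords
// and arguments, and a supporting interpreter executes each keyword.
// All keywords, steps, and actions here are invented for illustration.
public class KeywordDrivenExample {

    public static void main(String[] args) {
        // Each row: { keyword, argument }. In practice these rows would
        // be read from a spreadsheet or CSV file.
        List<String[]> steps = Arrays.asList(
                new String[] {"open",   "http://localhost:8080/login"},
                new String[] {"type",   "username=demo"},
                new String[] {"click",  "Login"},
                new String[] {"verify", "Welcome"});

        for (String[] step : steps) {
            execute(step[0], step[1]);
        }
    }

    // The supporting script that interprets each keyword.
    static void execute(String keyword, String argument) {
        if ("open".equals(keyword)) {
            System.out.println("Opening page " + argument);
        } else if ("type".equals(keyword)) {
            System.out.println("Typing " + argument);
        } else if ("click".equals(keyword)) {
            System.out.println("Clicking " + argument);
        } else if ("verify".equals(keyword)) {
            System.out.println("Verifying presence of text: " + argument);
        } else {
            throw new IllegalArgumentException("Unknown keyword: " + keyword);
        }
    }
}
```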

latency (client): Client latency is the time that it takes for a request to reach a server and for the response to travel back from the server to the client. It includes both network latency and server latency.

latency (network): Network latency is the additional time that it takes for a request (from a client) and a response (from a server) to cross a network until it reaches the intended destination.

latency (server): Server latency is the time the server takes to complete the execution of a request normally made by a client machine.
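
From the client side only the combined figure is directly observable: client latency is approximately network latency plus server latency. The sketch below (plain Java, standard HttpURLConnection; the URL is a placeholder) times a single request as a client would see it.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch: measuring latency as observed from the client.
// Client latency ~= network latency + server latency; only the combined
// figure can be measured from the client side alone.
public class ClientLatencyExample {

    public static void main(String[] args) throws Exception {
        // Placeholder URL; substitute the server under test.
        URL url = new URL("http://localhost:8080/");

        long start = System.nanoTime();
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        int status = connection.getResponseCode();   // sends request, waits for response
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        connection.disconnect();

        // Separating network latency from server latency requires
        // measurements on the server side as well (see monitoring tool).
        System.out.println("HTTP " + status + " in " + elapsedMillis + " ms (client latency)");
    }
}
```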

load profile: A specification of the activity that a component or system being tested may experience in production. A load profile consists of a designated number of virtual users who process a defined set of transactions in a specified time period and according to a predefined operational profile.

load testing: A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
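
The sketch below is a deliberately simplified load generator, not JMeter itself: a fixed pool of virtual users sends requests to a placeholder URL and the average response time is reported. The thread count, request count, and target URL are invented; JMeter's Thread Groups, Samplers, and Listeners perform this job far more completely.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical load-test sketch: a fixed number of "virtual users" each
// send a fixed number of requests, and the average response time is
// reported at the end. All numbers and the URL are placeholders.
public class SimpleLoadTest {

    static final int VIRTUAL_USERS = 10;
    static final int REQUESTS_PER_USER = 20;
    static final String TARGET = "http://localhost:8080/";

    public static void main(String[] args) throws Exception {
        ExecutorService users = Executors.newFixedThreadPool(VIRTUAL_USERS);
        AtomicLong totalMillis = new AtomicLong();
        AtomicLong samples = new AtomicLong();

        for (int u = 0; u < VIRTUAL_USERS; u++) {
            users.submit(() -> {
                for (int r = 0; r < REQUESTS_PER_USER; r++) {
                    try {
                        long start = System.nanoTime();
                        HttpURLConnection c =
                                (HttpURLConnection) new URL(TARGET).openConnection();
                        c.getResponseCode();          // request + response
                        c.disconnect();
                        totalMillis.addAndGet((System.nanoTime() - start) / 1_000_000);
                        samples.incrementAndGet();
                    } catch (Exception e) {
                        System.err.println("Request failed: " + e.getMessage());
                    }
                }
            });
        }

        users.shutdown();
        users.awaitTermination(5, TimeUnit.MINUTES);
        System.out.printf("Samples: %d, average response time: %d ms%n",
                samples.get(),
                samples.get() == 0 ? 0 : totalMillis.get() / samples.get());
    }
}
```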

master test plan: A test plan that typically addresses multiple test levels.

metrics: Metrics are the actual measurements obtained by running performance tests. These include system-related metrics such as CPU, memory, disk I/O, network I/O, and resource-utilization levels, as well as application-specific metrics such as performance counters and timing data.

monitoring tool: A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyzes the behavior of the component or system.

pass: A test is deemed to pass if its actual result matches its expected result.

pass/fail criteria: Decision rules used to determine whether a test item (function) or feature has passed or failed a test.

performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

performance indicator: A high-level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development.

performance profiling: Definition of user profiles in performance, load and/or stress testing. Profiles should reflect anticipated or actual usage based on an operational profile of a component or system, and hence the expected workload.

performance budgets: Performance budgets are your constraints: they specify the amount of resources you can use for specific scenarios and operations while still meeting your performance objectives.
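
As an example of checking a budget after a test run, the sketch below (plain Java; the sample response times and the 2000 ms threshold are invented) verifies whether the 95th-percentile response time stays within budget.

```java
import java.util.Arrays;

// Hypothetical sketch of checking a performance budget: the budget here
// is "95th-percentile response time no more than 2000 ms". The sample
// values and the threshold are invented for illustration.
public class PerformanceBudgetCheck {

    static final long BUDGET_MILLIS = 2000;

    public static void main(String[] args) {
        // Response times (ms) collected from a test run.
        long[] responseTimes = {420, 510, 640, 700, 850, 910, 1200, 1450, 1800, 2600};

        Arrays.sort(responseTimes);
        // Index of the 95th percentile (simple nearest-rank method).
        int index = (int) Math.ceil(0.95 * responseTimes.length) - 1;
        long p95 = responseTimes[index];

        System.out.printf("95th percentile = %d ms, budget = %d ms -> %s%n",
                p95, BUDGET_MILLIS,
                p95 <= BUDGET_MILLIS ? "within budget" : "budget exceeded");
    }
}
```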

performance testing: The process of testing to determine the performance of a software product.

performance testing tool: A tool to support performance testing and that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.

record/playback tool: See capture/playback tool.

recorder/scribe: The person who records each defect that is mentioned and any suggestions for process improvement during a review meeting, on a logging form. The recorder/scribe has to ensure that the logging form is readable and understandable.

regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

stress testing: A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified work loads, or with reduced availability of resources such as access to memory or servers.

stress testing tool: A tool that supports stress testing.

test: A set of one or more test cases.

test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out, the starting points regarding the test process, the test design techniques to be applied, the exit criteria, and the test types to be performed.

test automation: The use of software to perform or support test activities, e.g. test management, test design, test execution, and results checking.

test case: A set of input values, execution preconditions, expected results, and execution post conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.

test case design technique: Procedure used to derive and/or select test cases.

test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.

test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

test cycle: Execution of the test process against a single identifiable release of the test object.

test data: Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

test data preparation tool: A type of test tool that enables data to be selected from existing databases or created, generated, manipulated, and edited for use in testing.

test design: (1) See test design specification. (2) The process of transforming general testing objectives into tangible test conditions and test cases.

test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach, and identifying the associated high-level test cases. [After IEEE 829]

test environment: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

test execution: The process of running a test on the component or system under test, producing actual result(s).

test execution automation: The use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.

test execution tool: A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback.

test generator: See test data preparation tool.

test harness: A test environment comprising stubs and drivers needed to execute a test.
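
The sketch below (plain Java; the OrderService, PaymentGateway, and stub behavior are invented) shows a minimal harness: a driver sets up a stub in place of a real dependency and exercises the component under test.

```java
// Hypothetical test harness sketch: a driver exercises the component
// under test, and a stub replaces a dependency (e.g., a real payment
// gateway) that is not available in the test environment.
public class TestHarnessExample {

    // Dependency of the component under test.
    interface PaymentGateway {
        boolean charge(double amount);
    }

    // Component under test (invented): places an order if payment succeeds.
    static class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }
        String placeOrder(double amount) {
            return gateway.charge(amount) ? "CONFIRMED" : "REJECTED";
        }
    }

    // Stub: canned behavior standing in for the real gateway.
    static class StubGateway implements PaymentGateway {
        public boolean charge(double amount) {
            return amount <= 100.0;   // approve small amounts only
        }
    }

    // Driver: sets up the stub, calls the component, checks the result.
    public static void main(String[] args) {
        OrderService service = new OrderService(new StubGateway());
        System.out.println("50.0  -> " + service.placeOrder(50.0));   // expected: CONFIRMED
        System.out.println("500.0 -> " + service.placeOrder(500.0));  // expected: REJECTED
    }
}
```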

test plan: A document describing the scope, approach, resources, and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

test run: Execution of a test on a specific version of the test object.

test script: Commonly used to refer to a test procedure specification, especially an automated one.

test set: See test suite.

test suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

tester: A skilled professional who is involved in the testing of a component or system.

testing: The process consisting of all life-cycle activities, both static and dynamic, concerned with planning, preparation, and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose, and to detect defects.

time behavior: See performance.

volume testing: Testing where the system is subjected to large volumes of data.

For more terms, visit: http://www.istqb.org/downloads/glossary-1.0.pdf.