The Grinder is a Java™ load-testing framework. It is freely available under a BSD-style open-source license.
The Grinder 3 uses the powerful Jython scripting language and allows any Java (or Jython) code to be encapsulated as a test, which practically removes the need to write custom plug-ins.
The Grinder is thus a load-testing tool, and it also supports a recording mode.
The Grinder includes the software Jython (a Java implementation of Python) for scripting, created by Jim Hugunin, Barry Warsaw, and the Jython team; Jython's predecessor, JPython, was created in late 1997 by Jim Hugunin.
The most significant change The Grinder 3 introduces is a Jython scripting engine that is used to execute test scripts. Jython is a Java implementation of the popular Python language, and the test scripts specify the tests to run.
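To give a flavour of such a script, a minimal Grinder 3 test script might look roughly like the sketch below. It is illustrative only: it runs inside The Grinder's own Jython interpreter (the net.grinder classes come from grinder.jar, not from a standalone Python installation), and the URL is a placeholder.

```python
# Sketch of a minimal Grinder 3 Jython test script -- illustrative only;
# it must be run by a Grinder worker process, not a standalone interpreter.
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

# Test number 1 with a descriptive name; statistics are reported
# against this test in the console and the logs.
test1 = Test(1, "Fetch home page")
request = test1.wrap(HTTPRequest())  # instrument the request as Test 1

class TestRunner:
    # Each worker thread creates one TestRunner instance and
    # calls it once per run.
    def __call__(self):
        request.GET("http://localhost:8080/")  # placeholder URL
```

Every worker thread instantiates TestRunner once and invokes it for each run, so the number of GET requests issued is (threads x runs) per worker process.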
The Grinder works on any hardware platform and any operating system that supports J2SE 1.3 and above. It can simulate web browsers and other devices that use HTTP, and HTTPS.
The Grinder is a framework for running test scripts across a number of machines. It is used for generating load by simulating client requests to your application, and for measuring how an application copes with that load.
This document is prepared with users who are new to The Grinder in mind.
Installation and start-up of The Grinder are discussed first; its features, the various processes The Grinder uses to work, their responsibilities, and the role of each entity within those processes are then discussed in detail.
Since it would not be appropriate to explain the functioning of The Grinder without discussing the method of recording and executing a script for a web application, we have included an example that records a script for a web application and then executes it, using one performance plan to record the scenarios.
2. Grinder Installation:
The latest version of The Grinder can be downloaded from http://grinder.sourceforge.net/ (we have used Grinder 3.0 beta).
The Grinder 3 is distributed as two zip files. Everything required to run The Grinder is in the zip file labeled grinder-3.0-version.zip. The remaining files that are needed to build The Grinder are distributed in the zip file labeled grinder-3.0-version-src.zip; these are mainly of interest to developers wanting to extend The Grinder.
Download the grinder zip file and unzip it on the local hard drive.
Prerequisites:
Java – J2SE 1.4.1 or a later version should be installed.
3. Starting the Grinder:
There are two ways to start The Grinder on the Windows platform:
3.1 Using Command Prompt:
In order to start The Grinder explicitly from the command prompt, we need to set the path for two components:
a) grinder.jar
b) The Java home directory
To set CLASSPATH and JAVA_HOME, we can go about it this way:
Control Panel → System → Advanced → Environment Variables → create a new variable in the System variables section.
Variable = CLASSPATH
Value = complete path to the Grinder directory\lib\grinder.jar
Ex- C:\Grinder\engine\grinder-3.0-beta30\lib\grinder.jar
Variable = JAVA_HOME
Value = full path to the Java installation directory (not its bin subdirectory; bin is added to PATH separately)
Ex- E:\Program Files\Java\jre1.5.0_07
Once the variables are set, use the following commands at the command prompt:
a) java net.grinder.Console – starts the console
Ex- c:\test\> java net.grinder.Console
b) java net.grinder.Grinder – starts an agent process
Ex- c:\test\> java net.grinder.Grinder
Command (b) should be used on all the machines that will run an agent process.
3.2 Using command files:
Create the following command files in the directory where The Grinder is installed:
1. setGrinderEnv.cmd:
set GRINDERPATH=(full path to grinder install directory)
set GRINDERPROPERTIES=(full path to the grinder.properties directory)\grinder.properties
set CLASSPATH=%GRINDERPATH%\lib\grinder.jar;%CLASSPATH%
set JAVA_HOME=(full path to java install directory)
PATH=%JAVA_HOME%\bin;%PATH%
2. startAgent.cmd:
call (path to setGrinderEnv.cmd)\setGrinderEnv.cmd
echo %CLASSPATH%
java -cp %CLASSPATH% net.grinder.Grinder %GRINDERPROPERTIES%
3. startConsole.cmd:
call (path to setGrinderEnv.cmd)\setGrinderEnv.cmd
java -cp %CLASSPATH% net.grinder.Console
4. The Grinder Features:
The Grinder is an easy-to-use Java-based load generation and performance measurement tool that adapts to a wide range of J2EE applications. It is BSD-style licensed and open source.
4.1 Capabilities of the Grinder
Load Testing: Load Testing determines if an application can support a specified load (for example, 500 concurrent users) with specified response times. Load Testing is used to create benchmarks.
Capacity Testing: Capacity Testing determines the maximum load that an application can sustain before system failure.
Functional Testing: Functional Testing proves the correct behavior of an application.
Stress Testing: Stress Testing is load testing over an extended period of time. Stress Testing determines if an application can meet specified goals for stability and reliability, under a specified load, for a specified time period.
4.2 Standards
100% Pure Java: The Grinder works on any hardware platform and any operating system that supports J2SE 1.3 and above.
Web Browsers: The Grinder can simulate web browsers and other devices that use HTTP, and HTTPS.
Web services: The Grinder can be used to test Web Service interfaces using protocols such as SOAP and XML-RPC.
Database: The Grinder can be used to test databases using JDBC.
Middleware: The Grinder can be used to test RPC and MOM based systems using protocols such as IIOP, RMI/IIOP, RMI/JRMP, and JMS.
Other Internet protocols: The Grinder can be used to test systems that utilise other protocols such as POP3, SMTP, FTP, and LDAP.
4.3 The Grinder Architecture
Goal: Minimize system resource requirements while maximizing the number of test contexts ("virtual users").
Multi-threaded, multi-process: Each test context runs in its own thread. The threads can be split over many processes depending on the requirements of the test and the capabilities of the load injection machine.
Distributed: The Grinder makes it easy to coordinate and monitor the activity of processes across a network of many load injection machines from a central console.
Scalable: The Grinder typically can support several hundred HTTP test contexts per load injection machine. (The number varies depending on the type of test client). More load injection machines can be added to generate bigger loads.
4.4 Console
The console is the heart of The Grinder: it is the engine that controls the test runs and reports the test results.
Co-ordination of the processes takes place at the console. The worker processes can be started, stopped, or even reset from one central console, and the console displays the currently connected worker processes and their status.
The Grinder console thus provides an easy way to control multiple test-client machines, display test results, and control test runs. Process monitoring becomes easy with the console's display of the current worker processes and threads alongside the results.
Graphical Interface: 100% Java Swing user interface.
Process coordination: Worker processes can be started, stopped and reset from one central console.
Process monitoring: Dynamic display of current worker processes and threads.
Script editing: Central editing and management of test scripts. (Future)
Figure a
4.5 Statistics, Reports, Charts
Test monitoring: Pre-defined charts for response time, test throughput. Display the number of invocations, test result (pass/fail), average, minimum and maximum values for response time and tests per second for each test.
Data collation: Collates data from worker processes. Data can be saved for import into a spreadsheet or other analysis tool.
Instrument anything: The Grinder records statistics about the number of times each test has been called and the response times achieved. Any part of the test script can be marked as a test.
Statistics engine: Scripts can declare their own statistics and report against them. The values will appear in the console and the data logs. Composite statistics can be specified as expressions involving other statistics.
4.6 Script
Record real users: Scripts can be created by recording actions of a real user using the TCP Proxy. The script can then be customized by hand.
Powerful scripting in Python: Simple to use but powerful, fully object-oriented scripting.
Multiple scenarios: Arbitrary looping and branching allows the simulation of multiple scenarios. Simple scenarios can be composed into more complex scenarios. For example, you might allocate 10% of test contexts to a login scenario, 70% to searching, 10% to browsing, and 10% to buying; or you might have different workloads for specific times of a day.
Access to any Java API: Jython allows any Java-based API to be used directly from the test script.
Parameterization of input data: Input data (e.g. URL parameters, form fields) can be dynamically generated. The source of the data can be anything including flat files, random generation, a database, or previously captured output.
Content Verification: Scripts have full access to test results. In the future, The Grinder will include support for enhanced parsing of common results such as HTML pages.
4.7 The Grinder Plug-ins
HTTP: The Grinder has special support for HTTP that automatically handles cookie and connection management for test contexts.
Custom: Users can write their own plug-ins to a documented interface; although this is rarely necessary due to the powerful scripting facilities.
4.8 HTTP Plug-in
HTTP 1.0, HTTP 1.1: Support for both HTTP 1.0 and HTTP 1.1 is provided.
HTTPS: The Grinder supports HTTP over SSL.
Cookies: Full support for Cookies is provided.
Multi-part forms: The Grinder supports multi-part forms.
4.9 TCP Proxy
TCP proxy: A TCP proxy utility is supplied that can be used to intercept system interaction at the protocol level. It is useful for recording scripts and as a debugging tool.
HTTP Proxy: The TCP proxy can be configured as an HTTP/HTTPS proxy for easy integration with web browsers.
SSL Support: The TCP proxy can simulate SSL sessions.
Filter-based architecture: The TCP proxy has pluggable filter architecture. Users can write their own filters.
5. The Grinder process
The Grinder is a framework for running test scripts across a number of machines. The framework comprises three types of process (or program): worker processes, agent processes, and the console. The responsibilities of each process type are:
Worker processes – interpret Jython test scripts and perform tests using a number of worker threads.
Agent processes – a single agent process runs on each test-client machine and is responsible for managing the worker processes on that machine.
Console – co-ordinates the other processes, and collates and displays statistics.
The Grinder allows co-ordination and monitoring of the activity of the processes across a network of many load injection machines from a central console. As The Grinder is written in Java, each of these processes is a Java Virtual Machine (JVM) and can be run on any computer with a suitable version of Java installed.
For heavy duty testing, you start an agent process on each of several client machines. The worker processes they launch can be controlled and monitored using the console. There is little reason to run more than one agent on a single machine, but you can if you wish.
The Grinder typically can support several hundred HTTP test contexts per load injection machine. More load injection machines can be added to generate bigger loads.
6. The Process Controls:
Worker processes that are configured to receive console signals go through three states:
Initiated (waiting for a console signal)
Running (performing tests, reporting to console)
Finished (waiting for a console signal)
7. Console’s Display:
The tabs available on the Console are
1. Graphs
2. Results
3. Processes
4. Script
These tabs on the console display information about The Grinder and its tests.
7.1 Graphs:
Each graph displays the 7 most recent Tests Per Second (TPS) values for a particular test.
Figure b
The colors of the graphs reflect the relative response time: long response times are more red, short response times are more yellow.
7.2 Results:
This tab shows the results from The Grinder instrumentation.
A number of instruments are defined, and each instrumented result is displayed.
Ex: Test, Mean Time, Successful Tests, Errors, TPS, Peak TPS
Figure-C
7.3 Processes:
This tab gives the information about the Agents, their worker processes and associated threads.
The listed headers under this tab are
1. Process
2. Type
3. State
Figure c
7.4 Scripts:
This tab contains the beginnings of console support for script editing and also controls for the script distribution system.
Set the root directory for script distribution – the directory on the console host that contains the scripts for distribution.
Set the script to run – selects the script, from those in the distributed list, that is to be run.
Send changed files to worker processes – pushes out the contents of the root directory to all connected worker processes.
Figure d
8. Creation of the grinder.properties File:
This file resides on all the machines that run an agent process.
It is a configuration file read by the agent and worker processes (and by the plug-in), and it is central to the working of The Grinder.
The file contains all the information necessary to run a particular set of tests, such as the number of worker processes, the number of worker threads, and the plug-in to use.
For most plug-ins, the file also specifies the tests to run and can be thought of as the "test script"; for example, when using the HTTP plug-in, the grinder.properties file contains the URL for each test.
The agent process and the worker processes read their configuration from grinder.properties when they are started.
Each context simulates an active user session. The number of contexts is given by the following formula:
(Number of agent processes) × (Number of worker processes) × (Number of worker threads)
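As a quick sketch of this arithmetic, the values below match the example scenario used later in this document (four agent machines, each configured with grinder.processes=1 and grinder.threads=2):

```python
# Number of simulated users ("test contexts") for one test run,
# computed from the Grinder process model. Illustrative values:
# four agent machines, grinder.processes=1, grinder.threads=2.
agents = 4
processes = 1   # grinder.processes
threads = 2     # grinder.threads

contexts = agents * processes * threads
print(contexts)  # 8 virtual users
```

This matches the 8 virtual users reported in the results section for 4 agents.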
Figure e
The Grinder worker and agent processes are controlled by setting properties in the grinder.properties file.
Starting The Grinder agent process without a grinder.properties file will cause it to use the default addresses, one worker process, one thread, and one run through the test script found in the file grinder.py.
Overall, this file consists of the properties understood by The Grinder engine.
Some of the properties are listed below, with their descriptions and default values.
Table 1- Grinder Properties
Property | Description | Default
---------|-------------|--------
grinder.processes | The number of worker processes the agent should start. | 1
grinder.threads | The number of worker threads that each worker process spawns. | 1
grinder.runs | The number of runs of the test script each thread performs. | 1
grinder.receiveConsoleSignal | Set to true to respond to console signals. | true
grinder.consoleAddress | The IP address or host name to use for communication from The Grinder processes to the console. | All the network interfaces of the local machine
grinder.consolePort | The IP port to use for communication from The Grinder processes to the console. | 6372
grinder.plugin | The plug-in class to use. Currently each script uses a single plug-in. | –
grinder.logDirectory | Directory to write log files to. Created if it doesn't already exist. | –
grinder.hostID | Override the "host" string used in log filenames and logs. | The host name
grinder.processIncrement | If set, the agent will ramp up the number of worker processes, starting the number specified every grinder.processIncrementInterval milliseconds. | Start all worker processes together
grinder.processIncrementInterval | Used in conjunction with grinder.processIncrement, this property sets the interval in milliseconds at which the agent starts new worker processes. | 60000 ms
grinder.initialProcesses | Used in conjunction with grinder.processIncrement, this property sets the initial number of worker processes to start. | The value of grinder.processIncrement
grinder.duration | The maximum length of time in milliseconds that each worker process should run for. grinder.duration can be specified in conjunction with grinder.runs, in which case the worker processes will terminate if either the duration or the number of runs is exceeded. | Run forever
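To see how a handful of these properties combine, the sketch below parses a small grinder.properties fragment with plain string handling (a simplification that ignores the full Java properties escape and continuation rules; the values are hypothetical) and derives the work one agent will perform:

```python
# Parse a grinder.properties fragment (simplified: no escape sequences
# or line continuations) and derive the per-agent worker model.
props_text = """
grinder.processes=2
grinder.threads=10
grinder.runs=5
"""

props = {}
for line in props_text.splitlines():
    line = line.strip()
    if line and not line.startswith("#") and "=" in line:
        key, value = line.split("=", 1)
        props[key.strip()] = value.strip()

# One agent: processes x threads concurrent contexts, each doing `runs` runs.
contexts = int(props["grinder.processes"]) * int(props["grinder.threads"])
total_runs = contexts * int(props["grinder.runs"])
print(contexts, total_runs)  # 20 100
```

A real deployment would let The Grinder read the file itself; the point here is only how the three numeric properties multiply together.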
9. HTTPPlugin:
The HTTPPlugin is a mature plug-in for testing HTTP services.
It has a number of utilities useful for HTTP scripts as well as a tool, TCPProxy.
It is quite feasible to have HTTP test scripts containing hundreds or thousands of individual tests.
10. TCPProxy:
The Grinder 3.0 ships with a tool, the TCPProxy, that can automatically capture the HTTP requests a user makes with a browser and generate corresponding test-script entries.
The TCP Proxy is configured to sit between the user's browser and the target server and capture all the requests the browser makes before proxying the requests on to the server.
You can start the TCPProxy in a special mode in which it outputs a recording of the requests you make with the browser as a full test script. You can then take this test script and replay it using The Grinder.
It is useful for recording scripts and as a debugging tool.
The TCP proxy can be configured as an HTTP/HTTPS proxy for easy integration with web browsers.
The TCP proxy can simulate SSL sessions.
11. Recording a Script for a Web Application:
Pre-conditions:
11.1 Proxy Settings:
(E.g. according to the performance plan for Trakstar, we recorded all the transactions mentioned in one activity as a single script; in this way we recorded all four activities for one scenario, using the following method.)
First, set up Internet Explorer temporarily to use a proxy server:
Menu → Tools → Internet Options → Connections tab
Click the LAN Settings button
Check 'Use a proxy server for your LAN'
- Click the Advanced button
- Set the HTTP proxy address to localhost, port 8001
- Set the Secure proxy address to localhost, port 8001
Figure f
11.2 TCPProxy:
Start the TCPProxy within The Grinder using the following command at the command prompt:
C:\Test>java net.grinder.TCPProxy -console -http > Activity1.py
Surf away in the IE session, and all your actions will be recorded.
Click Stop Recording in the TCPProxy window when you are done.
You will notice an Activity1.py file within the directory. This is your test script.
11.3 Steps for recording a script for web application:
Set the Grinder environment (setGrinderEnv.cmd)
Start TCPProxy (startProxy.cmd)
Perform the user actions on the website; they are recorded in turn
Stop the TCPProxy
A file containing the recorded script, with the extension .py, is created on the Grinder environment path.
11.4 Running the Test Scripts:
11.4.1 Method 1: Executing Multiple Agents on Different Machines:
To run the recorded script, the console and the agent processes should be started first. The user then sets the working directory in the console through the set-directory option. Once the directory is set, the user can select the script to run. The name of the script to execute should be included in the properties file via the property
grinder.script = script name
E.g-
grinder.script=Activity1.py
The user needs to set the script for the engine to run and then start the processes. This is explained step by step below.
Set up resource monitoring with performance monitor on the server, and client machine.
In the same directory, have a grinder.properties file; this file contains the configuration settings
E.g –
grinder.processes=1
grinder.threads=2
grinder.cycles=1
grinder.useConsole=true
grinder.consoleHost=192.168.8.215
grinder.consolePort=6372
grinder.logDirectory=log
grinder.appendLog=false
grinder.initialSleepTime=500
grinder.sleepTimeFactor=0.01
grinder.sleepTimeVariation=0.005
grinder.script=Activity1.py
Also within that directory, have the script you would like to run, e.g. the Activity1.py file.
Start the console on one machine, from the directory where you are going to store the resulting log files.
E.g.
C:\Test>Java net.grinder.Console
‘Set the root directory’ for script distribution on the console host where the script is listed.
‘Set the script’ to run on the console host, under the Scripts tab.
Start the agent on the individual machines using the following command (startAgent.cmd). [It will show the message "Waiting for console signal".]
E.g
C:\Test>Java net.grinder.Grinder
Click ‘Send changed files to worker processes’ to distribute files from the root directory in the console.
Start the processes (Action Menu =>Start Processes)
Finished [Waiting for console signal]
(Note: in order to store all the results in the same directory, run the console or agent from that directory at the command prompt.)
11.4.2 Method 2: Executing Multiple Agents on One Machine:
It is possible to execute the whole scenario from one machine using The Grinder. In that case we have to baseline the folder structure.
E.g. if there is a folder named "Scenario" with subfolders "Activity 1", "Activity 2", etc., there should be an individual script and grinder.properties file in each particular folder; e.g. Activity1.py and Activity1.properties should be included in the "Activity 1" folder.
Start the console, then open a separate command-prompt instance for each activity to be executed. Go to that activity's folder and execute the agent process from that particular folder using the following command.
E.g. The activity1.py script can be executed as-
C:\Scenario\Activity1>java net.grinder.Grinder
Then set the working directory in the console (do not set the script to be executed). Now the worker processes can be started from the console.
The statistical log data will be collected in the particular activity's script folder. The console holds the consolidated data for all the executed activities; one can save this data and analyze the results.
11.5 Results:
The generated log files are text files. The following tables show sample log data from data_pc1.log (sample data only, not complete data), the log data from out_pc1.log, and the data gathered in the console.
Using this data one can analyze and generate the reports.
Table 2 – Sample data from data_pc1.log

Thread | Run | Test | Milliseconds since start | Test time | Errors | HTTP response code | HTTP response length | HTTP response errors | Time to resolve host | Time to establish connection | Time to first byte
1 | 0 | 101 | 47 | 1391 | 0 | 200 | 8827 | 0 | 15 | 31 | 1359
1 | 0 | 100 | 1453 | 16 | 0 | 304 | 0 | 0 | 0 | 0 | 0
In the data above, the "Thread" and "Run" combinations are executed for the individual tests. Tests can be identified in the .py script files: every individual test is a request for a recorded object, identified by its number in the script. Each test is therefore executed once per thread/run combination (e.g. here Threads is set to 2 and Runs to 1, so each test is executed for Thread-0, Run-0 and for Thread-1, Run-0). The column "Milliseconds since start" shows the elapsed time in milliseconds from the start of the run to the execution of that test. "Test time" shows the complete time in milliseconds that the test took to execute. The "Errors" column shows the number of errors that occurred during execution of that individual test. "HTTP response code" shows the response behaviour of that test as a status code (please refer to the HTTP response codes for this). "HTTP response errors" shows any errors that occurred during the execution of that individual test.
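Because the data log is a plain comma-separated file, it can be post-processed directly with standard tools. The sketch below (plain Python, standard library only) loads the two sample rows shown above and tallies the error and response-code columns:

```python
import csv
import io

# The two sample rows from data_pc1.log above, in the file's CSV layout.
sample = """Thread, Run, Test, Milliseconds since start, Test time, Errors, HTTP response code, HTTP response length, HTTP response errors, Time to resolve host, Time to establish connection, Time to first byte
1, 0, 101, 47, 1391, 0, 200, 8827, 0, 15, 31, 1359
1, 0, 100, 1453, 16, 0, 304, 0, 0, 0, 0, 0
"""

# skipinitialspace handles the blank after each comma in the log format.
rows = list(csv.DictReader(io.StringIO(sample), skipinitialspace=True))
total_errors = sum(int(r["Errors"]) for r in rows)
codes = [r["HTTP response code"] for r in rows]
print(total_errors, codes)  # 0 ['200', '304']
```

The same approach scales to a full log file by replacing the in-memory sample with an open file handle.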
Table 3 – Sample data from out_pc1.log

Test No | Tests | Errors | Mean Test Time (ms) | Test Time Standard Deviation (ms) | Mean response length | Response bytes per second | Response errors | Mean time to resolve host | Mean time to establish connection | Mean time to first byte | Description
Test 100 | 2 | 0 | 1523.5 | 70.5 | 0 | ? | 0 | 0 | 0 | 0 | "Page 1"
Test 101 | 2 | 0 | 1453.5 | 62.5 | 8827 | ? | 0 | 15.5 | 23.5 | 1422 | "GET Main.php"
The data in out_pc1.log is the consolidated result over the executions of each test (e.g. "Test 100" is executed twice, for Thread-0, Run-0 and Thread-1, Run-0; this count appears in the "Tests" column). The "Errors" column shows the total number of errors that occurred during execution of that complete test. "Mean Test Time" shows the mean time in milliseconds the test took to execute. "Response bytes per second" shows the mean response bytes per second for that test. The "Response errors" column shows the total HTTP response errors that occurred across all executions of that individual test.
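The summary columns are straightforward aggregates of the per-execution "Test time" samples in the data log. As a sketch, the two sample values below are hypothetical, chosen so that they reproduce the mean (1523.5 ms) and standard deviation (70.5 ms) reported for Test 100 above:

```python
# Reconstruct "Mean Test Time" and "Test Time Standard Deviation" from
# per-execution samples. The two samples are hypothetical values for the
# two executions (Thread-0 and Thread-1) of Test 100.
times = [1453.0, 1594.0]  # "Test time" samples in milliseconds

mean = sum(times) / len(times)
variance = sum((t - mean) ** 2 for t in times) / len(times)  # population variance
std_dev = variance ** 0.5
print(mean, std_dev)  # 1523.5 70.5
```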
Table 4 – Sample data captured in the console

Test | Description | Successful Tests | Errors | Mean Time | Mean Time Standard Deviation | TPS | Peak TPS | Mean Response Length | Response Bytes Per Second | Response Errors | Mean time to resolve host | Mean time to establish connection | Mean time to first byte
Test 100 | Page 1 | 7 | 0 | 4530 | 1510 | 0.0128 | 3 | 0 | 0 | 0 | 0 | 0 | 0
Test 101 | GET Main.php | 7 | 0 | 4460 | 1510 | 0.0128 | 3 | 8830 | 113 | 0 | 26.7 | 462 | 4410
The sample data shown in the console can be captured at any moment during script execution and saved to a CSV file (named "grinder-console.data" by default). This data is similar to the data in out_pc1.log, but it additionally lets one identify and evaluate the "Mean Time", "TPS", and "Peak TPS" for an individual test sample.
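TPS is simply the number of test executions divided by the elapsed sampling time. For instance, the 0.0128 TPS reported above is consistent with 7 executions spread over roughly 547 seconds; the elapsed time here is an assumption for illustration, since the sampling window is not shown in the table:

```python
# Tests per second (TPS) as reported by the console: number of test
# executions divided by the elapsed wall-clock time of the sample.
successful_tests = 7
elapsed_seconds = 547.0  # hypothetical sampling window

tps = successful_tests / elapsed_seconds
print(round(tps, 4))  # 0.0128
```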
11.5.1 Result Analysis:
Using the data from the data_pc1.log and out_pc1.log files, as well as the Results tab on the console host, one can analyze the results as follows.
Analyze the response time, throughput, and errors using The Grinder results:
Mean time in the console is the response time.
Response bytes per second is the throughput, and errors can be analyzed using the Response Errors, Errors, and Successful Tests parameters.
Plot the graphs and analyze the results.
(Note: we collected the log-file data generated on the individual agent machines, together with the console-captured data, for scenario 1 as in the performance plan. We then consolidated that data in a spreadsheet, as shown in Table 5, and plotted the graphs.)
Table 5 – Consolidated Data
Activity | Tests | Errors | Mean Test Time (ms) | Test Time Standard Deviation (ms) | Mean Response Length | Response bytes per second | Response errors | Mean time to resolve host | Mean time to establish connection | Mean time to first byte
Total_Act1 | 22 | 0 | 3138.77 | 3074.56 | 2143.82 | ? | 0 | 3.55 | 3.55 | 1515.41
Total_Act2 | 44 | 0 | 2258.66 | 2509.04 | 1578 | ? | 0 | 0.7 | 1.07 | 1108.5
Total_Act3 | 106 | 0 | 1099.16 | 2075.97 | 737.09 | ? | 0 | 0.58 | 29.2 | 542.58
Total_Act4 | 114 | 0 | 1139.96 | 1954.86 | 881.04 | ? | 0 | 0.27 | 0.41 | 559.81
The results can be analyzed by examining throughput and mean test time, as shown in Graph 1 and Graph 2.
Graph 1 shows the "Mean Test Time" taken by each individual activity across all the tests of its script. Here Activity 1 took 3138.77 milliseconds for the 22 tests in its script, Activity 2 took 2258.66 milliseconds for its 44 tests, Activity 3 took 1099.16 milliseconds for its 106 tests, and Activity 4 took 1139.96 milliseconds for its 114 tests.
Graph 1
Graph 2 shows the "Response Bytes per Second" for each individual activity across all the tests of its script. The "Response bytes per second" column was not populated in the log files, so we took the sample data from the console and plotted a sample graph for throughput. Had this data been generated in the log files it would have supported a firmer conclusion, so throughput is not included in the conclusion.
Graph 2
No Errors or Response Errors were generated for 8 virtual users on 4 different agents on 4 individual machines. All the tests are shown as "Successful Tests" in the console.
Conclusion: according to this data and the graph in Graph 1, the Activity 1 and Activity 2 scripts take more time to execute only 22 and 44 tests, compared with the other activity scripts. Their execution time should be reduced and balanced to the level of the other activities.