Test Suite
The library comes with a test suite which can be used to verify an installation. It is also useful for developers who want to make changes to the library. The suite is located in the test subdirectory and contains the following files and folders:
test/run_test.py: This is the driver for the test suite.
test/v2-test-cases: This folder contains a set of test cases for python2.
test/v3-test-cases: This folder contains a set of test cases for python3.
run_test.py
run_test is a utility, implemented in Python, which acts as the driver for the test suite. Using run_test, one or more test programs can be executed. run_test runs each test program and compares its output with the expected results. If they match, the test is considered Passed. If they do not match, the test is considered Failed and the differences between the actual and expected results are printed out.
Each test program is a Python program which tests one or more features of a module. The result of each test can be a sentence, a value or a set of values, and should be written to standard output. Thus, running a test program may generate one or more lines of text. Each result (line) starts with a tag. If a test fails, the tag helps the tester quickly find out which part of the test program has failed. The tag contains function names, line numbers and, in some cases, other information which helps in locating the error. The results should not contain values which might change between runs; in other words, a test program should always generate the same result(s). For each test program, an expected result file should be created. This is a text file containing the output which the test program is expected to generate. It can be created by using run_test, or by redirecting the standard output of the test program to a file. The result file has the same name as the test program, but with the extension .res. Test program names start with test-. For example, if the test program is named test-tcp.py, the expected result file should be named test-tcp.res.
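A minimal test program in this style might look like the following. This is a hypothetical example (the tag format shown is illustrative, not prescribed by the library): each output line starts with a tag naming the check, and the output is deterministic so every run produces the same lines.

```python
# test-example.py -- a minimal, hypothetical test program.
# Each output line starts with a tag (here, the function name) so a
# failing check is easy to locate, and no run-dependent values
# (timestamps, addresses, etc.) appear in the output.

def check_addition():
    result = 2 + 2
    print("check_addition: sum=%d" % result)

def check_upper():
    print("check_upper: upper=%s" % "tcp".upper())

check_addition()
check_upper()
```

Redirecting this program's standard output to test-example.res would produce a matching expected result file.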
A set of command-line options/arguments can be passed to run_test to control its behaviour:
run_test -d <dir> [-t|-g] [-r] [-n <num>] [-f <file>] [-w <working-dir>]
-d <dir>: the folder which contains the test program(s).
-g: generates result file(s) and stores them in the same folder as the test program(s).
-t: runs the test program(s) and compares the result(s) with the expected result(s) (.res).
-r: recursively goes through <dir> and traverses all subfolders.
-n <num>: number of times the test(s) should run (can be used with -t only). Default is 1. If num=0, the test(s) run in an infinite loop (suitable for regression testing).
-f <file>: runs a single test program, <dir>/<file>.
-w <working-dir>: working directory where temporary files are created. Default is /tmp.
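As a sketch of how this option set fits together, the flags above could be modelled with argparse as follows. This is purely illustrative; the real run_test may parse its arguments differently, and the dest names used here are assumptions.

```python
# Hypothetical sketch of run_test's option set using argparse.
# -t and -g are mutually exclusive modes; the remaining flags
# mirror the defaults described in the documentation above.
import argparse

parser = argparse.ArgumentParser(prog="run_test")
parser.add_argument("-d", dest="dir", required=True,
                    help="folder containing the test program(s)")
mode = parser.add_mutually_exclusive_group()
mode.add_argument("-t", dest="test", action="store_true",
                  help="run tests and compare output with .res files")
mode.add_argument("-g", dest="generate", action="store_true",
                  help="(re)generate .res files")
parser.add_argument("-r", dest="recursive", action="store_true",
                    help="recurse into subfolders of <dir>")
parser.add_argument("-n", dest="num", type=int, default=1,
                    help="number of runs; 0 loops forever (with -t only)")
parser.add_argument("-f", dest="file",
                    help="run a single test program <dir>/<file>")
parser.add_argument("-w", dest="workdir", default="/tmp",
                    help="working directory for temporary files")

args = parser.parse_args(["-d", "v2-test-cases/", "-t", "-n", "2"])
```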
When run_test is invoked with -t, it runs the test program and stores the output in a temporary file. It then compares the temporary file with the expected result file for the test program. If there is no difference, the test is considered Passed; otherwise the test is considered Failed and the differences are shown on standard output, where they can be redirected to a file if the tester would like to examine them off-line.
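The run-capture-compare step can be sketched as below. This is an assumed reconstruction of the logic, not run_test's actual implementation; the function name run_and_compare is hypothetical.

```python
# Sketch of the compare step performed for -t (assumed logic):
# run the test program, capture its standard output, and diff it
# against the expected .res file.
import difflib
import subprocess
import sys

def run_and_compare(test_path, res_path):
    # Run the test program and capture its standard output.
    actual = subprocess.run([sys.executable, test_path],
                            capture_output=True, text=True).stdout
    with open(res_path) as f:
        expected = f.read()
    if actual == expected:
        return "Passed", ""
    # On mismatch, report the differences in unified diff format.
    diff = "".join(difflib.unified_diff(expected.splitlines(True),
                                        actual.splitlines(True),
                                        fromfile=res_path,
                                        tofile=test_path))
    return "Failed", diff
```

Note that -g can be thought of as the same run step, but writing the captured output to the .res file instead of diffing against it.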
v2-test-cases and v3-test-cases subdirectories
These folders contain a set of test cases, result files and packet traces. The trace files are used by the test cases. For each test case, there is a result file which contains the expected output for that test case. v2-test-cases contains test cases which can be run with python2, and v3-test-cases contains test cases for python3. The functionality of the test cases is the same for python2 and python3; the only differences are syntactic.
run_test examples
The following examples show how run_test can be used:
<plt-folder>/test$ python run_test.py -t -d v2-test-cases/  # runs all test cases in v2-test-cases
<plt-folder>/test$ python run_test.py -t -d v2-test-cases/ -f test-ip.py  # runs test-ip.py and compares the actual output with the expected results (test-ip.res)
<plt-folder>/test$ python run_test.py -g -d v2-test-cases/ -f test-ip.py  # re-generates test-ip.res
<plt-folder>/test$ python run_test.py -g -d v2-test-cases/  # re-generates .res files for all test programs inside the v2-test-cases folder
<plt-folder>/test$ python run_test.py -t -d v2-test-cases/ -n 0  # runs all tests inside the v2-test-cases folder in an infinite loop
Similar tests can be run for v3-test-cases:
<plt-folder>/test$ python3 run_test.py -t -d v3-test-cases/  # runs all test cases in v3-test-cases
<plt-folder>/test$ python3 run_test.py -t -d v3-test-cases/ -f test-ip.py  # runs test-ip.py and compares the actual output with the expected results (test-ip.res)
<plt-folder>/test$ python3 run_test.py -g -d v3-test-cases/ -f test-ip.py  # re-generates test-ip.res
<plt-folder>/test$ python3 run_test.py -g -d v3-test-cases/  # re-generates .res files for all test programs inside the v3-test-cases folder
<plt-folder>/test$ python3 run_test.py -t -d v3-test-cases/ -n 0  # runs all tests inside the v3-test-cases folder in an infinite loop
Habib Naderi
Tue, 10 Jul 14 (PDT)