Command line options for running tests


Beyond specifying the exact applications to test on the command line, what other options are there for controlling the behavior of manage.py test? The easiest way to find out is to try running the command with the option --help:

kmt@lbox:/dj_projects/marketr$ python manage.py test --help
Usage: manage.py test [options] [appname ...]

Runs the test suite for the specified applications, or the entire site if no apps are specified.

Options:
  -v VERBOSITY, --verbosity=VERBOSITY
                        Verbosity level; 0=minimal output, 1=normal output,
                        2=all output
  --settings=SETTINGS   The Python path to a settings module, e.g.
                        "myproject.settings.main". If this isn't provided, the
                        DJANGO_SETTINGS_MODULE environment variable will 
                        be used.
  --pythonpath=PYTHONPATH
                        A directory to add to the Python path, e.g.
                        "/home/djangoprojects/myproject".
  --traceback           Print traceback on exception
  --noinput             Tells Django to NOT prompt the user for input of 
                        any kind.
  --version             show program's version number and exit
  -h, --help            show this help message and exit

Let's consider each of these in turn (excepting help, as we've already seen what it does):

Verbosity

Verbosity is a numeric value between 0 and 2. It controls how much output the tests produce. The default value is 1, so the output we have seen so far corresponds to specifying -v 1 or --verbosity=1. Setting verbosity to 0 suppresses all of the messages about creating the test database and tables, but not summary, failure, or error information. If we correct the last doctest failure introduced in the previous section and re-run the tests specifying -v0, we will see:

kmt@lbox:/dj_projects/marketr$ python manage.py test survey -v0 
====================================================================== 
ERROR: test_basic_addition (survey.tests.SimpleTest) 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "/dj_projects/marketr/survey/tests.py", line 15, in test_basic_addition 
    self.failUnlessEqual(1 + 1, sum_args(1, 1)) 
NameError: global name 'sum_args' is not defined 

---------------------------------------------------------------------- 
Ran 2 tests in 0.008s 

FAILED (errors=1) 

Setting verbosity to 2 produces a great deal more output. If we fix this remaining error and run the tests with verbosity set to its highest level, we will see:

kmt@lbox:/dj_projects/marketr$ python manage.py test survey --verbosity=2 
Creating test database... 
Processing auth.Permission model 
Creating table auth_permission 
Processing auth.Group model 
Creating table auth_group 

[...more snipped...]

Creating many-to-many tables for auth.Group model 
Creating many-to-many tables for auth.User model 
Running post-sync handlers for application auth 
Adding permission 'auth | permission | Can add permission' 
Adding permission 'auth | permission | Can change permission' 

[...more snipped...]

No custom SQL for auth.Permission model 
No custom SQL for auth.Group model 

[...more snipped...]

Installing index for auth.Permission model 
Installing index for auth.Message model 
Installing index for admin.LogEntry model 
Loading 'initial_data' fixtures... 
Checking '/usr/lib/python2.5/site-packages/django/contrib/auth/fixtures' for fixtures... 
Trying '/usr/lib/python2.5/site-packages/django/contrib/auth/fixtures' for initial_data.xml fixture 'initial_data'... 
No xml fixture 'initial_data' in '/usr/lib/python2.5/site-packages/django/contrib/auth/fixtures'. 

[....much more snipped...]
No fixtures found. 
test_basic_addition (survey.tests.SimpleTest) ... ok 
Doctest: survey.tests.__test__.doctest ... ok 

---------------------------------------------------------------------- 
Ran 2 tests in 0.004s 

OK 
Destroying test database...

As you can see, at this level of verbosity the command reports in excruciating detail everything it does to set up the test database. In addition to the creation of database tables and indexes that we saw earlier, we now see that the database setup phase includes:

  1. Running post-syncdb signal handlers. The django.contrib.auth application, for example, uses this signal to automatically add permissions for models as each application is installed. Thus you see messages about permissions being created as the post-syncdb signal is sent for each application listed in INSTALLED_APPS. (A sketch of how an application connects such a handler appears just after this list.)

  2. Running custom SQL for each model that has been created in the database. Based on the output, it does not look like any of the applications in INSTALLED_APPS use custom SQL.

  3. Loading initial_data fixtures. Initial data fixtures are a way to automatically pre-populate the database with some constant data. None of the applications we have listed in INSTALLED_APPS make use of this feature, but a great deal of output is produced as the test runner looks for initial data fixtures, which may be found under any of several different names. There are messages for each possible file that is checked and for whether anything was found. This output might come in handy at some point if we run into trouble with the test runner finding an initial data fixture (we'll cover fixtures in detail in Chapter 3), but for now this output is not very interesting.
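
As promised above, here is a minimal sketch of how an application can hook into the post-syncdb signal. The module name survey/management.py and the handler shown are hypothetical, not code from the sample project, but the signal connection mechanism is the same one django.contrib.auth uses to add permissions:

# survey/management.py -- a hypothetical module, not part of the sample
# project. syncdb (and therefore the test runner's database setup)
# imports each installed application's management module, so handlers
# connected here are registered before the post-syncdb signal is sent.
from django.db.models.signals import post_syncdb

import survey.models

def survey_post_syncdb(sender, verbosity=1, **kwargs):
    # sender is the models module of the application that was just
    # synced; connecting with sender=survey.models means this handler
    # only reacts to our own application.
    if verbosity >= 1:
        print 'post-syncdb handler ran for', sender.__name__

post_syncdb.connect(survey_post_syncdb, sender=survey.models)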

Once the test runner finishes initializing the database, it settles down to running the tests. At verbosity level 2, the line of dots, Fs, and Es we saw previously is replaced by a more detailed report on each test as it is run: the name of the test is printed, followed by three dots and then the result, which will be either ok, ERROR, or FAIL. Details of any errors or failures are still printed at the end of the test run. So as you watch a long test run proceed with verbosity set to 2, you will be able to see which tests are running into problems, but you will not get the details of why until the run completes.
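
Note that the verbosity setting is not limited to the command line. If you ever need to kick off the test run from Python code rather than from a shell, Django's call_command utility accepts the same options as keyword arguments. The following is only a rough sketch of that equivalence, not something the sample project needs:

# Roughly equivalent to "python manage.py test survey -v0". This assumes
# DJANGO_SETTINGS_MODULE (or settings) has already been configured, and
# note that the test command calls sys.exit with a non-zero value if any
# tests fail, so this line will not return in that case.
from django.core.management import call_command

call_command('test', 'survey', verbosity=0, interactive=True)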

Settings

You can pass the settings option to the test command to specify a settings file to use instead of the project default one. This can come in handy if, for example, you want to run the tests against a database different from the one you normally use, either to speed up testing or to verify that your code runs correctly on different databases.

Note the help text for this option states that the DJANGO_SETTINGS_MODULE environment variable will be used to locate the settings file if the settings option is not specified on the command line. This is only accurate when the test command is being run via the django-admin.py utility. When using manage.py test, the manage.py utility takes care of setting this environment variable to specify the settings.py file in the current directory.
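
As a concrete illustration, one common approach is to keep a second settings module next to settings.py that reuses the regular settings but swaps in SQLite, which is typically faster for test runs. The file name test_settings.py and the values below are assumptions for the sake of the example, not files from the sample project:

# test_settings.py -- hypothetical settings module used only for test runs.
# These are the single-database settings names used by Django 1.1; for the
# sqlite3 backend the test runner builds its test database in memory, so
# DATABASE_NAME does not matter here.
from settings import *

DATABASE_ENGINE = 'sqlite3'
DATABASE_NAME = ''

With that file in the project directory, python manage.py test survey --settings=test_settings runs the tests against SQLite while leaving the normal settings untouched (when using django-admin.py instead, you could set DJANGO_SETTINGS_MODULE in the environment, as noted above).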

Pythonpath

This option allows you to append an additional directory to the Python path used during the test run. It is primarily useful with django-admin.py, where it is often necessary to add the project path to the standard Python path. The manage.py utility takes care of this automatically, so the option is not generally needed when using manage.py test.
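
For example, assuming the project lives at /dj_projects/marketr as in the earlier transcripts, an invocation along the following lines (the paths are illustrative) makes both the settings module and the survey application importable when running under django-admin.py:

django-admin.py test survey --settings=settings --pythonpath=/dj_projects/marketr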

Traceback

This option is not actually used by the test command. It is inherited as one of the default options supported by all django-admin.py (and manage.py) commands, but the test command never checks for it. Thus you can specify it, but it will have no effect.

Noinput

This option tells the test runner not to prompt for user input, which raises the question: when would the test runner require user input? We haven't encountered that so far. The answer is that the test runner prompts for confirmation during test database creation if a database with the test database name already exists. For example, if you press Ctrl + C during a test run, the test database may not be destroyed, and you may encounter a message like this the next time you attempt to run tests:

kmt@lbox:/dj_projects/marketr$ python manage.py test 
Creating test database... 
Got an error creating the test database: (1007, "Can't create database 'test_marketr'; database exists") 
Type 'yes' if you would like to try deleting the test database 'test_marketr', or 'no' to cancel: 

If --noinput is passed on the command line, the prompt is not printed and the test runner proceeds as if the user had entered 'yes' in response. This is useful if you want to run the tests from an unattended script and ensure that the script does not hang while waiting for user input that will never be entered.
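
For example, a nightly cron job or continuous integration script might use something like the following (a sketch, not a command from the sample project), combining --noinput with a low verbosity so that the job produces output only when something goes wrong:

python manage.py test --noinput --verbosity=0

Since the test command exits with a non-zero status when any test fails, the calling script can simply check the return code to decide whether to report a problem.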

Version

This option reports the version of Django in use and then exits. Thus, when using --version with manage.py or django-admin.py, you do not actually need to specify a subcommand such as test. In fact, at the time of writing this book, due to a bug in the way Django processes command options, if you do specify both --version and a subcommand, the version is printed twice. That will likely be fixed at some point.