Running the sample tests


The comment at the top of the sample tests.py file states that the two tests will both pass when you run "manage.py test". As a reminder, the generated file contains approximately the following (this is the boilerplate that Django 1.1's startapp command produces; minor details may vary between versions):
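"""
This file demonstrates two different styles of tests (one doctest and one
unittest). These will both pass when you run "manage.py test".

Replace these with more appropriate tests for your application.
"""

from django.test import TestCase

class SimpleTest(TestCase):
    def test_basic_addition(self):
        """
        Tests that 1 + 1 always equals 2.
        """
        self.failUnlessEqual(1 + 1, 2)

__test__ = {"doctest": """
Another way to test that 1 + 1 is equal to 2.

>>> 1 + 1 == 2
True
"""}

So let's see what happens if we try that: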

kmt@lbox:/dj_projects/marketr$ python manage.py test 
Creating test database... 
Traceback (most recent call last): 
  File "manage.py", line 11, in <module> 
    execute_manager(settings) 
  File "/usr/lib/python2.5/site-packages/django/core/management/__init__.py", line 362, in execute_manager 
    utility.execute() 
  File "/usr/lib/python2.5/site-packages/django/core/management/__init__.py", line 303, in execute 
    self.fetch_command(subcommand).run_from_argv(self.argv) 
  File "/usr/lib/python2.5/site-packages/django/core/management/base.py", line 195, in run_from_argv 
    self.execute(*args, **options.__dict__) 
  File "/usr/lib/python2.5/site-packages/django/core/management/base.py", line 222, in execute 
    output = self.handle(*args, **options) 
  File "/usr/lib/python2.5/site-packages/django/core/management/commands/test.py", line 23, in handle 
    failures = test_runner(test_labels, verbosity=verbosity, interactive=interactive) 
  File "/usr/lib/python2.5/site-packages/django/test/simple.py", line 191, in run_tests 
    connection.creation.create_test_db(verbosity, autoclobber=not interactive) 
  File "/usr/lib/python2.5/site-packages/django/db/backends/creation.py", line 327, in create_test_db 
    test_database_name = self._create_test_db(verbosity, autoclobber) 
  File "/usr/lib/python2.5/site-packages/django/db/backends/creation.py", line 363, in _create_test_db 
    cursor = self.connection.cursor() 
  File "/usr/lib/python2.5/site-packages/django/db/backends/dummy/base.py", line 15, in complain 
    raise ImproperlyConfigured, "You haven't set the DATABASE_ENGINE setting yet." 
django.core.exceptions.ImproperlyConfigured: You haven't set the DATABASE_ENGINE setting yet.

Oops, we seem to have gotten ahead of ourselves here. We created our new Django project and application, but never edited the settings file to specify any database information. Clearly we need to do that in order to run the tests.

But will the tests use the production database we specify in settings.py? That could be worrisome, since we might at some point code something in our tests that we wouldn't necessarily want to do to our production data. Fortunately, it's not a problem. The Django test runner creates an entirely new database for running the tests, uses it for the duration of the tests, and deletes it at the end of the test run. The name of this database is test_ followed by the DATABASE_NAME specified in settings.py; if DATABASE_NAME is marketr, for example, the tests run against a database named test_marketr. So running tests will not interfere with production data.

In order to run the sample tests.py file, we first need to set appropriate values in settings.py for DATABASE_ENGINE, DATABASE_NAME, and whatever else may be required for the database we are using. Now would also be a good time to add our survey application and django.contrib.admin to INSTALLED_APPS, as we will need both of those as we proceed. The relevant settings might look something like this (the SQLite choice and file path here are only examples; use whatever database you prefer):
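DATABASE_ENGINE = 'sqlite3'    # any supported backend works; SQLite is simplest
DATABASE_NAME = '/dj_projects/marketr/marketr.db'    # for SQLite this is a file path

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.admin',
    'survey',
)

Once those changes have been made to settings.py, manage.py test works better: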

kmt@lbox:/dj_projects/marketr$ python manage.py test 
Creating test database... 
Creating table auth_permission 
Creating table auth_group 
Creating table auth_user 
Creating table auth_message 
Creating table django_content_type 
Creating table django_session 
Creating table django_site 
Creating table django_admin_log 
Installing index for auth.Permission model 
Installing index for auth.Message model 
Installing index for admin.LogEntry model 
................................... 
---------------------------------------------------------------------- 
Ran 35 tests in 2.012s 

OK 
Destroying test database...

That looks good. But what exactly got tested? Towards the end it says Ran 35 tests, so there were certainly more tests run than the two tests in our simple tests.py file. The other 33 tests are from the other applications listed by default in settings.py: auth, content types, sessions, and sites. These Django "contrib" applications ship with their own tests, and by default, manage.py test runs the tests for all applications listed in INSTALLED_APPS.

Note

Note that if you do not add django.contrib.admin to the INSTALLED_APPS list in settings.py, then manage.py test may report some test failures. With Django 1.1, some of the tests for django.contrib.auth rely on django.contrib.admin also being included in INSTALLED_APPS in order for the tests to pass. That inter-dependence may be fixed in the future, but for now it is easiest to avoid the possible errors by including django.contrib.admin in INSTALLED_APPS from the start. We will want to use it soon enough anyway.

It is possible to run just the tests for certain applications. To do this, specify the application names on the command line. For example, to run only the survey application tests:

kmt@lbox:/dj_projects/marketr$ python manage.py test survey 
Creating test database... 
Creating table auth_permission 
Creating table auth_group 
Creating table auth_user 
Creating table auth_message 
Creating table django_content_type 
Creating table django_session 
Creating table django_site 
Creating table django_admin_log 
Installing index for auth.Permission model 
Installing index for auth.Message model 
Installing index for admin.LogEntry model 
.. 
---------------------------------------------------------------------- 
Ran 2 tests in 0.039s 

OK 
Destroying test database... 

There, Ran 2 tests looks right for our sample tests.py file. But what about all those messages about tables being created and indexes being installed? Why were the tables for these applications created when their tests were not going to be run? The reason is that the test runner does not know what dependencies may exist between the application(s) that are going to be tested and the others listed in INSTALLED_APPS that are not going to be tested.

For example, our survey application could have a model with a ForeignKey to the django.contrib.auth User model, and tests for the survey application may rely on being able to add and query User entries. This would not work if the test runner neglected to create tables for the applications excluded from testing. Therefore, the test runner creates the tables for all applications listed in INSTALLED_APPS, even those for which tests are not going to be run.
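A sketch of such a dependency might look like the following (purely hypothetical code, not part of the application we are building; the model and test names are invented for illustration):

# In survey/models.py (hypothetical):
from django.contrib.auth.models import User
from django.db import models

class SurveyResponse(models.Model):
    # Links each response to the User who submitted it, so the
    # auth_user table must exist for this model to be usable.
    user = models.ForeignKey(User)
    submitted = models.DateTimeField(auto_now_add=True)

# In survey/tests.py (hypothetical):
from django.contrib.auth.models import User
from django.test import TestCase
from survey.models import SurveyResponse

class SurveyResponseTest(TestCase):
    def test_response_links_to_user(self):
        # Creating a User touches the auth_user table, which is why the
        # test runner creates tables for django.contrib.auth even when
        # only the survey application's tests are being run.
        user = User.objects.create_user('kmt', 'kmt@example.com', 'secret')
        response = SurveyResponse.objects.create(user=user)
        self.assertEqual(response.user.username, 'kmt')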

We now know how to run tests, how to limit the testing to just the application(s) we are interested in, and what a successful test run looks like. But, what about test failures? We're likely to encounter a fair number of those in real work, so it would be good to make sure we understand the test output when they occur. In the next section, then, we will introduce some deliberate breakage so that we can explore what failures look like and ensure that when we encounter real ones, we will know how to properly interpret what the test run is reporting.