
Technology Blog


Django and Continuous Integration


There are plenty of good reasons to care about testing in our Django applications, obviously. As a former system administrator, though, I have a few ideals engraved straight into my brain. The main idiom:

"Never do the same thing three times. Do it twice, the third time? Script it!"

Hence, my main reason for unit tests and continuous integration is, simply put, laziness. I hate manually clicking through the site I'm working on to make sure my last commit hasn't broken anything crucial. Then the next system administration idiom comes into play:

"If you just did something relatively complex completely from memory and by hand, you probably did something incorrectly."

So not only is my desire to be lazy not satisfied, now I'm also paranoid that I broke something critical. Obviously, my secondary reason for proper testing is so I can sleep at night. When you're paranoid about a new feature or bug fix, you tend to overcheck, and eventually you start second-guessing yourself. That isn't a big deal in itself, but it costs you time, which costs somebody, somewhere, some money. As a developer you probably don't think much about the money part, and that's okay; I'm guilty of that, too. I do, however, think about how incredibly boring it is to fake-purchase something from an ecommerce site I'm working on, and what a waste of my time that generally is.

I personally hate being bored, so let's go ahead and chalk that up as reason number three for testing, and throw out another old idiom:

"The more scripts you write to rid yourself of repetitive tasks, the more time you have for games."

Now that we've established the "why" of testing, it's time to tackle the "how." Thankfully, Django makes testing fairly straightforward and easy. First, though, I highly recommend utilizing django-test-extensions, which gives you XML output of test results and code coverage analysis. The XML output in particular will come in handy if you decide to leverage a build server such as Hudson, since Hudson can parse it. At Imaginary Landscape we currently use Hudson as our Continuous Integration/build server application. I know BuildBot is out there, and I don't have anything against it, really. A lot of Python users like it because it's written in Python, but the project doesn't do a great job of representing itself: its wiki currently has one screenshot, which doesn't show much of the project and leaves a lot to be desired. I'm sure BuildBot serves its purpose quite well, but the documentation and general information didn't give me enough confidence to go diving into it. I also had previous experience with Hudson, so I obviously had a bit of slant toward sticking with what I'm familiar with.

Our setup involves having Hudson running as a non-privileged user inside of a Tomcat container. To start it, I simply run a script I created, as follows:

# Point at the JDK, tell Hudson where its home directory is, and cap the JVM at 256MB
export JAVA_HOME=/PATH/TO/java/jdk1.6.0_16/
export CATALINA_OPTS="-DHUDSON_HOME=/PATH/TO/hudson/hudson_home/ -Xmx256m"
# Start Tomcat as the non-privileged "build" user. Note that "su -" gives a
# login shell, so these variables also need to be visible to the build user
# (e.g. exported from its shell profile).
sudo su - build -c '/PATH/TO/apache-tomcat-6.0.20/bin/catalina.sh start'

If you're unsure of how to set up a Java application in a servlet container, I recommend reviewing the Hudson documentation; that should get you up to speed fairly quickly. Now you'll have Hudson running on port 8080 (across all interfaces on Linux), and for our situation that was fine. You can also set up Apache in front and use mod_proxy_ajp's ProxyPass to proxy to Tomcat's AJP connector on port 8009, serving Hudson out via Apache that way. There are about 15 other ways to do that as well; whatever suits your fancy, but that's another article for another day.
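For reference, a minimal Apache virtual host for the AJP approach might look something like this (the hostname is illustrative, and this assumes mod_proxy and mod_proxy_ajp are enabled):

```apache
<VirtualHost *:80>
    ServerName hudson.example.com

    # Hand every request off to Tomcat's AJP connector on port 8009
    ProxyPass / ajp://localhost:8009/
</VirtualHost>
```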

Now the tricky part begins. You have Hudson running, you have django-test-extensions installed, and, hopefully, you have some unit tests set up in your Django application. With our latest project, an ecommerce site, I wanted to get as close to human testing as possible, so I chose to rebuild the site environment for each build. Basically, I "deploy" the site on every build and run all the unit tests it contains. It sounds fairly straightforward, and it is; it's just not incredibly easy. You'll definitely want to leverage virtualenv to its fullest for this.

You'll start off in Hudson by clicking "New Job." Choose to build a freestyle software project and give the job a name. Now we have a slew of settings to choose from! I recommend "Discard Old Builds" to save some disk space, but choose whatever constraints best suit your needs. You can set up SCM polling here as well with SVN or CVS, and there's also a Git plugin that can be installed (Manage Hudson -> Plugins) if your team is currently using Git. The most important part comes at the "Build" section. Here you'll want to "Add build step"; I generally go with "Execute shell" (essentially executing a bash script).

Being comfortable with bash will certainly make your life easier at this point. Mine starts off as follows:

cd $WORKSPACE
virtualenv PROJ_NAME_$BUILD_NUMBER
. PROJ_NAME_$BUILD_NUMBER/bin/activate

(Replace PROJ_NAME throughout with a name of your choosing.)

This builds me a virtualenv so that I don't have to worry about trashing my global site-packages, and I don't have to worry about permission issues either. The convenience of having a self-contained environment makes this task a *lot* easier. After that section, I generally easy_install all of my dependencies just to get them out of the way. You might consider it worthwhile to set up a small package repo of your own to prevent test failures due to network issues. Since we're still in our $WORKSPACE directory, we have our repo checked out here; I personally poll trunk only, so there's a "trunk" directory in my $WORKSPACE. I follow my dependency installation with:
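The dependency step itself is just a run of easy_install calls inside the activated virtualenv; the package names below are only examples, not our actual requirements list:

```shell
# Still inside the activated virtualenv, so everything lands in the
# build's own site-packages, not the global one.
easy_install Django
easy_install django-test-extensions
# Pointing easy_install at your own package repo with -f/--find-links
# is what guards against network-related build failures, e.g.:
# easy_install -f http://packages.example.com/eggs/ some-dependency
```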

cd trunk
python2.6 setup.py build
python2.6 setup.py install
cd ../
cd PROJ_NAME_$BUILD_NUMBER

This installs the application I'm currently concerned with into my Python path, so that it's ready for testing. Finally, I keep a location in my Hudson home path for the settings.py files that I'll need to copy into place to fire off my tests. Once the settings files are in place, I finish off the script with the testing:
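Concretely, the copy amounts to something like this (the conf/ directory under $HUDSON_HOME is my own convention, not anything Hudson mandates):

```shell
# We're inside PROJ_NAME_$BUILD_NUMBER at this point, so the settings file
# lands where --pythonpath=`pwd` will find it in the test commands below.
cp $HUDSON_HOME/conf/PROJ_NAME/settings.py ./settings.py
```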

django-admin.py test APPLICATIONS --xml --pythonpath=`pwd` --settings=settings
django-admin.py test APPLICATIONS --coverage --pythonpath=`pwd` --settings=settings

This runs the tests for whichever applications I specify, produces an XML report, and then runs a second time to produce a code coverage analysis. I don't recommend running the tests without specifying applications, because then Django's own base unit tests run as well, which skews your results with stuff you don't care about. Now that we have our bash script in place, there's one more setting we'll need to adjust in Hudson. Under Post-Build Actions, select "Publish JUnit test result report" and point it at wherever your XML files will end up, for instance:

PROJ_NAME_$BUILD_NUMBER/etc/django/PROJ_NAME_$BUILD_NUMBER/temp/xml/*.xml

Click save and you're ready to get rolling! If you're like me, your initial setup will probably hit some snags. If you're using PIL at all, I recommend installing it globally, because it's a pain to install in a way that won't produce some form of error that breaks your build. A bonus to all of this is that you end up with a fairly decent deployment plan, or script, at the end of it: you'll always know exactly what needs to be set up to get your project running in production. You'll also know your tests are being run with each commit, and you'll find bugs more quickly as a result. The less time you spend looking for bugs, the more time you can spend fixing them.

Keep in mind that this isn't the only method of testing your application. It's also a good idea to leverage tools like Selenium or Windmill to exercise your site with a real browser, so you know your Ajax/JS is working properly on the client side as well. You'll also want to perform load testing to find potential bottlenecks in your application. I personally recommend Pylot for quick-and-dirty load testing (disclaimer: I've committed to that project before, so I'm biased; there are tons of alternatives to Pylot if you want to go that route).

