NickolausDS edited this page Dec 16, 2011 · 1 revision

Unit Testing

Why Unit Tests are Important

Writing unit tests for your code has a number of benefits and very few drawbacks. When you are writing a complicated piece of code, and the design is far enough along that you know what your functions and components are supposed to do, you can write the tests before you write the code. Then you can run your tests as you go, and when they pass you know you are done (or you need to think of more cases to test). This is especially useful when the alternative is starting up the mud server, creating a character, creating an area, items, etc. just to see if your most recent change had the effect you wanted... Instead, you just run the tests and they tell you in a couple of seconds whether it worked. Tests are also useful for letting you know when your changes broke something, and for making sure that bugs you fixed stay fixed.

Getting Started

The first thing you will need to do is install Nose. If you already have Python 2.6 and easy_install (Setuptools), and don't want to set up a virtual environment, then you should be able to just say:

easy_install nose 

(you may need to run this with root permissions)

Hopefully everything went well. Now go to the project-level directory of ShinyMud (the folder that contains the README) and type:

nosetests

This should show you something like this:

............
----------------------------------------------------------------------
Ran 12 tests in 0.255s

OK

If you get something similar to this, you can skip down to the Understanding Test Results section below. If you have issues installing Nose, if nosetests is misbehaving (for example, trying to run under Python 2.5 instead of 2.6), or if you are just curious, stick around and I'll show you how to set up a virtual coding environment.

Coding in a Virtual Environment

Once upon a time there were some awesome coders. As you know, awesome coders work on many different projects, and sometimes work on several at the same time. These awesome coders were working on one project that had to run using an older version of a library, and one project that required a newer version. It was totally annoying having both installed, and having to change a bunch of settings around whenever they switched between projects, so these awesome coders decided to build virtual coding environments. Within each environment, all of their dependencies for one project could be met without interfering with the others.

To set up your virtual environment, you need to download virtualenv (google around for the latest version of virtualenv.py). Once it is downloaded, untar it and grab the virtualenv.py file out of it (you can go ahead and delete everything else in there; we just need the one file). Next, pick a location to keep your virtual environment (I recommend your home or sandbox directory) and copy/move the virtualenv.py file to that location. Then go to that directory in your terminal and run the following command:

python2.6 virtualenv.py --no-site-packages ENV  

(ENV is the name of the environment directory you are making, so you could call it ShinyEnv, or something more specific to the project if you want.)

There should now be a directory there named ENV (or whatever you called it). Inside is a brand spanking new Python 2.6 install, with its own site-packages and easy_install ready to go. Now, whenever you want to work within your virtual environment, just say:

source ENV/bin/activate

where ENV is your environment directory. Your prompt should change to remind you that you are in a virtual environment. From then on, any command you run from that terminal is looked up in your virtual environment first, before your regular commands are searched. When you want to leave the virtual environment, just use the deactivate command, or open a new terminal.
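
If you are ever unsure which Python a terminal is actually using (say, nosetests seems to be running under the wrong version), you can ask Python itself. This is just a quick sanity-check sketch:

```python
# Print the interpreter's install prefix and version. Inside an
# activated virtual environment, sys.prefix points at your ENV
# directory rather than the system-wide Python install.
import sys

print(sys.prefix)
print(sys.version)
```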

Whew! Ok, you should have a working environment now. Try easy_install nose again (make sure your virtual environment is activated in the terminal you're using), then run nosetests in the project directory.

Understanding Test Results

So, you have Nose installed, and your tests seem to be running, but what does it all mean? Presumably if everything is working you should see something like:

............
----------------------------------------------------------------------
Ran 12 tests in 0.255s

OK

Each test that is run will print a single character based on its result. A line (or several lines) of these characters will show up in your output, followed by a line of dashes and then a summary of the results.
The possible characters are these:

  • "." This means the test passed successfully! Everything is working fine.
  • "E" The code (or test) raised an unexpected Exception. This is bad. Either the test is doing something wrong, or the code is failing in an unpredicted way. Whenever we get an Exception (E), we also get a traceback telling us where in the code the exception occurred.
  • "F" The test failed. This is still bad, but it means one of the conditions we test for was not met, and we know exactly what it is. When this happens we get a message specific to which part of which test failed, so we can go see where it went wrong.

Error example:

E..
======================================================================
ERROR: test_something (commands.test_build_commands.TestBuildCommands)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/murph/sandbox/shinymud/tests/commands/test_build_commands.py", line 14, in test_something
    do_something(2)
  File "/Users/murph/sandbox/shinymud/src/shinymud/models/user.py", line 10, in do_something
    raise Exception("Oh fuck, something went wrong!")
Exception: Oh fuck, something went wrong!

----------------------------------------------------------------------
Ran 3 tests in 0.066s

FAILED (errors=1)

Failure example:

F..
======================================================================
FAIL: test_something (commands.test_build_commands.TestBuildCommands)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/murph/sandbox/shinymud/tests/commands/test_build_commands.py", line 14, in test_something
    self.assertEqual(55, "banana", 'Uh oh! 55 != "banana"')
AssertionError: Uh oh! 55 != "banana"

----------------------------------------------------------------------
Ran 3 tests in 0.065s

FAILED (failures=1)

Writing your first Unit Test

Probably the first thing you will notice in the tests directory is that we have a collection of directories and files grouping tests for similar things. Ideally this should look pretty parallel to our src directory, so that it is easy to find the test we wrote for a given piece of code, or the piece of code that relates to a certain test. If we open one of these files, we see a class that inherits from unittest.TestCase. Every test we create will be a function in one of these classes, with a name that begins with "test_". The class also has setUp and tearDown functions, which set up any variables needed for the tests and clean them up afterwards.
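
To make the setUp/tearDown pattern concrete, here is a small sketch. The TempStore class is invented purely for illustration; the point is that setUp runs before every test_* method and tearDown runs after, so each test starts from a clean fixture:

```python
import unittest

class TempStore(object):
    """A tiny made-up fixture class, just for demonstration."""
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)
    def clear(self):
        self.items = []

class TestTempStore(unittest.TestCase):
    def setUp(self):
        # Runs before every test_* method: each test gets a fresh store.
        self.store = TempStore()

    def tearDown(self):
        # Runs after every test_* method, even if the test failed.
        self.store.clear()

    def test_add(self):
        self.store.add('sword')
        self.assertEqual(['sword'], self.store.items)
```

nosetests will discover a class like this automatically, as long as the file name and test names follow the test_ convention.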

So let's begin with an example. Let's say I need to write a function that takes in a string and makes it look like a title. We can build a test covering as many cases as we can think of, and when it passes we know that the function is doing what we currently expect it to do. So, here is our test class and our tests:

import unittest
from wordfunctions import titleize

class TestWordFunctions(unittest.TestCase):
    def setUp(self):
        """don't need to do anything"""
        pass

    def tearDown(self):
        """don't need to do anything"""
        pass

    def test_titleize(self):
        a_string = titleize("the neverending story")
        expected_result = "The Neverending Story"
        self.assertEqual(expected_result, a_string, "titleize failed: expected '%s', but got '%s'" % (expected_result, a_string))

We see here that we don't have any variables to set up or tear down; we just state one case to test for our titleize function, and a message to show if it fails. Even before we start writing the titleize function, we have a pretty good idea of what it will need to do (capitalize words, except little ones like 'a', 'the', 'of', etc.). We can add more tests as we think of them. If they are similar, we can add them to the same function. If they are for a different function, or test the same function in a different way, we should write a separate function. Let's add a few more cases:

...
    def test_titleize(self):
        string1  = titleize("the neverending story")
        expected1 = "The Neverending Story"
        self.assertEqual(expected1, string1, "titleize failed: expected '%s', but got '%s'" % (expected1, string1))
        
        string2 = titleize("i've got a lovely bunch of coconuts")
        expected2 = "I've Got a Lovely Bunch of Coconuts"
        self.assertEqual(expected2, string2, "titleize failed: expected '%s', but got '%s'" % (expected2, string2))
...

Note that if we add it to the same function, it will show up as a single test that passes or fails (or errors out). If we write another function (called test_titleize2() or something), it will have its own character in the results list and be considered a separate test. It's up to you to decide which you want. Normally you would write about one test function for each path the code might take, and have that function try a number of values for that path. But keep the tests from getting complex or confusing. Remember: they're here to help.
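
For completeness, here is one possible titleize implementation that would satisfy the tests above. This is just a sketch; the exact list of "small words" is an assumption you would tune to your own style:

```python
# Words that stay lowercase in a title, unless they come first.
# This particular list is an assumption, not part of ShinyMud.
SMALL_WORDS = set(['a', 'an', 'the', 'and', 'but', 'or', 'of', 'in', 'on', 'to'])

def titleize(string):
    """Capitalize the words of a string as if it were a title.

    Small words (articles, short prepositions, etc.) are left lowercase,
    except when they are the first word.
    """
    result = []
    for i, word in enumerate(string.split()):
        if i != 0 and word.lower() in SMALL_WORDS:
            result.append(word.lower())
        else:
            # Capitalize only the first letter, so internal apostrophes
            # survive intact ("i've" becomes "I've").
            result.append(word[0].upper() + word[1:])
    return ' '.join(result)
```

With this in place, both test cases above pass: "the neverending story" becomes "The Neverending Story", and "i've got a lovely bunch of coconuts" becomes "I've Got a Lovely Bunch of Coconuts".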

Testing Policies

We don't have much of a testing policy yet, but here are a few guidelines:

  • Write tests as you go. I know we all forget, or think they aren't needed for every little change, but in the long run they help you code better. Writing tests makes you think through the different cases your function must handle, and focuses you on what your function is ''supposed'' to do, instead of ''how'' it currently does it.
  • Write a test for each bug you fix. If you fixed something that was more than a typo, there is a good chance that someone who changes that piece of code later on might reintroduce the error you just fixed. If you put a test case in place checking for it, they will know as soon as they make changes (and the test fails).
  • Run the tests before you push your code. Nobody likes pulling the latest code fresh off the server and finding that nothing works. Running the tests is an easy way to make sure that everything (that you know of) is working correctly, and it keeps your coworkers from freaking out when their stuff unexpectedly stops working.
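
The "write a test for each bug you fix" guideline in practice looks something like this sketch. The function and the bug are hypothetical, invented just to show the shape of a regression test:

```python
import unittest

def parse_gold(text):
    # Hypothetical bug fix: an earlier version crashed on leading
    # whitespace, so we strip before converting.
    return int(text.strip())

class TestParseGold(unittest.TestCase):
    def test_leading_whitespace_regression(self):
        # Regression test for the (made-up) whitespace crash: if someone
        # removes the strip() later, this test fails immediately.
        self.assertEqual(100, parse_gold('  100'))
```

The test documents the bug as well as guarding against it: anyone who breaks the fix sees exactly which case regressed.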

More Information

If you want to know more about unit testing, check out the documentation for Python's built-in unittest module.
