Recently I’ve been interested in improving my coding practices, as everyone should be. There are a few new tools that I have been implementing that have already paid off in improving my end product. In this blog post, I will describe some of these tools and show how I got them up and running.

###Unit Testing

The first thing that everyone should do is unit testing. Unit testing ensures that pieces of the code behave the way they are expected to behave. This is done by writing simple test routines that analyze the output of the code of interest.

Below is a simple example.

Routine to test:

```python
def multiply_numbers(num1, num2):
    return num1 * num2
```

Test routine:

```python
def test_multiply_numbers_with_scalars():
    res = multiply_numbers(2, 2)
    assert res == 4
```

This test ensures that the result of the multiply_numbers function is the value we expect it to be. For simple one-liners like multiply_numbers there is little need for testing, but functions and methods in the real world are almost always more complicated, and as the complexity increases, testing can save a ton of time that would otherwise be spent debugging.
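As an illustration of testing a function with more than one behavior, here is a hedged sketch (the divide_numbers function and its tests are hypothetical, not part of my project):

```python
# Hypothetical routine with an edge case worth testing.
def divide_numbers(num1, num2):
    if num2 == 0:
        raise ValueError("cannot divide by zero")
    return num1 / num2

# One test for the normal path...
def test_divide_numbers_with_scalars():
    assert divide_numbers(6, 3) == 2

# ...and one for the edge case, confirming the error is raised.
def test_divide_numbers_by_zero():
    try:
        divide_numbers(1, 0)
        assert False, "expected a ValueError"
    except ValueError:
        pass
```

Writing a separate test per behavior like this makes it obvious which case broke when a test fails.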

There are many libraries out there for testing Python code, but nose looked the most appealing to me. Setting up nose is detailed in its documentation. Personally, I used pip to install it:

```shell
pip install nose
```

I then created a “tests” directory in my repository root and started filling it with short test routines to exercise every function and code path in my project. To test the code with nose, simply type nosetests at the command line from the repository root. nose searches through the repository looking for test code and executes the tests it finds. Here’s an example of nose successfully executing all 12 of the test routines it found in my repository path:

```shell
my_project$ nosetests
Ran 12 tests in 0.014s
```


If you’re curious what other testing packages exist for Python, the Hitchhiker’s Guide to Python gives an excellent introduction to unit testing in Python.


###Travis-CI

Travis-CI is a continuous integration service that runs your automated tests every time you push to a GitHub repository. This tool, free for open source projects, is great if you forget to run nosetests after adding new functionality, and also if you’re collaborating with others.

It will probably take a little time to get Travis set up for your particular needs. The Travis-CI docs are a good place to start, but I found their built-in Python versions did not satisfy all of the dependencies for just about any of my projects. Instead of building Python and the dependencies (e.g. numpy, scipy, pandas, etc.) from scratch, Miniconda offers a terrific solution. I found this documentation to be quite helpful in setting up Travis-CI to work with my code. Once you get it working, Travis-CI provides a pretty badge that you can put in your repository README to tell the world that your code has been tested, and (hopefully) it passed!
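For reference, a Miniconda-based .travis.yml might look roughly like the sketch below. The download URL, Python version, and package list are illustrative and will need adjusting for your own project:

```yaml
language: python
python:
  - "3.6"
install:
  # Download and install Miniconda rather than building dependencies from source.
  - wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
  - bash miniconda.sh -b -p $HOME/miniconda
  - export PATH="$HOME/miniconda/bin:$PATH"
  # Create an environment containing the heavy scientific dependencies.
  - conda create -q -n test-env python=$TRAVIS_PYTHON_VERSION numpy scipy pandas nose
  - source activate test-env
script:
  - nosetests
```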

Here’s an example from one of my repositories of what the badge looks like.

Build Status

If you push a commit to GitHub that doesn’t pass testing, the badge turns red and says “failed”. Don’t worry, you’ll also receive an email notification informing you of the change in status so you can quickly repair the damage.


###Coveralls

Now you have testing set up, and Travis-CI performing continuous integration to make sure your latest commit behaves the way you expect! But how much of the code is actually being tested? Travis-CI will tell you if all of the tests you wrote ran successfully, but what if you are only testing a tiny fraction of one of your routines? That’s where Coveralls comes into play. Coveralls automatically analyzes your tests and code to see what fraction of your code is actually covered by your tests. Not only does it check whether all of your subroutines and methods are being tested, it also looks into the conditional branches to make sure the exceptional cases are handled (i.e., it checks that you’re testing all of the edge cases). Once you get Coveralls up and running, it will tell you what percent of your code you are actually testing, and through its site you can zoom in and easily see which parts of your code your tests are missing. This is a great tool for making sure your tests cover all the cases.
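To make the idea of partial coverage concrete, here is a hedged sketch (the function and test names are illustrative):

```python
# Hypothetical function with two branches.
def absolute_value(num):
    if num < 0:
        return -num
    return num

# This test only exercises the non-negative branch, so a coverage
# tool would flag the `num < 0` branch as untested even though
# the test suite passes.
def test_absolute_value_positive():
    assert absolute_value(3) == 3
```

A passing test suite can therefore hide completely unexercised branches, which is exactly what a coverage report surfaces.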

Coveralls can be integrated with Travis to run once Travis finishes its tests. Like Travis, Coveralls comes with another pretty badge to show the world what percent of your code has been tested.
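As a sketch, the integration usually amounts to a few extra lines in .travis.yml; my_package below is a placeholder for your own package name, and the exact steps may vary:

```yaml
install:
  - pip install coveralls
script:
  # Run the tests under coverage measurement via nose's coverage plugin.
  - nosetests --with-coverage --cover-package=my_package
after_success:
  # Upload the coverage report to Coveralls.
  - coveralls
```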

Coverage Status


###Codacy

Lastly, there’s Codacy, another great tool that is free for open source software projects. Like Coveralls and Travis, Codacy works with GitHub repositories (and many others) and watches for commits to your repository. Upon each new commit, it grades your code and provides a report card on how it stacks up in several categories. These categories are:

  • Code Complexity
  • Code Style
  • Compatibility
  • Documentation
  • Error Prone
  • Performance
  • Security
  • Unused Code

If any of these factors are less than perfect, you can zoom in and Codacy provides details on how to improve your code. Like all great tools, Codacy makes pretty badges to show off your skills.

Codacy Badge

###Other Badges

I hope you find the above tools useful and implement them in your own projects. If you are looking for other ways to decorate your repository README files, there are plenty of other useful badges out there to discover, or you can create your own.