Basic Python testing setup with pytest
Recently, pytest has become the de-facto standard Python testing framework. Compared to unittest or nose, pytest assumes a different philosophy: there are no more test cases, setUp/tearDown, specific asserts and all the other boilerplate. pytest tries to keep things simple by using plain functions, which makes writing tests very straightforward.
I have been using pytest
for a while now and I quite like it. I use it
for all new projects. It's worth noting that it maintains compatibility with
unittest
and nose
test suites, which makes migration easy -- I recommend
giving it a go!
In this post, we'll take a look at how to get started with pytest
in a
Python project.
Versions used
For reference, we'll be using Python 3.6.6 and pytest 3.6.3.
You can install Python using your favorite package manager or from the Python website.
I recommend installing all Python packages (including pytest) through pip.
There's also an excellent guide on installing Python and pip on the Hitchhiker's Guide to Python.
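For example, assuming pip is already set up on your system, getting pytest boils down to a single command (your environment may call it pip3 or use a virtualenv -- adjust accordingly):
$ pip install pytest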
Initial structure
Let's start with a very simple project with a few files. The directory structure we're going to start with is a plain Python code repository as follows:
$ exa -TF code/pytest-basic-setup
code/pytest-basic-setup/
├── pyproject/
│  ├── __init__.py
│  └── calculations/
│     ├── __init__.py
│     └── running.py
└── README.md
Our pyproject so far contains basic calculation functions in calculations/running.py:
def running_sum(numbers):
    current = 0
    for num in numbers:
        current += num
        yield current


def running_mean(numbers):
    for length, sum_so_far in enumerate(running_sum(numbers), start=1):
        mean = sum_so_far / length
        yield mean
We can execute these functions from the Python interpreter (note how we're using list() to get a list of numbers -- we need to do that because the functions are "lazy" generator functions, using the yield statement):
>>> list(running_sum([1, 2, 3]))
[1, 3, 6]
>>> list(running_mean([10, 20, 60]))
[10.0, 15.0, 30.0]
Adding basic tests using doctests
Note how these examples we've run make nice, concise tests as well as code examples. They make perfect doctests.
doctests
are runnable code snippets included in docstrings. They work so that
code prefixed with >>>
is executed in the interpreter, and expected outputs
are captured below (as strings). The good news is that pytest supports them out of the box!
Let's modify running.py to include them, along with basic documentation.
def running_sum(numbers):
    """
    Lazily generate a running sum of an iterable of numbers.

    >>> list(running_sum([1, 2, 3]))
    [1, 3, 6]
    """
    current = 0
    for num in numbers:
        current += num
        yield current


def running_mean(numbers):
    """
    Lazily generate a running mean of an iterable of numbers.

    >>> list(running_mean([10, 20, 60]))
    [10.0, 15.0, 30.0]
    """
    for length, sum_so_far in enumerate(running_sum(numbers), start=1):
        mean = sum_so_far / length
        yield mean
Now, we can run pytest on our module (note we need to use the --doctest-modules flag):
$ pytest running.py --doctest-modules
=========================== test session starts ============================
platform linux -- Python 3.6.6, pytest-3.6.3, py-1.5.4, pluggy-0.6.0
rootdir: /code/pytest-basic-setup/pyproject/calculations, inifile:
collected 2 items
running.py .. [100%]
========================= 2 passed in 0.01 seconds =========================
Success! Now, let's look at adding more involved tests in dedicated test modules.
Adding test modules -- structure
When adding test code, we're faced with an important choice: namely, where to put it. I'd say there are two main conventions:
- Put all the tests in a top-level test or tests directory.
  This is a good choice if you're planning to package your project (e.g. as a PyPI package) and share it with others. Your users might not be interested in running tests and the test code, just consuming the end-user code.

  $ exa -TF code/pytest-basic-setup-tests-toplevel
  pytest-basic-setup-tests-toplevel/
  ├── pyproject/
  │  ├── __init__.py
  │  └── calculations/
  │     ├── __init__.py
  │     └── running.py
  ├── README.md
  └── tests/                     # <- all test modules go here
     ├── __init__.py
     └── test_calculations/
        ├── __init__.py
        └── test_running.py      # <- "test_" prefix indicates a test module
- Put tests along with individual modules / packages.
  This might be a better choice if you're not packaging your project and everyone using it always clones the whole repo, including tests.

  $ exa -TF code/pytest-basic-setup-tests-alongside
  pytest-basic-setup-tests-alongside/
  ├── pyproject/
  │  ├── __init__.py
  │  └── calculations/
  │     ├── __init__.py
  │     ├── running.py
  │     └── tests/               # <- tests right alongside code
  │        ├── __init__.py
  │        └── test_running.py
  └── README.md
The good news is that pytest supports pretty much any convention, as long as you stick to prefixing everything with test_. Properly configured, it will discover every single test in your codebase.
I recommend making it possible to run all tests from the top-level project directory by just typing pytest on the command line. This encourages users
to run tests, taking the guesswork out of "how do I run tests?"
and also makes CI/CD easier to set up.
We're going to use the first (top-level tests
dir) approach in our further
examples.
Asserting with pytest
pytest makes it very easy to write test assertions. It rewrites plain assert statements in test modules so that failure reports understand constructs like in, is, >= etc. and show the values involved.
In other words, the need to use a specific assert function (e.g. unittest's assertEqual, assertIs, or nose's snake case equivalents such as assert_in) is very rare.
While "plain" assert
statements are fine for testing, the specialized
versions give much more context to anyone reasoning about the test or
debugging its failure. So good practice is to make assertions as specific
as possible -- and pytest does that for us.
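For illustration, here's a small made-up example (the test name and values are purely for demonstration) of the kind of plain asserts pytest will introspect for us on failure:

def test_plain_asserts_example():
    numbers = [1, 2, 3]
    assert 2 in numbers               # instead of unittest's assertIn / nose's assert_in
    assert numbers[-1] >= numbers[0]  # instead of assertGreaterEqual
    assert numbers is not None        # instead of assertIsNotNone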
Without further ado, let's write a few tests for the running_sum
function.
$ exa -TF code/pytest-basic-setup-tests-toplevel
pytest-basic-setup-tests-toplevel/
├── pyproject/
│  ├── __init__.py
│  └── calculations/
│     ├── __init__.py
│     └── running.py
├── README.md
└── tests/
   ├── __init__.py
   └── test_calculations/
      ├── __init__.py
      └── test_running.py        # <- we'll be modifying this file
from pyproject.calculations.running import running_sum
def test_running_sum_empty_input():
assert list(running_sum([])) == []
def test_running_sum_range_100():
numbers = range(100)
result_list = list(running_sum(numbers))
assert len(result_list) == len(numbers)
assert result_list[0] == 0
assert result_list[-1] == sum(numbers)
assert result_list[50] == sum(numbers[:50])
Running these tests gives the following output:
$ pytest test_running.py
========================== test session starts ===========================
platform linux -- Python 3.6.6, pytest-3.6.3, py-1.5.4, pluggy-0.6.0
rootdir: /code/pytest-basic-setup-tests-toplevel/tests/test_calculations, inifile:
collected 2 items
test_running.py .F [100%]
================================ FAILURES ================================
_______________________ test_running_sum_range_100 _______________________
def test_running_sum_range_100():
numbers = range(100)
result_list = list(running_sum(numbers))
assert len(result_list) == len(numbers)
assert result_list[0] == 0
assert result_list[-1] == sum(numbers)
> assert result_list[50] == sum(numbers[:50])
E assert 1275 == 1225
E + where 1225 = sum(range(0, 50))
test_running.py:16: AssertionError
=================== 1 failed, 1 passed in 0.02 seconds ===================
Oops! We've made a classic off-by-one mistake here. We're comparing the
51st result element against the sum of the first 50 numbers. Note how useful and human
readable the test failure is -- we see the values being compared with ==
below the line that failed. Fixing this to:
def test_running_sum_range_100():
    numbers = range(100)
    result_list = list(running_sum(numbers))
    assert len(result_list) == len(numbers)
    assert result_list[0] == 0
    assert result_list[-1] == sum(numbers)
    assert result_list[49] == sum(numbers[:50])  # FIXED: result_list[49]
Makes our tests pass:
$ pytest test_running.py -q
.. [100%]
2 passed in 0.01 seconds
More involved asserts
Let's add two more tests for our function -- one for incompatible types, one for float values.
We can check that a piece of code raises an expected exception
using pytest.raises
and we can use pytest.approx
to test for approximate equality. This is good practice for testing floating
point values due to their
limited precision in hardware.
from pytest import raises

from pyproject.calculations.running import running_sum


def test_running_sum_empty_input():
    assert list(running_sum([])) == []


def test_running_sum_range_100():
    numbers = range(100)
    result_list = list(running_sum(numbers))
    assert len(result_list) == len(numbers)
    assert result_list[0] == 0
    assert result_list[-1] == sum(numbers)
    assert result_list[49] == sum(numbers[:50])


def test_running_sum_invalid_types():
    invalid_input_str = [1, 2, "42"]
    invalid_input_list = [[1], 2, 3]
    with raises(TypeError):
        list(running_sum(invalid_input_str))
    with raises(TypeError):
        list(running_sum(invalid_input_list))


def test_running_sum_floats():
    # NOTE: direct float comparison (no approx)
    assert list(running_sum([1.0, 3.11, 5.33])) == [1.0, 4.11, 9.44]
We deliberately skipped approx in test_running_sum_floats -- let's see what the failure looks like:
================================ FAILURES ================================
________________________ test_running_sum_floats _________________________
def test_running_sum_floats():
> assert list(running_sum([1.0, 3.11, 5.33])) == [1.0, 4.11, 9.44]
E assert [1.0, 4.109999999999999, 9.44] == [1.0, 4.11, 9.44]
E At index 1 diff: 4.109999999999999 != 4.11
E Use -v to get the full diff
test_running.py:33: AssertionError
=================== 1 failed, 3 passed in 0.02 seconds ===================
Again, pytest makes the error easy to diagnose -- it knows that the objects being compared are lists and indicates that the difference occurs at index 1.
We're comparing two float values that are almost, but not quite, equal. Such are the joys of working with floating point numbers.
For reference, outside of tests we'd use this sort of construct:
>>> from math import isclose
>>> 1.0 + 3.11 == 4.11
False
>>> isclose(1.0 + 3.11, 4.11)
True
But in pytest, approx
gets the job done (and we can use it on lists and
other structures that contain floats):
from pytest import approx


def test_running_sum_floats():
    # FIXED: using pytest.approx for float comparison
    assert list(running_sum([1.0, 3.11, 5.33])) == approx([1.0, 4.11, 9.44])
With that, we get a clean test pass:
$ pytest test_running.py -v
========================== test session starts ===========================
platform linux -- Python 3.6.6, pytest-3.6.3, py-1.5.4, pluggy-0.6.0 -- /usr/bin/python
cachedir: .pytest_cache
rootdir: /code/pytest-basic-setup-tests-toplevel/tests/test_calculations, inifile:
collected 4 items
test_running.py::test_running_sum_empty_input PASSED [ 25%]
test_running.py::test_running_sum_range_100 PASSED [ 50%]
test_running.py::test_running_sum_invalid_types PASSED [ 75%]
test_running.py::test_running_sum_floats PASSED [100%]
======================== 4 passed in 0.09 seconds ========================
Putting it all together
We have achieved quite good test coverage for running_sum.
However, running_mean
needs some more work -- we'll leave that as an exercise
for the reader.
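(As a hint for that exercise, a starting point could look something like the sketch below -- the test name and input values are just suggestions.)

from pytest import approx

from pyproject.calculations.running import running_mean


def test_running_mean_simple():
    # Running sums of [1.0, 3.0, 5.0] are 1, 4, 9, so the means are 1/1, 4/2, 9/3.
    assert list(running_mean([1.0, 3.0, 5.0])) == approx([1.0, 2.0, 3.0])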
Let's now make sure that running pytest
in the top level dir runs all the
tests we have written, including doctests.
pytest can be configured easily -- we can drop the configuration into the setup.cfg file.
This file is the de-facto Python standard for configuring all sorts of tools in
one place and pytest has
support for it.
What we're trying to achieve is to make sure pytest always runs all doctests, i.e. always uses the --doctest-modules flag. It can be made the default by adding a setup.cfg with the following content:
[tool:pytest]
addopts = --doctest-modules
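(If you'd rather keep the test configuration in a dedicated file, pytest also reads a pytest.ini -- the section header there is simply [pytest]:)

[pytest]
addopts = --doctest-modules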
The final structure we've ended up with is therefore as follows:
$ exa -TF
./
├── pyproject/
│  ├── __init__.py
│  └── calculations/
│     ├── __init__.py
│     └── running.py
├── README.md
├── setup.cfg
└── tests/
   ├── __init__.py
   └── test_calculations/
      ├── __init__.py
      └── test_running.py
Now, all tests are run as expected (also note that pytest indicates the inifile it's using and the directory it's being run from):
$ pytest
========================== test session starts ===========================
platform linux -- Python 3.6.6, pytest-3.6.3, py-1.5.4, pluggy-0.6.0
rootdir: /code/pytest-basic-setup-tests-toplevel, inifile: setup.cfg
collected 6 items
pyproject/calculations/running.py .. [ 33%]
tests/test_calculations/test_running.py .... [100%]
======================== 6 passed in 0.03 seconds =======================
Further reading
That's all for today!
The full, final code for this post is available on GitHub.
If you're interested in pytest and testing Python overall, please check out the links below.
- pytest documentation is quite comprehensive and it has a section on doctests.
- Python documentation for doctest module describes doctests in great detail, with many examples.
- Testing Your Code section in the Hitchhiker's Guide to Python provides a concise introduction to testing overall and good test practices. It also lists the most popular Python test tools and modules (I still recommend going with pytest!).
- The fastats project I contributed to has many tests, including doctests, and uses pytest to run them all.