Are Your Tests Enough? Measuring Coverage with Coverage.py

In the last post, we talked about why testing is vital for Python libraries and how pytest makes writing those tests easier. You might now have a growing suite of tests, and they all pass – fantastic! But how do you know if those tests are actually running through all the important logic in your library?

It’s surprisingly easy to write tests that look good but miss critical edge cases or entire code paths. Has that obscure if block handling a rare configuration ever been triggered? Was the error handling for a specific exception ever actually exercised? This is where code coverage comes in.

What is Code Coverage (and Why Care)?

Code coverage measures the percentage of your codebase (lines, branches, etc.) that is executed while your test suite runs. It tells you which parts of your library are being exercised by your tests and, more importantly, which parts are not.
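
To make that concrete, here is a minimal sketch (the load_config function and its strict flag are made up for illustration) of how a passing test can still leave an entire code path unexecuted:

# my_library/core.py (hypothetical)
def load_config(path, strict=False):
    if strict and not path.endswith(".toml"):
        # Rare configuration path that tests can easily forget
        raise ValueError(f"Unsupported config file: {path}")
    return {"path": path}

# tests/test_core.py
def test_load_config_default():
    assert load_config("settings.toml") == {"path": "settings.toml"}

The test passes, but the raise inside the strict branch is never executed. A coverage report makes that gap visible immediately.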

For library developers, aiming for high code coverage offers several benefits:

  • Identifies Untested Code: The most obvious benefit! It pinpoints functions, classes, or specific lines/branches that lack test coverage, highlighting potential blind spots.
  • Increases Confidence: Higher coverage generally correlates with higher confidence that your code behaves as expected under the conditions your tests exercise.
  • Guides Test Writing: Coverage reports can guide you on where to focus your testing efforts next. If a critical module has low coverage, you know where to write more tests.
  • Highlights Dead Code: Sometimes, code that is never executed by tests might actually be unreachable or redundant (“dead code”) and could potentially be removed.

Important Note: High coverage doesn’t automatically mean your tests are good. You could execute a line of code without actually asserting its behavior is correct. Coverage is a valuable metric, but it’s not a substitute for thoughtful test design.
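
As a quick (hypothetical) illustration of that caveat: the first test below executes every line of the function, so coverage reports 100%, yet it would never catch a bug because it asserts nothing about the result. Only the second test actually verifies behavior:

def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

def test_conversion_runs():
    # Executes the code (counts toward coverage) but checks nothing
    celsius_to_fahrenheit(100)

def test_conversion_is_correct():
    # A meaningful test asserts the expected result
    assert celsius_to_fahrenheit(100) == 212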

Measuring Coverage with coverage.py and pytest-cov

The standard tool for measuring code coverage in Python is coverage.py. While you can use it directly, it integrates seamlessly with pytest via the pytest-cov plugin, making it incredibly easy to use.
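
For reference, using coverage.py directly looks roughly like this; pytest-cov simply folds these steps into a single pytest invocation:

coverage run -m pytest     # run the test suite under coverage measurement
coverage report -m         # terminal summary; -m also lists missed line numbers
coverage html              # write an interactive HTML report to htmlcov/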

1. Installation:

If you haven’t already, install the plugin:

pip install pytest-cov

2. Running Tests with Coverage:

Now, you can run pytest with an added flag to specify which package or module you want to measure coverage for:

pytest --cov=my_library tests/

(Replace my_library with the actual name of your library’s package directory, and tests/ with your tests directory if it’s different).

3. Interpreting the Output:

After your tests run, pytest-cov will append a coverage report to the standard pytest output. It typically looks something like this:

---------- coverage: platform linux, python 3.10.12-final-0 ----------
Name                     Stmts   Miss  Cover
----------------------------------------------
my_library/__init__.py       1      0   100%
my_library/core.py          25      3    88%
my_library/utils.py         10      0   100%
----------------------------------------------
TOTAL                       36      3    92%

=========================== 15 passed in 0.12s ===========================

  • Stmts: Total number of executable statements in the file.
  • Miss: Number of statements not executed by your tests.
  • Cover: The percentage of statements covered ((Stmts - Miss) / Stmts).

In this example, my_library/core.py has 3 lines that weren’t executed by any tests, resulting in 88% coverage for that file and 92% overall.
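
To find out exactly which statements in my_library/core.py were missed without leaving the terminal, ask for the term-missing report, which adds a Missing column listing the offending line numbers:

pytest --cov=my_library --cov-report=term-missing tests/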

4. Generating HTML Reports:

While the terminal report is useful, an HTML report provides a much more detailed, interactive view. You can see exactly which lines in each file were missed:

pytest --cov=my_library --cov-report=html tests/

This command creates an htmlcov/ directory in your project root. Open the index.html file inside it in your browser. You can click through files and see lines highlighted in green (covered) and red (missed).
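
If you want both the terminal summary and the HTML report from a single run, --cov-report can be given more than once:

pytest --cov=my_library --cov-report=term-missing --cov-report=html tests/

Since the HTML report is generated output, it’s also worth adding htmlcov/ to your .gitignore.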

Configuring Coverage in pyproject.toml

While command-line options work great for quick checks, you’ll probably want to set up some standard coverage settings for your project. The best place for this is your pyproject.toml file – the modern way to configure Python tools.

Here’s a practical example of configuring pytest-cov:

[tool.pytest.ini_options]
addopts = "--cov=my_library --cov-report=term-missing"
testpaths = ["tests"]

[tool.coverage.run]
branch = true              # Enable branch coverage measurement
source = ["my_library"]    # Only measure coverage for our library
omit = [                   # Files to exclude from coverage
    "tests/*",
    "my_library/__init__.py",
]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "if __name__ == .__main__.:",
    "raise NotImplementedError",
    "raise ImportError",
    "except ImportError:",
]
fail_under = 85    # Fail if coverage drops below 85%
show_missing = true
skip_empty = true

Let’s break down what’s happening here:

  • The [tool.pytest.ini_options] section configures pytest itself:

    • addopts adds default command-line options (so you can just run pytest)
    • testpaths tells pytest where to look for tests
  • [tool.coverage.run] controls how coverage data is collected:

    • branch = true enables branch coverage (checking that both the taken and not-taken paths of each if are exercised; see the sketch after this list)
    • source specifies which package(s) to measure
    • omit lists files to exclude (like test files themselves)
  • [tool.coverage.report] configures the coverage report:

    • exclude_lines lists regex patterns for lines to ignore, such as boilerplate or anything marked pragma: no cover (also shown in the sketch below)
    • fail_under sets a minimum coverage threshold
    • show_missing = true always shows which lines were missed
    • skip_empty = true ignores files with no executable code
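
To make the branch and exclude_lines settings concrete, here is a small hypothetical sketch. With statement coverage alone, the single test below covers every line of classify(); with branch = true, coverage.py additionally reports that the value <= 100 path of the if was never taken. The debug_dump() helper is marked with pragma: no cover, so the exclude_lines pattern above keeps it out of the report entirely:

# my_library/core.py (hypothetical)
def classify(value):
    label = "small"
    if value > 100:
        label = "large"
    return label

def debug_dump(obj):  # pragma: no cover
    # Developer-only helper, deliberately excluded from coverage
    print(repr(obj))

# tests/test_core.py
def test_classify_large():
    # 100% statement coverage of classify(), but branch coverage
    # flags the untaken value <= 100 path of the if.
    assert classify(200) == "large"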

With this configuration, you can just run pytest and get consistent coverage reporting every time. Plus, your CI pipeline will fail if coverage drops below your specified threshold – a great way to maintain code quality!

What’s a “Good” Coverage Percentage?

Aiming for 100% coverage can be tempting, but it often leads to diminishing returns, forcing you to write tests for trivial code paths just to hit the number. A more practical approach is:

  • Aim for high coverage (e.g., 85-95%+) on critical parts of your library.
  • Focus on understanding why lines are missed. Is it a genuinely untested scenario, or trivial boilerplate (like if __name__ == "__main__":)?
  • Use coverage as a tool to find gaps, not as the sole measure of test quality.

Measuring code coverage is a crucial step in building robust and reliable Python libraries. coverage.py and pytest-cov make it easy to integrate this check into your workflow, giving you valuable insights into how well your tests are exercising your code.

Next up, we’ll explore tox, a powerful tool for testing your library against multiple Python versions and environments – essential for ensuring broad compatibility.
