Unit testing: looking for errors in a subsystem (class, object, or function) in isolation.
The basic idea: given a class Foo, create another class FooTest to test it, containing various “test case” methods to run. In Python with pytest, a test case is simply a function whose name starts with test_:

import pytest
# Example function to test
def factorial(n):
    if n < 0:
        raise ValueError("Factorial not defined for negative numbers")
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

# Test for the example function
def test_factorial():
    assert factorial(0) == 1
    assert factorial(1) == 1
    assert factorial(5) == 120
    with pytest.raises(ValueError):
        factorial(-1)
pytest does not provide a large set of specific assertion methods. Instead, you mostly use the plain assert statement; pytest rewrites it so that failures report the values involved.
There are specific helpers for special cases, like checking that exceptions do occur: pytest.raises, as used in the factorial test above.
The documentation covers others, such as checking for warnings (pytest.warns). NumPy also has assertions for data vectors, in numpy.testing.
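For example, a minimal sketch using numpy.testing (the normalise function here is a hypothetical stand-in for your own numeric code):

import numpy as np
from numpy.testing import assert_allclose

def normalise(v):
    # Hypothetical helper: scale a vector to unit length
    return v / np.linalg.norm(v)

def test_normalise():
    result = normalise(np.array([3.0, 4.0]))
    # Element-wise comparison within a floating-point tolerance
    assert_allclose(result, [0.6, 0.8], rtol=1e-12)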
pytest fixtures provide reusable setup: a fixture function’s return value is injected into any test that lists its name as a parameter.

import pytest
import pandas as pd

@pytest.fixture
def example_dataframe():
    # Create a simple DataFrame for testing
    data = {'col1': [1, 2, 3], 'col2': [4, 5, 6]}
    df = pd.DataFrame(data)
    return df

def test_dataframe_column_sum(example_dataframe):
    # Test that the sum of 'col1' is correct
    assert example_dataframe['col1'].sum() == 6

def test_dataframe_column_mean(example_dataframe):
    # Test that the mean of 'col2' is correct
    assert example_dataframe['col2'].mean() == 5.0
Torture tests are okay, but only in addition to simple tests.
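For instance, a torture test for the factorial example above, using the standard library's math.factorial as an independent oracle (in addition to, not instead of, the simple cases):

import math

def test_factorial_torture():
    # A large input stresses recursion depth and big-integer arithmetic;
    # math.factorial acts as a reference implementation
    assert factorial(400) == math.factorial(400)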
Many products have a set of mandatory check-in tests that must pass before code can be added to a source code repository.
Imagine that we’d like to add a method subtractWeeks to a Date class that shifts this Date backward in time by the given number of weeks. A test for it might look like the sketch below.
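A minimal sketch (this Date class is illustrative, wrapping the standard library's datetime.date; it is not from any particular library):

import datetime

class Date:
    # Illustrative Date class wrapping datetime.date
    def __init__(self, year, month, day):
        self._d = datetime.date(year, month, day)

    def subtractWeeks(self, weeks):
        # Shift this Date backward in time by the given number of weeks
        self._d -= datetime.timedelta(weeks=weeks)

    def __eq__(self, other):
        return self._d == other._d

    def __repr__(self):
        return f"Date({self._d.year}, {self._d.month}, {self._d.day})"

def test_subtract_weeks():
    d = Date(2024, 1, 29)
    d.subtractWeeks(4)
    assert d == Date(2024, 1, 1)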
Your environment is part of your code: a new version of the seaborn package changes the way it orders colors on a graph, and suddenly your figures differ. Guard against this by isolating each project in a virtualenv and pinning package versions in a requirements.txt file.
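A minimal sketch (the version numbers are placeholders; pin whatever your analysis actually uses):

# requirements.txt -- exact versions, so tests run against a known environment
seaborn==0.13.2
pandas==2.2.2
pytest==8.2.0

virtualenv .venv
source .venv/bin/activate
pip install -r requirements.txt
python -m pytest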
Property-based testing: instead of checking a few hand-picked outputs, you assert properties that should hold for every valid input. For example, if C = f(A, B) sums two positive numbers, then C > B and C > A for any A and B.
hypothesis is a Python package for this; a minimal sketch follows below.
datafuzz can perform the data equivalent, i.e. add noise and similar perturbations that shouldn’t influence the analysis.
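A minimal sketch with hypothesis, using the add-two-positives example above (add_positive is a hypothetical stand-in for your own f): hypothesis generates many random inputs and checks the property for each.

from hypothesis import given
from hypothesis import strategies as st

def add_positive(a, b):
    # Hypothetical function under test: the f in C = f(A, B)
    return a + b

@given(st.integers(min_value=1), st.integers(min_value=1))
def test_result_exceeds_both_operands(a, b):
    c = add_positive(a, b)
    # The property: C > A and C > B for all positive inputs
    assert c > a
    assert c > b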
Finally, readability: would you want to maintain pythonfunctioniwrotemyselfonasundayafterwaytoomuchcoffee(argument = False)?
pylint is a linter for Python code: it flags naming problems, style violations, and likely bugs without running the program.