Unit tests
Philosophy
Unit tests focus on verifying the functionality of individual units or components of our software. A unit is the smallest testable part of the software, such as a function, method, or class. Unit tests in Fluid Attacks must be:
- Repeatable: Regardless of where they are executed, the result must be the same.
- Fast: Unit tests should take little time to execute. Since this is the first level of testing, where functions, methods, and classes are isolated, results should be immediate. A unit test should take at most two (2) seconds.
- Independent: The functions or classes to be tested should be isolated, no side-effect behaviors should be validated, and, if possible, calls to external resources such as databases should be avoided; for this, we use mocks.
- Descriptive: For any developer, it should be evident what is being tested in the unit test, what the result should be, and, in case of an error, what its source is.
Architecture
- Location: To be discovered by the testing framework, test files must be located next to the file under test with the _test suffix, and every test method must start with the test_ prefix. Take a look at the add group tests for reference.
- Utilities: Some utilities are provided to simplify tasks like populating the database, mocking comfortably, and including test files. This allows developers to focus on actually testing the code.
- Coverage: Coverage is a module-scoped integer between 0 and 100. The current coverage for a given module can be found at <module-path>/coverage. For example, api/coverage.
Writing tests
See the following examples to understand how to write tests:
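The snippet below is a minimal sketch and is not taken from the codebase: the function under test (get_stakeholder), the import paths of the helpers, and the exact signatures of @mocks, @utils.parametrize, and the fakers are assumptions made only to show how the pieces fit together.

```python
# Minimal sketch of a unit test. The helpers (mocks, utils, IntegratesAws,
# IntegratesDynamodb, StakeholderFaker) come from the project's test
# utilities; their import paths and exact signatures are assumed here, and
# get_stakeholder is a hypothetical function under test.
@utils.parametrize(
    args=["email", "exists"],
    cases=[
        ["dev@fluidattacks.com", True],
        ["unknown@fluidattacks.com", False],
    ],
)
@mocks(
    aws=IntegratesAws(
        dynamodb=IntegratesDynamodb(
            stakeholders=[StakeholderFaker(email="dev@fluidattacks.com")],
        ),
    ),
)
async def test_get_stakeholder(email: str, exists: bool) -> None:
    # A clean database populated with the faker data backs each case
    stakeholder = await get_stakeholder(email)
    assert (stakeholder is not None) == exists
```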
The @mocks decorator allows you to populate the database with test data through its aws parameter. A clean database is created and populated for each case provided via @utils.parametrize. We go deeper into these decorators and helper methods in the next sections.
DynamoDB
The Integrates database is populated using IntegratesAws.dynamodb in the @mocks decorator. This parameter is an instance of IntegratesDynamodb, a helper class to populate the main tables with valid data:
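The sketch below illustrates the setup described next; the faker signatures, argument names, and the logs_utils import are assumptions made for illustration only:

```python
# Sketch of the population described below; faker signatures, argument names,
# and import paths are assumptions made for illustration.
ORG_ID = "org-test-id"


@mocks(
    aws=IntegratesAws(
        dynamodb=IntegratesDynamodb(
            organizations=[OrganizationFaker(id=ORG_ID)],
            stakeholders=[
                StakeholderFaker(email="admin@fluidattacks.com"),
                StakeholderFaker(email="user@fluidattacks.com"),
            ],
            organization_access=[
                OrganizationAccessFaker(
                    organization_id=ORG_ID,
                    email="admin@fluidattacks.com",
                    state=OrganizationAccessStateFaker(role="admin"),
                ),
                OrganizationAccessFaker(
                    organization_id=ORG_ID,
                    email="user@fluidattacks.com",
                    state=OrganizationAccessStateFaker(role="user"),
                ),
            ],
        ),
    ),
    others=[Mock(logs_utils, "cloudwatch_log", "async", None)],
)
async def test_organization_access() -> None:
    ...  # exercise the code under test against the populated tables
```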
In the example above, we are populating the database with one organization and two stakeholders, and granting both stakeholders access to the organization with different roles.
Every faker is a fake data generator for one element. Parameters are optional and let you tailor the data to your test (e.g., the assigned role in OrganizationAccessStateFaker or the email in StakeholderFaker). The faker's name hints at which IntegratesDynamodb parameter it should be used in.
The others parameter is a way to list all the startup mocks that you require in your test. In the example above, we are mocking the cloudwatch_log function from the logs_utils module to avoid calling CloudWatch directly and to always return None. Mock is a helper class that creates a mock based on the module, the function or variable name, the mode (sync or async), and a return value.
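Reading the mock used above argument by argument (treat this as a sketch of the call, not its exact signature):

```python
# module, function (or variable) name, mode, and the value the mock returns
Mock(logs_utils, "cloudwatch_log", "async", None)
```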
This declarative approach ensures isolation. Each test will have its own data and will not conflict with other tests.
S3
Integrates buckets are created for testing when @mocks is called, and no further actions are required. You can use the buckets in your tests and also load files into them automatically before every test run. To load files into the buckets automatically, place them under a test_data/<test_name>/<bucket_name>/ directory next to your test file.
Use the following file structure as reference:
- main.py (logic here)
- main_test.py (tests here)
- test_data/
  - test_name_1/
    - file_1.txt (it won't be loaded)
    - file_2.txt (it won't be loaded)
  - test_name_2/
    - integrates.dev/
      - README.md (loaded into the integrates.dev bucket)
  - test_name_3/
    - integrates/
      - README.md (loaded into the integrates bucket)
The <test_name> directory is searched to load files into the corresponding buckets. For example, a README.md file will be loaded into integrates.dev for test_name_2, and a different README.md file will be loaded into integrates for test_name_3. This approach ensures both isolation and simplicity in the tests.
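As an illustration, a test could then read one of the preloaded files straight from the mocked bucket. The sketch below assumes the mocked buckets answer to a plain boto3 client and that @mocks can be applied without extra arguments; the project may expose its own S3 helpers instead.

```python
# Sketch only: assumes the mocked buckets are reachable with a plain boto3
# client and that a bare @mocks() call is enough to create them.
import boto3


@mocks()  # hypothetical bare usage; buckets are created when @mocks runs
def test_name_2() -> None:
    client = boto3.client("s3", region_name="us-east-1")
    body = client.get_object(Bucket="integrates.dev", Key="README.md")["Body"]
    assert body.read()
```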
Utils
For easy testing, some utilities and decorators are provided.
Use @parametrize to include several test cases:
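A sketch, assuming @parametrize takes the argument names and a list of cases (get_severity_level is a hypothetical function under test):

```python
@parametrize(
    args=["severity", "expected"],
    cases=[
        [9.8, "CRITICAL"],
        [5.0, "MEDIUM"],
    ],
)
def test_get_severity_level(severity: float, expected: str) -> None:
    assert get_severity_level(severity) == expected  # hypothetical function
```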
Use raises to handle errors during tests:
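A sketch, assuming raises works as a context manager in the spirit of pytest.raises (the exception and function are hypothetical):

```python
def test_get_severity_level_invalid() -> None:
    with raises(InvalidSeverity):  # hypothetical exception type
        get_severity_level(-1.0)  # hypothetical function under test
```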
Use get_file_abs_path to get a file's absolute path in the test_data/<test_name> directory:
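A sketch, assuming get_file_abs_path receives a file name and resolves it inside test_data/<test_name> (parse_report is hypothetical):

```python
def test_parse_report() -> None:
    # Resolves to test_data/test_parse_report/report.csv (assumed behavior)
    path = get_file_abs_path("report.csv")
    with open(path, encoding="utf-8") as file:
        assert parse_report(file.read())  # hypothetical function under test
```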
Use @freeze_time when you want to set the execution time (time-based features).
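A sketch, assuming @freeze_time pins the clock the way freezegun does (get_expiration_date is hypothetical):

```python
from datetime import date


@freeze_time("2024-01-01")
def test_get_expiration_date() -> None:
    # The frozen clock makes the time-based result deterministic
    assert get_expiration_date(days=30) == date(2024, 1, 31)
```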
Running tests
You can run tests for specific modules with the following command:
where:
- <module> is required and can be any Integrates module.
- [test-n] is optional and can be any test within that module.
If no specific tests are provided, this command will:
- Run all tests for the given module.
- Fail if any of the tests fail.
- Generate a coverage report.
- Fail if the new coverage is below the current one for the given module (the developer must add tests to at least keep the same coverage).
- Fail if the new coverage is above the current one for the given module (the developer must add the new coverage value to their commit).
- Pass if new and current coverage are the same.
If specific tests are provided, this command will:
- Only run the given tests.
- Fail if any of the provided tests fail.
- Skip the coverage report generation and evaluation.
Old unit tests
You can run tests using the following command:
To run the ones that modify the mock database:
Currently, every time our unit tests run, we launch a mock stack populated with the data required for the tests to execute. We use mocking to prevent race conditions and dependencies between tests.
When writing unit tests, you can follow these steps to ensure that the test is repeatable, fast, independent, and descriptive:
- Test file: We store our tests using the same structure as our repository. Inside universe/integrates/back/test/unit/src you can find our unit tests. Look for the test_module_to_test.py file or add it if missing.
- Write the test: Once the file is ready, you can start writing the test. Consider the purpose of the function, method, or class that you want to test. Think about its behavior when different inputs are provided. Also, identify extreme scenarios to cover within the test. These will form our test cases and are important for writing our assertions. We use the parametrize decorator when possible to declare different test cases.
- Mocks: What do you mock? A general guideline is to look for the await statements inside the function, method, or class that you want to test. In most cases, await indicates that the awaited function requires an external resource, such as a database. To learn more about mocks, you can refer to the official documentation.
- Mock data: When using mocks, you need to provide the data required for your unit test to run. We accomplish this by using pytest fixtures, which allow us to have mock data available from conftest.py files.
- Assertions: Test the expected behavior. We use assertions to validate results, the number of function or mock calls, and the arguments used in mocks. A sketch combining these steps is shown after this list.
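The sketch below combines these steps for an old-style test; the patched module path, the function under test (group_exists), and the async test setup (e.g., pytest-asyncio) are assumptions for illustration:

```python
from unittest.mock import AsyncMock, patch

import pytest


@pytest.mark.parametrize(
    ["group_name", "expected"],
    [
        ["unittesting", True],
        ["nonexistent", False],
    ],
)
async def test_group_exists(group_name: str, expected: bool) -> None:
    # Mock the awaited call that would otherwise reach the database
    with patch(
        "groups.domain.get_group",  # hypothetical module path
        new_callable=AsyncMock,
        return_value={"name": group_name} if expected else None,
    ) as mocked_get_group:
        result = await group_exists(group_name)  # hypothetical function

    # Validate the result, the number of calls, and the arguments used
    assert result == expected
    mocked_get_group.assert_awaited_once_with(group_name)
```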