Testing framework discussion #217
Open · jrepp opened this issue Mar 1, 2022 · 1 comment

jrepp (Contributor) commented Mar 1, 2022

I'd like to share a small project I did recently to add automated testing and code coverage to an old GitHub repository. The author had archived the project, so I forked it and added the testing capabilities and API extensions.

https://github.com/jrepp/vec/blob/master/.github/workflows/cmake.yml

These workflow actions can be added through the 'Actions' tab by selecting the CMake template. A matrix build can then be configured to automate testing across platform, compiler, and library combinations.
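
For reference, the matrix section of such a workflow looks roughly like the sketch below. This is not a verbatim copy of the linked cmake.yml; the OS list, build types, and step names are illustrative.

# Sketch of a GitHub Actions matrix build for a CMake project (values are examples).
jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        build_type: [Debug, Release]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2
      - name: Configure
        run: cmake -B build -DCMAKE_BUILD_TYPE=${{ matrix.build_type }}
      - name: Build
        run: cmake --build build --config ${{ matrix.build_type }}
      - name: Test
        working-directory: build
        run: ctest -C ${{ matrix.build_type }} --output-on-failure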

You can see some sample output here:
https://github.com/jrepp/vec/runs/5238299537?check_suite_focus=true

Using gcov + gcovr, you get automated coverage analysis of your tests through https://about.codecov.io/. gcov is the standard coverage tool, and gcovr is a nice Python wrapper that automates a lot of the command-line arguments and reporting.
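
On the CMake side, the coverage hookup can be as small as the following sketch. The ENABLE_COVERAGE option name is my own, and the flags assume gcc or clang; the linked repository may wire it up differently.

# Hypothetical coverage switch: gcov needs --coverage at both compile and
# link time so the .gcno/.gcda files exist for gcovr to pick up.
option(ENABLE_COVERAGE "Build with gcov instrumentation" OFF)
if(ENABLE_COVERAGE)
    add_compile_options(--coverage -O0 -g)
    add_link_options(--coverage)
endif()

After the tests run, gcovr can be invoked from the build directory (for example, gcovr -r ..) to generate the report that gets uploaded to codecov.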

In this case, I was upgrading an existing library that had a test case. The test macros were actually very simple and easy to use.

You can see the CMake configuration here:
https://github.com/jrepp/vec/blob/bda7a5da6f1650f6cd4a6f43e73cbb3c30c70e74/CMakeLists.txt#L56
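
The relevant part boils down to registering the existing test binary with CTest so the workflow can simply call ctest. Here is a paraphrased sketch; the target and file names are illustrative, not the exact ones from that file.

# Illustrative only: build the library's existing test source as a CTest test.
enable_testing()
add_executable(vec_test test/test.c src/vec.c)
target_include_directories(vec_test PRIVATE src)
add_test(NAME vec_test COMMAND vec_test)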

For more complex C/C++ projects (like AprilTag), I create multiple CTest targets using utest.h, a header-only testing framework:
https://github.com/sheredom/utest.h

Pros: you get small executables, and if a test crashes it is contained in its own executable, making failures easier to isolate and reproduce.
Cons: slightly longer build times.

With a test/ directory containing its own CMakeLists.txt, you can create multiple test executables using a macro like this (cribbed from an embedded network test suite I'm working on):

macro(network_test name)
    set(sources ${ARGN})
    set(test_name "test_${name}")
    message("configuring ${test_name} with sources ${sources}")

    # Use -Wall for clang and gcc.
    if(NOT CMAKE_CXX_FLAGS MATCHES "-Wall")
        set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall")
    endif()

    # Use -Wextra for clang and gcc.
    if(NOT CMAKE_CXX_FLAGS MATCHES "-Wextra")
        set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wextra")
    endif()

    # Use -Werror for clang and gcc.
    if(NOT CMAKE_CXX_FLAGS MATCHES "-Werror")
        set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror")
    endif()
    
    # Disable C++ exceptions.
    string(REGEX REPLACE "-fexceptions" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-exceptions")
    
    # Disable RTTI.
    string(REGEX REPLACE "-frtti" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-rtti")

    # Build one executable per test and register it with CTest.
    add_executable(${test_name} ${sources})
    add_test(${name} ${test_name})
    target_link_libraries(${test_name} hal net proto mbedcrypto mbedtls mbedx509 llhttp nanolib Threads::Threads)
    target_include_directories(${test_name}
            PRIVATE
            ../../vendor/mbedtls/include
            ../../vendor/llhttp/include
            ../../vendor/nanomq/nanomq/include/
            ../../vendor/nanomq/nanomq/nanolib/include
            ../../vendor/nanomq/nng/)
endmacro()

network_test(mqtt test_mqtt.cpp test_common.cpp)
network_test(http test_http.cpp test_common.cpp)
...
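
One thing the macro assumes is that the enclosing project has already set up CTest, threads, and the vendored dependencies. The parent CMakeLists.txt wiring looks roughly like this sketch (the dependency targets above are from my project, so treat everything here as a placeholder):

# Sketch of the parent CMakeLists.txt that pulls in the test/ directory.
include(CTest)                   # provides BUILD_TESTING and calls enable_testing()
find_package(Threads REQUIRED)   # supplies the Threads::Threads target
if(BUILD_TESTING)
    add_subdirectory(test)
endif()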

There are some unique challenges to automated testing of AprilTag. I'm excited to see where this effort leads.

christian-rauch (Collaborator) commented
I added some test cases in #314 to check that we still achieve the same detections after changes.

This does not cover all variations of input, so something like #72 can still happen on new data if the input is not validated properly. If you are up for it, adding unit tests for individual functions would be appreciated.
