I’m happy to share a real open source project that I’ve been working on: KUnit. The source code is available on GitHub: https://github.com/onerobotics/KUnit.
Coming from a Ruby background where automated unit testing is common practice, it's frustrating to work on robots where tools for automated testing don't exist. No matter how careful I am when making changes to my code, I always feel a little bit exposed without a comprehensive set of unit tests making sure I didn't break something. I haven't come up with a solution for automating TP program testing, but here's a tool you can use to make sure your KAREL programs are doing what they are supposed to do.
Here's an example that tests a simple routine that adds two integers:
```
PROGRAM test_add_int
-- %NOLOCKGROUP is required to run KAREL from browser
%NOLOCKGROUP

-- %INCLUDE the KUnit routines
%INCLUDE kunit.h

-- the ROUTINE under test
ROUTINE add_int(l : INTEGER; r : INTEGER) : INTEGER
BEGIN
  RETURN(l + r)
END add_int

-- 1 + 1 = 2
ROUTINE test_11 : BOOLEAN
BEGIN
  RETURN(kunit_eq_int(2, add_int(1,1)))
END test_11

-- 2 + 2 = 4
ROUTINE test_22 : BOOLEAN
BEGIN
  RETURN(kunit_eq_int(4, add_int(2,2)))
END test_22

-- 0 + 0 = 0
ROUTINE test_00 : BOOLEAN
BEGIN
  RETURN(kunit_eq_int(0, add_int(0,0)))
END test_00

BEGIN
  -- initialize KUnit
  kunit_init

  -- do some tests
  kunit_test('1+1=2', test_11)
  kunit_test('2+2=4', test_22)
  kunit_test('0+0=0', test_00)

  -- output the test suite results
  kunit_output
END test_add_int
```
Once you translate the program and load it onto your robot, you can run it and see the test output in your browser:
```
KUnit

...

Finished in 0.002 seconds
1500.0 tests/sec, 1500.0 assertions/sec

3 tests, 3 assertions, 0 failures
```
Let’s break the routine:
```
ROUTINE add_int(l : INTEGER; r : INTEGER) : INTEGER
BEGIN
  -- this is wrong
  RETURN(l - r)
END add_int
```
When we run the tests again we see:
```
KUnit

FF.

Finished in 0.002 seconds
1500.0 tests/sec, 1500.0 assertions/sec

3 tests, 3 assertions, 2 failures

1) Failure:
1+1=2
Expected 2 but got 0

2) Failure:
2+2=4
Expected 4 but got 0
```
KUnit provides some useful feedback about which of our tests failed and why, helping us track down which part of our code is broken and how to fix it.
KUnit works through the `kunit_test()` routine and its assertions. `kunit_test()` takes two arguments: the first is a short description of what you are testing, and the second is a `BOOLEAN` result. If the `BOOLEAN` result is `TRUE`, the test passes; if it is `FALSE`, the test fails. If you are using the KUnit assertions, you'll also get some useful information about why an assertion failed.
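To make the difference concrete, here's a small sketch (reusing the `add_int` routine from earlier) of the same check written as a plain `BOOLEAN` expression versus a KUnit assertion. The routine names `test_plain` and `test_assert` are just illustrative:

```
-- a hand-rolled check: it passes or fails,
-- but a failure tells you nothing beyond "F"
ROUTINE test_plain : BOOLEAN
BEGIN
  RETURN(add_int(2,3) = 5)
END test_plain

-- the same check via a KUnit assertion: a failure
-- also reports the expected and actual values
ROUTINE test_assert : BOOLEAN
BEGIN
  RETURN(kunit_eq_int(5, add_int(2,3)))
END test_assert
```

Both work with `kunit_test()`, since each returns a `BOOLEAN`, but only the assertion version can tell you what value it actually got.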
Check the README for a list of KUnit’s current assertions. KUnit actually uses KUnit to test itself. I can tell you’re thinking “Dude, that’s so meta!” Those tests are a little awkward, but check out the strlib tests for a more useful example.
My basic workflow for testing a program is:
1. Create a header file, `mylib.h.kl`. This is similar to a C header file that simply provides an interface to `mylib`'s public routines.
2. Create `test_mylib.kl` and set it up to use both KUnit and `mylib`.
3. Write a test for what you want the code to do
4. Run your test suite and see the test fail
5. Write the smallest amount of code to make the test pass
6. Refactor as necessary
7. Go to step 3
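The first two steps of that workflow might look something like the sketch below. The file names follow the `mylib` example above, and the `my_add` routine is purely hypothetical:

```
-- mylib.h.kl: an interface to mylib's public routines,
-- declared with FROM so other programs can call them
ROUTINE my_add(l : INTEGER; r : INTEGER) : INTEGER FROM mylib
```

```
-- test_mylib.kl: the test suite for mylib
PROGRAM test_mylib
%NOLOCKGROUP
%INCLUDE kunit.h
%INCLUDE mylib.h

ROUTINE test_add : BOOLEAN
BEGIN
  RETURN(kunit_eq_int(5, my_add(2,3)))
END test_add

BEGIN
  kunit_init
  kunit_test('my_add adds two INTEGERs', test_add)
  kunit_output
END test_mylib
```

From there the loop is just: add a failing test, make it pass, refactor, repeat.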
If you thoroughly test your KAREL programs, you can be confident that 1) they do what they’re supposed to do, and 2) they continue to do what they’re supposed to do when you make changes to the code.