Chapter 8 tinytest in R

In R, there are multiple packages for defining unit tests or conducting advanced testing procedures. One of the easiest to use is the tinytest package: Lightweight and Feature Complete Unit Testing Framework.

8.1 Loading package

To use this minimalistic approach with the tinytest package, you may need to install it once (install.packages("tinytest")).

Next, you need to load the package:

# Loading the package to be able to use it
library("tinytest")

We can define tests directly in our R script file, but it is better to define them in a dedicated file, which we can run by calling the run_test_file() function, as sketched below.
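As a minimal sketch (the file name test_addition.R is just an example, not something the package prescribes), such a dedicated file contains only the expectations, and we run it from the console or from another script:

# contents of a hypothetical file test_addition.R
# (expectation functions such as expect_equal() are introduced in the next section)
expect_equal(1 + 2, 3, info = "testing addition operator")

# running all test cases from that file
library("tinytest")
results <- run_test_file("test_addition.R")
print(results)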

8.2 Test case

Let us take a closer look at the definition, interpretation and result of a test case using a very simple example.

Test case definition

A test case in tinytest is just a function call. There are different functions, which will be introduced here by example.

Let us start with the same example as in the previous chapter.

expect_equal(1 + 2, 3, info = "testing addition operator")

In this simple test case, we have a call of the expect_equal() function, which takes two expressions and compares their results. Additionally, we can pass an info message to be displayed if the test case fails.

Test case interpretation

In our example, we expect that 1 + 2 is equal to 3.

We always compare two objects. The objects may be given directly as arguments or as expressions to be evaluated. Their order does not really matter for the test result; however, it is better to keep it consistent, because the report treats the first argument as the object under test (current object) and the second one as the expected result (target object).

Next, the obtained result is compared with the expected result.

In our example, 3 (as the obtained result from 1 + 2) will be compared with 3 (as the expected result). As they are equal, the test case will be OK (pass).
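For illustration, the same expectation with the two arguments swapped also passes; only the roles of current and target in a failure report would be reversed:

expect_equal(3, 1 + 2, info = "testing addition operator, swapped arguments")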

If a test case is in our current file, it will be executed when the file is run; if it is in another file, we can execute all of its test cases using the run_test_file() function.

Test case result / report

tinytest will show a report for a single test case or for multiple test cases.

In our example, the test case was OK (passed), as can be seen in the report below.

## ----- PASSED      : <-->
##  call| expect_equal(1 + 2, 3, info = "testing addition operator")
##  diff| NA

If we intended to use addition in our statement under test, but made a typo and used - instead of +, our test case will fail. Additionally, we will obtain information that the problem was with the data, what the difference was, and the info message that we used in the function call.

expect_equal(1 - 2, 3, info = "testing addition operator, wrong expectation")
## ----- FAILED[data]: <-->
##  call| expect_equal(1 - 2, 3, info = "testing addition operator, wrong expectation")
##  diff| Expected 3, got -1
##  info| testing addition operator, wrong expectation

It is also possible that we put an incorrect value as the expected result; in this case, our test case will also fail. Therefore, it is important to double-check expected results.

expect_equal(1 + 2, 4, info = "testing addition operator, wrong expectation")
## ----- FAILED[data]: <-->
##  call| expect_equal(1 + 2, 4, info = "testing addition operator, wrong expectation")
##  diff| Expected 4, got 3
##  info| testing addition operator, wrong expectation

For multiple test cases, a summary can be shown as well, containing the number of failed test cases and the total number of test cases. Interactive reports are also shown by integrated development environments, e.g. RStudio, where you can click on a test case and jump to its definition.
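As a minimal sketch (assuming the test files are collected in a folder called tinytest, which is just an example layout), all of them can be run at once:

# run every test file in the (hypothetical) folder "tinytest";
# printing the returned results shows the individual test cases
# together with the summary mentioned above
results <- run_test_dir("tinytest")
print(results)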

8.3 Example test cases

In this section, we will use very basic R statements to focus on the testing mechanism. Again, you do not need to test built-in functionality; it is used here only to keep the examples as simple as possible. In the next section, we will see examples of testing a user-defined function.

Usage of variables

When testing, we can use any variables that are visible when the expect_equal() function is called.

a <- 1 
b <- 2
expect_equal(a + b, 3, info = "testing addition operator on variables")
## ----- PASSED      : <-->
##  call| expect_equal(a + b, 3, info = "testing addition operator on variables")
##  diff| NA

Levels of comparisons

So far we have used expect_equal(), but in R objects can be compared at different levels of strictness. Therefore, we can use expect_equivalent(), which is less restrictive and ignores attributes, or we can use expect_identical(), which is the strictest test and compares all aspects of the objects, including where they are stored.

a <- 1 
b <- c(a = 1)
expect_equivalent(a, b)
## ----- PASSED      : <-->
##  call| expect_equivalent(a, b)
expect_equal(a, b)
## ----- FAILED[data]: <-->
##  call| expect_equal(a, b)
##  diff| Expected 1, got 1

As we can see, expect_equivalent() ignored the attribute, but expect_equal() didn’t.

Moreover, expect_equal() allows some numerical tolerance when comparing, whereas expect_identical() does not.

expect_equal(1e-10, 0)
## ----- PASSED      : <-->
##  call| expect_equal(1e-10, 0)
expect_equal(1e-10, 0, tolerance = 1e-12)
## ----- FAILED[data]: <-->
##  call| expect_equal(1e-10, 0, tolerance = 1e-12)
##  diff| Expected 0, got 1e-10
expect_identical(1e-10, 0)
## ----- FAILED[data]: <-->
##  call| expect_identical(1e-10, 0)
##  diff| Expected 0, got 1e-10

Even if we store the same value in two different locations in memory (here, two different environments), the two objects will not be identical.

a <- new.env() 
a$x <- 1
b <- new.env() 
b$x <- a$x 
expect_equal(a, b)
## ----- PASSED      : <-->
##  call| expect_equal(a, b)
expect_identical(a, b)
## ----- FAILED[attr]: <-->
##  call| expect_identical(a, b)
##  diff| TRUE

Special comparisons

There are a lot of test cases in which a result is expected to be TRUE, FALSE or NULL. Obviously, we could use expect_equal() and pass TRUE, FALSE or NULL as the second argument. To make this more convenient, there are the expect_true(), expect_false() and expect_null() functions.

a <- TRUE
expect_true(a)
## ----- PASSED      : <-->
##  call| expect_true(a)
expect_false(a)
## ----- FAILED[data]: <-->
##  call| expect_false(a)
##  diff| Expected FALSE, got TRUE
A <- list(a = 1, b = 2)
expect_null(A$c)
## ----- PASSED      : <-->
##  call| expect_null(A$c)

Calling a function

Obviously, in the test, we can call any visible function, for example a built-in function:

expect_equal(as.numeric('32'), 32, info = "testing of a function")
## ----- PASSED      : <-->
##  call| expect_equal(as.numeric("32"), 32, info = "testing of a function")
##  diff| NA

In this example, it is worth noticing that the comparison also takes the data type into account. If the types of the target (expected) and current (under test) objects do not match, the test case will fail.

expect_equal(as.numeric('32'), '32', info = "testing of a function")
## ----- FAILED[data]: <-->
##  call| expect_equal(as.numeric("32"), "32", info = "testing of a function")
##  diff| Modes: character, numeric
##  diff| target is character, current is numeric
##  info| testing of a function

Expecting an error

We can also test whether a function call causes an error, i.e. stops execution and prints an error message.

expect_error(stop("something went wrong"))
## ----- PASSED      : <-->
##  call| expect_error(stop("something went wrong"))

Moreover, we can check if the error message was as we expected.

expect_error(stop("something went wrong"), pattern = "something")
## ----- PASSED      : <-->
##  call| expect_error(stop("something went wrong"), pattern = "something")

If the error message does not match the expected pattern, the test case will fail because of a mismatch of the error/exception message (FAILED[xcpt]).

expect_error(stop("something went wrong"), pattern = "nothing")
## ----- FAILED[xcpt]: <-->
##  call| expect_error(stop("something went wrong"), pattern = "nothing")
##  diff| The error message:
##  diff| 'something went wrong'
##  diff| does not match pattern 'nothing'

Also, if we expect an error but none occurs, the test case will fail.

expect_error(1/0)
## ----- FAILED[xcpt]: <-->
##  call| expect_error(1/0)
##  diff| No error

We can also use expect_silent() (no error), expect_warning() or expect_message().
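For example (a minimal sketch using built-in functions):

expect_warning(log(-1))                     # log(-1) returns NaN and signals a warning
expect_message(message("conversion done"))  # the expression signals a message
expect_silent(1 + 2)                        # no error and no warning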

8.4 Test-driven development by example

Now let us take a more sophisticated example and define a conversion function from meters to feet. This function should calculate feet based on a distance given in meters. To understand the development process of this function, we will do it step by step.

Core functionality

We can start with a functional test and an empty function.

library("tinytest")
meters2feet <- function(x) {}  # initial function definition 
expect_equal(meters2feet(1), 3.28084) # the first test case 
## ----- FAILED[data]: <-->
##  call| expect_equal(meters2feet(1), 3.28084)
##  diff| Modes: numeric, NULL
##  diff| Lengths: 1, 0
##  diff| target is numeric, current is NULL

The function will fail the test case (1/1).

We can just return a hard-coded number (in most cases, a very bad practice!) to pass the test.

meters2feet <- function(x) { return(3.28084) }  
expect_equal(meters2feet(1), 3.28084) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(1), 3.28084)

Obviously, the function will pass the test case now (0/1). However, we are not done: our function is rather useless, and having one test case is usually not enough. Let's add more functional tests and include a calculation.

meters2feet <- function(x) { return(3.28084 * x) }   
expect_equal(meters2feet(0), 0) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(0), 0)
expect_equal(meters2feet(1/3.28084), 1) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(1/3.28084), 1)
expect_equal(meters2feet(1), 3.28084) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(1), 3.28084)

It seems that we are doing well (0/3).

Error handling

If we want to have domain-specific messages for invalid arguments of our function, we need to define error handling.

We will start with an invalid value, defining a new test case and extending the function accordingly.

meters2feet <- function(x) {
    if (x < 0) {
        stop("The distance must be a non-negative number.")
    }
    return(3.28084 * x)
}  
expect_error(meters2feet(-0.1)) 
## ----- PASSED      : <-->
##  call| expect_error(meters2feet(-0.1))
expect_equal(meters2feet(0), 0) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(0), 0)
expect_equal(meters2feet(1/3.28084), 1) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(1/3.28084), 1)
expect_equal(meters2feet(1), 3.28084) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(1), 3.28084)

Now, we pass all test cases again (0/4 failed).

Now, we will deal with the wrong type. We will accept numeric values only.

meters2feet <- function(x) {
    if (x < 0) {
        stop("The distance must be a non-negative number.")
    }
    if (!is.numeric(x)) {
        stop("The distance must be a number.") 
    }
    return(3.28084 * x)
}  
expect_error(meters2feet("1")) 
## ----- PASSED      : <-->
##  call| expect_error(meters2feet("1"))
expect_error(meters2feet(-0.1)) 
## ----- PASSED      : <-->
##  call| expect_error(meters2feet(-0.1))
expect_equal(meters2feet(0), 0) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(0), 0)
expect_equal(meters2feet(1/3.28084), 1) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(1/3.28084), 1)
expect_equal(meters2feet(1), 3.28084) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(1), 3.28084)

Did the new version pass all test cases? According to the report above it did (0/5 failed), but the definition still has a subtle problem. Any idea what it is?

The value check x < 0 is evaluated before the type check, so for a non-numeric argument such as "1" it relies on R silently coercing the number 0 to a character string before comparing. Checks should always be performed in the right order: first types, then values (content). For the final solution, we therefore swap the order of the two checks in the function definition.

Final solution

The following definition can be considered the final solution for this example (0/5 failed). If you would like to extend it, you could add a test case and modify the function; one possible extension is sketched at the end of this section.

library("tinytest")
meters2feet <- function(x) {
    if (!is.numeric(x)) {
        stop("The distance must be a number.") 
    }
    if (x < 0) {
        stop("The distance must be a non-negative number.")
    }
    return(3.28084 * x)
}  

# if the tests are in a file called meters2feet_tinytest.R,
# you can run them all using run_test_file("meters2feet_tinytest.R")
expect_error(meters2feet("1")) 
## ----- PASSED      : <-->
##  call| expect_error(meters2feet("1"))
expect_error(meters2feet(-0.1)) 
## ----- PASSED      : <-->
##  call| expect_error(meters2feet(-0.1))
expect_equal(meters2feet(0), 0) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(0), 0)
expect_equal(meters2feet(1/3.28084), 1) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(1/3.28084), 1)
expect_equal(meters2feet(1), 3.28084) 
## ----- PASSED      : <-->
##  call| expect_equal(meters2feet(1), 3.28084)
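For instance, one possible extension (our own sketch, not part of the original example) is to reject missing values with a dedicated message. Following the same pattern as before, we adjust the function and add a test case for the new behaviour:

# extended (hypothetical) variant: missing values are rejected explicitly
meters2feet <- function(x) {
    if (!is.numeric(x)) {
        stop("The distance must be a number.")
    }
    if (is.na(x)) {
        stop("The distance must not be a missing value.")
    }
    if (x < 0) {
        stop("The distance must be a non-negative number.")
    }
    return(3.28084 * x)
}

# additional test case for the new behaviour
expect_error(meters2feet(NA_real_), pattern = "missing value")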