More Ado About Nothing

Last week I left off after writing a suite of tests that checked whether the parser failed correctly when it encountered many different malformed timestamp situations. I also said I intended to keep working on unit tests, so this week I decided to work towards that goal by starting on cue id unit tests. However, before I could start on this, valgrind testing was added to the test framework to help catch hidden memory leaks made by the parser, so before writing any new unit tests I needed to make sure the old ones would pass under valgrind.

Valgrind testing can be turned on using the following configure flag:

./configure --enable-valgrind-testing

Unfortunately, the parser currently produces a large number of errors during this testing, which can quickly fill the terminal. To get around this it is a good idea to redirect the output of make check into a file so that all the messages are kept. A basic example of this is:

make check &> errorfile

This sends the output of make check (including standard error) to a file named errorfile. Unfortunately, I found that every single one of my test suites failed under valgrind, so I had to add them to the newly added section of the makefile that lists expected valgrind failures. Here is how that section looks:

# On TravisCI, we run valgrind for unit tests with --error-exitcode=1,
# which means otherwise passing tests fail when valgrind fails leak-check.
# Tests added to KNOWN_VALGRIND_TEST_FAILURES will not cause the test run
# to fail. You can run tests with valgrind like so:
#
#   ./configure --enable-valgrind-testing
#   make && make check
KNOWN_VALGRIND_TEST_FAILURES =
if VALGRIND_TESTING
KNOWN_VALGRIND_TEST_FAILURES += \
	csgeneric_unittest \
	csline_unittest \
	cssize_unittest \
	csposition_unittest \
	csvertical_unittest \
	csalign_unittest \
	ctorder_unittest \
	ctseparator_unittest \
	cttimestampminute_unittest \
	cttimestampsecond_unittest \
	cttimestampsecondfrac_unittest
endif

Each unit test that is expected to fail under valgrind is added to the KNOWN_VALGRIND_TEST_FAILURES variable.

After I had finished running my cue timestamp tests through valgrind I was finally able to move forward with working on tests for cue ids.

The major difference I found between working on the cue timestamp tests and the cue id tests was the amount of commenting that had been done. On the cue timestamp tests the writer (caitp) had left very detailed comments, which allowed anyone to pick the tests up and quickly understand what they were meant to verify. Unfortunately, some of the cue id tests were not as well documented, so a lot of time was spent trying to understand what they were testing and whether what they tested was actually correct. When I redid the commenting I made sure to match the style used in the cue timestamp tests as closely as possible, which I hope will help future readers of the tests.
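To give an idea of the kind of comment I aimed for, here is a rough sketch of a documented test header (hypothetical wording, not the actual comment from the suite):

// A WebVTT cue identifier must not contain the substring "-->", so this
// test feeds the parser cue-ids/arrows/arrow.vtt and expects a
// WEBVTT_ID_TRUNCATED error at the point where the arrow appears.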

Also, when I wrote the unit tests for the cue timestamps I had forgotten to add assertions for the line and column in which the error was expected to be found. This was brought up in class, and I decided to make sure to include these in my cue id tests (someone else had already fixed the timestamp ones). Taking the advice into account, an updated example of what a basic failing test looks like is:

// Updated basic failing test
TEST_F(CueIdArrow, Arrow)
{
  loadVtt( "cue-ids/arrows/arrow.vtt" );
  const Error& err = getError( 0 );

  ASSERT_EQ( WEBVTT_ID_TRUNCATED, err.error() );
  ASSERT_EQ( 3, err.line() );
  ASSERT_EQ( 4, err.column() );
}

This checks that the error occurred on the expected line and column, ensuring that the parser flagged the correct character.
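The fixture file itself isn't reproduced here, but to give an idea of the input this test targets, a hypothetical cue-ids/arrows/arrow.vtt consistent with the expected line and column might look something like this (the real file may well differ):

WEBVTT

id -->
00:11.000 --> 00:13.000
The identifier line of this cue contains an arrow

In this sketch the cue identifier on line 3 is cut off by an arrow starting at column 4, which is exactly what the line and column assertions in the test are pinning down.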

This was also my first foray into creating tests that are expected to pass, since for the cue timestamps I had only written failing tests. My current basic passing test looks like this:

// Basic passing test
TEST_F(CueIdArrow, MalformedArrows)
{
  loadVtt( "cue-ids/arrows/malformed_arrows.vtt" );

  ASSERT_EQ( 0, errorCount() ) << "This file should contain no errors.";
}

This simply asserts that no errors are flagged when the file is parsed. There are probably a few other things that should be added to this test, but for now it serves as a very basic starting point.
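As a rough idea of what could be added, the sketch below also checks that a cue was actually produced, assuming the fixture exposes a cueCount() helper alongside errorCount() (a hypothetical name here if the real fixture calls it something else):

// Possible extension of the basic passing test (sketch only; cueCount()
// is an assumed fixture helper and may be named differently).
TEST_F(CueIdArrow, MalformedArrowsParsesCue)
{
  loadVtt( "cue-ids/arrows/malformed_arrows.vtt" );

  ASSERT_EQ( 0, errorCount() ) << "This file should contain no errors.";

  // Beyond "no errors", make sure the parser actually produced a cue,
  // so a silently dropped cue cannot masquerade as a pass.
  ASSERT_LT( 0, cueCount() );
}

This way a file that the parser simply gives up on, without reporting an error, would still be caught.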

As with the tests I made for cue timestamps, a few unexpected errors were thrown while running these tests, including the previously encountered C++ bad_alloc error, and one test that was supposed to pass failed. I also ran these tests with valgrind enabled and, unfortunately, found that they all failed there as well.

Moving forward I hope to continue adding to and improving the tests I've made, as well as looking into the errors behind these failures.
