Automated tests

Automated tests are an important component for maintaining the integrity and reliability of knowledge maps. By automating the testing process, authors can ensure that any changes or updates to the knowledge map do not adversely affect its behaviour. These tests can be configured to simulate various scenarios, verifying that the knowledge map produces consistent and expected results. With the ability to run tests against a draft version of the knowledge map, authors can identify and rectify issues before they impact the live system. Automated testing not only streamlines the quality assurance process but also significantly reduces the manual effort required to maintain the knowledge map's accuracy over time.

Setting up an automated test

When creating a new automated test, start by entering the query you want to test.

Enter the relationship name together with the subject instance, the object instance, or both. Clicking Next will auto-populate the next step in the test; alternatively, add the next step manually with the circular + button.

If a question is asked next, you can type the test response and click Next to auto-populate the next step again. For plural answers, click + under the answer to add more than one answer.

This can be repeated until you get to the result.

When configuring a question response, you can enter the subject, relationship, object and certainty factor to respond with (for a second form question), or a yes/no response (for a first form question).

You can also choose to provide no response to a question (if the map is configured to allow this). This will be presented as 'Skip response' when the test is saved.

Note that you should always finish an automated test with an expected result; even if no result is expected, you can specify 'No result'.
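To make this structure concrete, the sketch below models a test definition of this shape in TypeScript. All type and field names here are illustrative assumptions, not Rainbird's actual test or export schema.

```typescript
// Hypothetical shape of an automated test definition (illustrative only;
// not Rainbird's actual schema).

// The query under test: a relationship plus a subject and/or object instance.
interface Query {
  subject?: string;
  relationship: string;
  object?: string;
}

// A scripted response to a question the engine asks during the test.
type QuestionResponse =
  | { kind: "first-form"; answer: "yes" | "no" }  // yes/no question
  | { kind: "second-form"; subject: string; relationship: string;
      object: string; certainty: number }          // full fact, with certainty factor
  | { kind: "skip" };                              // 'Skip response'

// Every test finishes with an expected result, which may be 'No result'.
type ExpectedResult =
  | { kind: "result"; object: string; certainty?: number }
  | { kind: "no-result" };

interface AutomatedTest {
  query: Query;
  responses: QuestionResponse[]; // answered in the order defined here
  expected: ExpectedResult[];    // one or more expected results
}
```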

Running an automated test

Once a test has been created, it can be run at a later date to verify that the query runs as expected.

All tests can be run with the Run All button, or they can be run individually with the play button on the test itself.

There are two possible causes for a test to fail:

Unexpected question or result

If Rainbird returns a question or a result that wasn't expected, then a test will fail. It's worth noting that questions are expected in the order defined in the test, and the test will fail if they arrive in a different order.

If an unexpected result is received, the evidence tree can be accessed to view the chain of reasoning and determine whether the result is in fact correct, or why an unexpected one was produced.
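To illustrate why ordering matters, a test runner along these lines would compare each question the engine asks against the next expected question and fail on the first mismatch. This is a hypothetical sketch, not Rainbird's implementation.

```typescript
// Hypothetical order-sensitive check: the nth question asked must match
// the nth question the test expects (illustrative sketch only).
function checkQuestionOrder(expected: string[], asked: string[]): string | null {
  for (let i = 0; i < asked.length; i++) {
    if (i >= expected.length) {
      return `Unexpected question: "${asked[i]}"`;
    }
    if (asked[i] !== expected[i]) {
      return `Expected "${expected[i]}" but got "${asked[i]}"`;
    }
  }
  return null; // no mismatch found
}

// Even with the same questions, a different order fails:
checkQuestionOrder(["Q1", "Q2"], ["Q2", "Q1"]); // => mismatch on the first question
```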

HTTP failure: 400 Bad Request

The request may fail if a response is not consistent with how the knowledge map is configured, for example, responding with multiple answers to a question whose relationship is singular. Where a response doesn't fit what is expected, an HTTP 400 Bad Request error is returned.
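For intuition, the check behind this kind of failure might look like the following. This is a hypothetical sketch of the singular/plural validation, not Rainbird's actual code.

```typescript
// Hypothetical validation behind a 400 response: a singular relationship
// only accepts one answer (illustrative sketch only).
interface HttpError {
  status: number;
  message: string;
}

function validateAnswers(relationshipIsPlural: boolean,
                         answers: string[]): HttpError | null {
  if (!relationshipIsPlural && answers.length > 1) {
    return {
      status: 400,
      message: "Bad Request: multiple answers supplied for a singular relationship",
    };
  }
  return null; // response is consistent with the map's configuration
}
```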

Exporting and importing automated tests

You can export automated tests as JSON. This allows changes to be made in a text editor, which is particularly useful for batch wording changes when a concept instance or a relationship has been renamed. Tests can then be imported back into the Rainbird Studio.
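For example, a small script along these lines could apply a relationship rename across an exported test file. The file name and the field layout are assumptions for illustration; match them to your actual export before running anything like this.

```typescript
// Hypothetical batch rename over an exported test file (illustrative only;
// the real export structure may differ).
import { readFileSync, writeFileSync } from "fs";

const path = "exported-tests.json"; // assumed export file name
const tests = JSON.parse(readFileSync(path, "utf8"));

// Walk the parsed JSON and rename a relationship wherever it appears.
function renameRelationship(node: unknown, from: string, to: string): void {
  if (Array.isArray(node)) {
    node.forEach((item) => renameRelationship(item, from, to));
  } else if (node && typeof node === "object") {
    const record = node as Record<string, unknown>;
    if (record["relationship"] === from) record["relationship"] = to;
    Object.values(record).forEach((value) => renameRelationship(value, from, to));
  }
}

renameRelationship(tests, "lives in", "resides in");
writeFileSync(path, JSON.stringify(tests, null, 2));
```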

The export/import function also allows tests to be copied across to different knowledge maps and shared with other authors.

Best practice

Create tests that isolate specific areas of functionality. For example, each time you add a rule to a relationship, you could create a test that isolates and exercises that specific rule.

There are two types of tests you can create:

Question-flow tests step through the full query, providing a response to each question in turn, as described in the setup section above. These verify the result as well as the order and wording of the questions asked.

Result-only tests confirm that specific result(s) are returned given a set of test data. These use the inject functionality to provide all the necessary data up front, bypassing any questions being asked whilst still confirming that the correct result is given. They are beneficial if you want to test the output without testing the order or display of questions.
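As an illustration of the inject approach, the data supplied to a result-only test might look like the sketch below. The InjectedFact shape and field names are assumptions for illustration, not Rainbird's inject format.

```typescript
// Hypothetical injected facts for a result-only test (illustrative shape).
// Each fact pre-answers a question so the engine never needs to ask it.
interface InjectedFact {
  subject: string;
  relationship: string;
  object: string;
  certainty: number; // certainty factor, as a percentage
}

const injectedData: InjectedFact[] = [
  { subject: "Alice", relationship: "lives in", object: "Norwich", certainty: 100 },
  { subject: "Alice", relationship: "works as", object: "Engineer", certainty: 80 },
];
// With these facts injected, the test only asserts on the final result.
```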
