A TwinCAT Unit Test Library

Assertion: All non-trivial algorithmic code should be contained in a library. Rationale: This allows reuse in future projects, thorough testing in a controlled context, and fine-grained control of revisions via version numbering.

Writing libraries requires that the code can be executed somehow as it is developed, as code rarely works first time. And code that lives in a library carries with it a not unreasonable expectation that it works correctly. The best way of achieving this is with unit tests running in some sort of framework.

To this end, I wrote a unit test library a few years ago (https://github.com/RedRockControls/SimpleUnitTestLibrary), and have used it since on many libraries and projects. However, I never found a satisfactory way of making unit tests easily portable between the library and the project in which the library was to be used, and using the PLC visualization to show test results introduced too many dependencies. Hence the new library (https://github.com/RedRockControls/tcl_TwinCAT_UnitTestLibrary).

The aim of making unit tests easily portable is to allow the tests to be written and used in the library project during development, and in the client code when the library is to be actually used in a project.

Running the tests in the library project means you are not continually creating the library and switching to another project to run the tests. And if the tests can run automatically on an online change to the code, then fast feedback is possible. This is especially useful when practicing Test Driven Development (TDD).

Running the same tests in the client project means that the library code is tested in the actual context in which it will be utilised – i.e. the same hardware platform and libraries (ARM processors do not behave identically to x86 processors!).

Using The Library

Creating a unit test

A unit test can be any function block that extends the function block T_UnitTestBase. The test is defined as follows:

  • The Init method is overridden with initialisation code to be executed before the test code is executed
  • The RunTest method is overridden with the test code that executes the test and writes to the two output variables – TestFailed and TestCompleted

So a test can be run just by executing the Init method to perform any initialisation required, then executing RunTest until TestCompleted is TRUE. We can then inspect the TestFailed output to determine the result of the test.
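For illustration, a trivial test might look something like this sketch (T_UnitTestBase, Init, RunTest, TestFailed and TestCompleted come from the library; AddTwoNumbers and the other names are made up for the example):

  FUNCTION_BLOCK Test_AddTwoNumbers EXTENDS T_UnitTestBase
  VAR
      Result : INT;
  END_VAR

  METHOD Init
      // set up anything the test needs before RunTest executes
      Result := 0;

  METHOD RunTest
      // exercise the code under test, then drive the outputs inherited from the base class
      Result        := AddTwoNumbers(2, 3);   // hypothetical function under test
      TestFailed    := (Result <> 5);
      TestCompleted := TRUE;                  // this test completes in a single scan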

However, what we really need is a collection of unit tests that can be executed automatically on an online change to the code. For this, we need a Test Suite.

Test Suite

An example of a test suite is shown below:

A test suite is a function block that holds a collection of unit tests and a test runner. The test runner instance is passed into the declaration of each test. This is required so that the unit test instance can be added to a list of tests maintained by the test runner.

The test runner is executed in the main body of the Test Suite. This executes each unit test sequentially when an online change to the code is detected. The name of the test suite is passed to the test runner so that messages posted to the error window of Visual Studio can be identified accordingly. An optional boolean parameter ‘Verbose’ can also be passed in. If true, then test results for each test are posted to the message window as each test completes.
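Putting that together, a suite might look something like the following sketch (the T_TestRunner type and the FB_init parameter names used to pass the runner, suite name and Verbose flag are my assumptions based on the description above):

  FUNCTION_BLOCK TestSuite_MyLibrary
  VAR
      Runner : T_TestRunner(SuiteName := 'TestSuite_MyLibrary', Verbose := TRUE);
      Test1  : Test_AddTwoNumbers(TestRunner := Runner);      // each test registers itself with the runner
      Test2  : Test_StringFormatting(TestRunner := Runner);   // another hypothetical test
  END_VAR

  // Body of the suite: the runner executes every registered test in sequence
  // whenever an online change to the code is detected
  Runner();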

Results with Verbose TRUE:

Results with Verbose FALSE:

Portability

So now we have a function block defined in a library that contains unit tests and a means of running them. This means we can run the tests from the client project by declaring an instance of the test suite and executing it.
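For example, assuming the hypothetical suite sketched earlier, the client project only needs something like:

  PROGRAM MAIN
  VAR
      LibraryTests : TestSuite_MyLibrary;   // the test suite shipped inside the library
  END_VAR

  LibraryTests();   // the library's unit tests now run on the client's target hardware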

Even better is that if the library is extended to include new functionality, and unit tests are added to the test suite to test this new functionality, then these tests will propagate to the tests in the client project when the library reference is updated.

Assertions

Assertion functions are available to test the correct operation of the code under test. If the assertion fails, it posts an error message to the Error List and sets its Failed output. This can be used to set the TestFailed output of the RunTest method.

The following assertions are defined in the library:

AssertTrue – asserts that the boolean input parameter (Condition) is true

AssertEquals – asserts that two input parameters of type ANY (Expected and Actual) are identical in type, size and content.

AssertNearlyEquals – asserts that the absolute difference between two input parameters of type LREAL (Expected and Actual) is less than the value of a third input parameter (Delta). This is to make the test insensitive to rounding errors in floating point calculations.
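As a rough illustration of how an assertion feeds back into a test (I am assuming here that the assertion's Failed result is available as its return value; the parameter names follow the descriptions above):

  METHOD RunTest
  VAR
      Sum : LREAL;
  END_VAR
      Sum := 0.1 + 0.2;

      // floating point arithmetic is not exact, so compare with a small Delta;
      // the Failed result of the assertion drives the TestFailed output
      TestFailed    := AssertNearlyEquals(Expected := 0.3, Actual := Sum, Delta := 1.0E-9);
      TestCompleted := TRUE;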

Known Issues

If a unit test is added to a test suite, the code must be downloaded to the runtime – no online change is possible. This appears to be due to a bug in the way nested function blocks are initialised. If an online change is attempted, an error is raised:
Error Function ‘FB_init’ requires exactly ‘3’ inputs.

OO Architecture in TwinCAT3

Last updated 09/06/2020

This article is intended as a guide to the adoption of object-oriented (OO) techniques in PLC programs written in structured text using TwinCAT3.

An example project showing some of these ideas in practice can be downloaded here

Background

TwinCAT is a programming tool provided by Beckhoff for programming their range of PC based controllers. TwinCAT3 is the latest version and supports 3 programming styles:

  1. Procedural
  2. A mix of procedural and object-oriented programming
  3. Purely object-oriented programming

Many PLC programs are difficult to understand and difficult to modify due to a high degree of coupling between modules and to informal communication paths between modules. This can be a result of the conflicting requirements of:

  • The high degree of interaction between different parts of a machine
  • The need to minimise coupling and dependencies between different parts of the program.

An object-oriented approach offers a way of reconciling these requirements by allowing POUs to encapsulate behaviour and to exchange messages in a consistent and well-defined way.

However, when I first tried writing a program in a fully object-oriented style, I found it was not straightforward. Creating the classes was easy – getting them to collaborate was hard. I think this is a natural consequence of the very property of objects that makes them so useful – encapsulation.

This article will attempt to explain some of the techniques I have found to minimise these difficulties.

Procedural Code

A procedural program will generally comprise the following:

  • Programs – top level POUs that access global variables and call function blocks
  • Function blocks – POUs that accept inputs and drive outputs that are assigned when the function block is called. Function blocks may in turn contain function blocks and functions

The arrangement of function block calls and the messaging between function blocks is fixed by the program which has the advantage of being simple but the disadvantage of being inflexible. It works but we can do better.

Procedural/OO Mix

A half-way house to moving to a fully OO architecture is to use a mix of procedural and OO styles. The major building blocks of the program are still implemented as function blocks called by programs, but the smaller function blocks that provide services are implemented as classes. This can improve the readability of code by providing a fluent interface (Here is an interesting post on fluent interfaces in TwinCAT3 written by Gerhard Barteling).

For example, the standard rising edge trigger function block can be used as follows:
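Here, StartPb is an illustrative push-button input and StartCycle a hypothetical action to run on the edge:

  VAR
      StartPb     : BOOL;     // push-button input
      StartPbTrig : R_TRIG;   // standard rising edge trigger
  END_VAR

  StartPbTrig(CLK := StartPb);
  IF StartPbTrig.Q THEN
      StartCycle();           // hypothetical action to perform on the rising edge
  END_IF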

We can make this slightly easier to read by wrapping the R_TRIG in a class:
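The client side then reads more fluently, along these lines (again, the surrounding names are illustrative):

  VAR
      StartPb           : BOOL;
      StartPbRisingEdge : TRisingEdge;
  END_VAR

  IF StartPbRisingEdge.Test(StartPb) THEN
      StartCycle();           // hypothetical action to perform on the rising edge
  END_IF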

The TRisingEdge POU is a function block with a single method (Test):
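A sketch of how such a wrapper might be written (the Signal parameter name is a guess):

  FUNCTION_BLOCK TRisingEdge
  VAR
      Trigger : R_TRIG;       // the wrapped standard function block
  END_VAR

  METHOD Test : BOOL
  VAR_INPUT
      Signal : BOOL;
  END_VAR
      Trigger(CLK := Signal);
      Test := Trigger.Q;      // TRUE for one scan when a rising edge is detected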

This can be a useful way to introduce classes of gradually increasing complexity to a procedural program to get a feel for how objects can be used to present an intuitive interface to the client code, and hide the implementation details.

Fully OO Code

A fully OO program is one where all function blocks are implemented as classes – i.e. they use properties and methods to provide interfaces to other objects. Classes are instantiated as objects that interact with each other to provide the required functionality.

So what are the guidelines that we can use to identify the classes we need? The best way that I have found is to think of a class as something with a single responsibility (or a small collection of closely related responsibilities). These quite often turn out to be the noun-based classes that are the traditional candidates, but this approach can also accommodate more abstract responsibilities such as parts tracking or alarm handling.

Provided these classes are kept small, implementing them can be fairly straightforward, as all the required information is held by the class in a well-defined context. The class defines the operations required to handle its responsibility.

The class may carry out this operation during its cyclic execution, or it may allow other objects to invoke the operation as a service via a public interface.

The problem to solve is how to arrange the instances of these classes so that they can safely invoke each other’s services.

The Problems Of Ownership

The simplest case is where the client object (object requesting an operation be performed) owns a service object (object implementing the operation to be performed). In the rising edge trigger example shown earlier, the client object (RisingEdgeExample) asks the service object (StartPbRisingEdge : TRisingEdge) to use its Test method to detect a rising edge and return true when one is detected. This works a treat, so why not use this all the time?

  • It only works if the client object is the sole user of the service object. This is generally the case for small classes such as timers and edge triggers, but not for larger classes that other clients may want to access.
  • If the service object is complex, it will need to hold its own service objects. This can result in a deep hierarchy of objects, which should be avoided as it is difficult to change and difficult to navigate.

A better approach is to avoid nesting complex classes. Instead, assume all complex objects may need to be accessed by other objects, and instantiate them in a global object list. Provided the interface is carefully designed to ensure it is safe to use and cannot lead to an invalid internal state, there should be no problem making such an object globally accessible.
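For example, a global variable list (here called Objects) might hold the program's shared objects; the class and instance names below are purely illustrative:

  // GVL 'Objects': the global list of the program's complex, shared objects
  VAR_GLOBAL
      AlarmHandler    : TAlarmHandler;
      PartsTracker    : TPartsTracker;
      InfeedConveyor  : TConveyor;
      OutfeedConveyor : TConveyor;
  END_VAR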

Referencing Shared Objects

If we have a global object list, any object can reference another object by using its fully qualified name. Whilst this is reasonable for small programs, it should be avoided because:

  • A dependency is created between the classes, which makes the program harder to understand and change.
  • A client object may not know which service object it needs where multiple instances of the service exist (where there are repeated machine elements, for example).

The alternative is to pass in a reference to the service object when the client object is called, either as a reference (using VAR_IN_OUT) or as an interface pointer (using VAR_INPUT) – also known as dependency inversion.

Dependency Inversion

Here, the VAR_INPUT parameter type is an interface. The service object must implement this interface so it can be implicitly converted when it is passed to the client during the call, and the interface must define the properties and methods to be used by the client, so that the client can use this interface pointer as if it were the service object itself.

Thus, only the parts of the service object that are of interest to the client need be passed in. A benefit of this is that, to test the client, a dummy class that implements the interface (a test double) can be passed in to remove the dependency during testing.
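As a sketch, using illustrative names (I_AlarmSink, TConveyor and the Objects instances are not taken from any library):

  // the interface exposes only what the client needs from the service object
  INTERFACE I_AlarmSink
  METHOD RaiseAlarm
  VAR_INPUT
      AlarmId : UDINT;
  END_VAR

  // the client takes the interface as a VAR_INPUT parameter
  FUNCTION_BLOCK TConveyor
  VAR_INPUT
      AlarmSink : I_AlarmSink;
  END_VAR
  VAR
      DriveFault : BOOL;
  END_VAR

  IF DriveFault AND AlarmSink <> 0 THEN
      AlarmSink.RaiseAlarm(AlarmId := 100);   // the conveyor never sees the concrete alarm handler
  END_IF

At the call site, any object that implements I_AlarmSink can be passed in, including a test double:

  // TAlarmHandler is assumed to declare IMPLEMENTS I_AlarmSink
  Objects.InfeedConveyor(AlarmSink := Objects.AlarmHandler);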

Invoking Asynchronous Operations

Where a client wants to invoke an asynchronous operation on a service object (i.e. one taking multiple scans, with the possibility of the operation failing), a single method is not sufficient – one method is required to invoke the operation and another to query the current state of the operation. A third may be required to abort the operation under certain conditions (E-Stop, for example). In order to tie these methods together, an abstraction can be used that follows the command pattern.

Command Pattern

This pattern uses a command object to represent the invocation of an asynchronous command. The command object is an instance variable of the service object. It implements an interface that is made public via a property of the service object that allows a client to start and abort the operation, and query the state of the operation.

The command interface has the following methods and properties:

  • Start() – A method to start the execution of the command
  • Abort() – A method to abort the execution of the command
  • Clear() – A method to clear the results of the command
  • State – indicates the current state of the command.
  • Done – indicates that the last execution completed successfully
  • Error – indicates that the last execution failed to complete successfully
  • ErrorId – error code relating to the reason the last execution failed to complete successfully

The possible states are:

  • Ready – not currently executing
  • Starting – a start command has been accepted, but execution has not begun
  • Busy – currently executing
  • Aborting – an abort command has been received so the command is terminating to leave the service object in a ready state

The command object implements the following methods, which are used by the service object to update the State property:

  • SetBusy()
  • SetDone()
  • SetError(ErrorId)
  • SetAborted()
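Put together, the interface might be declared along these lines (the E_CommandState name and the exact return types are my assumptions):

  TYPE E_CommandState : (Ready, Starting, Busy, Aborting); END_TYPE

  INTERFACE I_Command
  METHOD Start : BOOL             // TRUE if the start was accepted (i.e. the state was ready)
  METHOD Abort : BOOL
  METHOD Clear
  PROPERTY State   : E_CommandState
  PROPERTY Done    : BOOL
  PROPERTY Error   : BOOL
  PROPERTY ErrorId : UDINT

  // the concrete command function block additionally implements SetBusy(), SetDone(),
  // SetError(ErrorId) and SetAborted() for use by the service object that owns it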

Command Usage

The service object exposes the command interface as a property (Command). The client object holds a reference to this command interface (CommandInterface). The interaction proceeds as follows (a client-side sketch is shown after the steps):

  1. The client object invokes the operation by calling CommandInterface.Start(), which returns true if the start was accepted (i.e. the state was ready)
  2. The service object detects that the command state is starting and initiates the operation, calling Command.SetBusy() to set the state to busy
  3. If the operation completes successfully, the service object calls Command.SetDone(), which sets the Done property and sets the state back to ready
  4. If the operation fails, the service object calls Command.SetError(ErrorId), which sets the Error and ErrorId properties, and sets the state back to ready
  5. The client object monitors the command interface properties to determine the result of the operation, and act accordingly.
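From the client's side, one step of a sequence using this pattern might look like the following sketch (the step numbers, FaultId and Objects.Filler are illustrative):

  VAR
      Step             : INT;
      FaultId          : UDINT;
      CommandInterface : I_Command;
  END_VAR

  // obtained from the service object's Command property
  CommandInterface := Objects.Filler.Command;

  CASE Step OF
      0:   // request the operation
          IF CommandInterface.Start() THEN
              Step := 10;
          END_IF

      10:  // wait for the outcome
          IF CommandInterface.Done THEN
              Step := 20;                            // success, move to the next step
          ELSIF CommandInterface.Error THEN
              FaultId := CommandInterface.ErrorId;   // record why it failed
              Step := 900;                           // go to a fault handling step
          END_IF
  END_CASE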

Command Benefits

The command encapsulates the interaction between the client object and the service object, reducing duplicated code by defining the interaction once.

It creates a standard pattern across the program for how clients invoke operations on services, so a sequence of operations can be implemented by a state machine that sequentially calls Start() and checks Done or Error on a series of command interfaces.

It supports both an ‘execute with feedback’ and a ‘fire and forget’ approach to triggering an operation. The former is useful in sequences where progress to the next step requires confirmation that the operation has completed. The latter is useful in response to manual commands, where it is reasonable to ignore the fact that the command cannot be executed.

The client and the service become decoupled – the client no longer cares about the type of object that the service is, only that it has an interface that controls the invocation of the operation.

A description of an implementation of this pattern can be found here.

Further Decoupling – Domain Events

Domain events are an implementation of the observer pattern that allows an object to publish the fact that something has occurred to other interested subscriber objects (whose types are unknown to the publisher).

A domain event holds a list of objects that are to be notified when the event is raised. It has the following methods for updating this list:

  • ClearEventHandlers()
  • AddEventHandler()

The domain event has methods that allow the publisher to add event data and to raise the event:

  • EventArgs.AddInt()
  • EventArgs.AddString()
  • EventArgs.AddLreal() etc
  • Raise()

The Raise method causes a method in each registered subscriber to be executed, with the EventArgs passed in for evaluation by that event handler method.
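As a rough illustration of how this might read (the event, subscriber and data names are illustrative, and exactly what AddEventHandler accepts will depend on the implementation):

  VAR
      PartLoadedEvent : T_DomainEvent;               // hypothetical event class
      PartSerial      : STRING := 'ABC123';
  END_VAR

  // one-off registration of the interested subscribers
  PartLoadedEvent.ClearEventHandlers();
  PartLoadedEvent.AddEventHandler(Objects.PartsTracker);
  PartLoadedEvent.AddEventHandler(Objects.AlarmHandler);

  // publisher: attach the event data, then notify every subscriber in the same scan
  PartLoadedEvent.EventArgs.AddString(PartSerial);
  PartLoadedEvent.EventArgs.AddInt(3);               // e.g. the station number
  PartLoadedEvent.Raise();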

The main benefits are:

  • The removal of dependencies between the publisher and the subscriber.
  • The synchronous nature of the update – all subscribers are updated at the same time.
  • The ability to have many objects react to a single event in a one to many relationship
  • The ability to have many objects raise a single event in a many to one relationship
  • The cognitive decoupling that it allows between thinking about the event being raised and the event being handled.

A description of an implementation of this pattern can be found here.

Summary

The resulting architecture is based on a global list of small objects that interact using dependency inversion, commands and events. Where the objects’ relationship is one-to-one, dependency inversion or the command pattern is the best way for them to communicate, as this is easier to follow and understand than an event mechanism. Events are appropriate for one-to-many relationships (such as manual control events that must be handled by multiple objects) and many-to-one relationships (such as where many instances of the same class can raise an event that must be handled by a single object).