The Streamed Dataflow Model of Computation


CAPH is based on a strict dataflow model of computation: applications are described as networks of computational units, called actors, exchanging streams of tokens through unidirectional, buffered channels. Data to be processed is simply "pushed" into the input ports of the network, and results are collected at the output ports. Execution occurs as tokens literally "flow" through channels, into and out of actors, with the behavior of each actor specified as a set of firing rules.
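The idea above can be sketched in a few lines of Python (not CAPH syntax; the channel and actor names are illustrative only): channels are FIFOs, an actor reads tokens from its input channel and writes results to its output channel, and data is pushed in and collected at the ports.

```python
from collections import deque

# Channels are unidirectional, buffered FIFOs.
inp, out = deque(), deque()

def inc_actor():
    # A trivial actor: fires whenever a token is available on its
    # input channel, producing one token on its output channel.
    while inp:
        out.append(inp.popleft() + 1)

for v in [1, 2, 3]:   # push tokens into the network's input port
    inp.append(v)
inc_actor()
print(list(out))      # tokens collected at the output port: [2, 3, 4]
```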


This model of computation is illustrated by the very simple example below, involving four actors operating on unstructured streams of tokens carrying integer values. Here the firing rules for each actor are very simple: an actor becomes active whenever tokens are available on all of its input channels and tokens can be written to its output channel(s). In CAPH, streams can also be structured by adding control tokens, allowing more complex firing rules to be expressed.
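The simple firing rule described above can be made concrete with a small Python sketch (the two-input "add" actor and the channel capacity are hypothetical, chosen only to illustrate the rule): an actor fires only when every input channel holds a token and every output channel has room.

```python
from collections import deque

def can_fire(ins, outs, capacity=4):
    # Firing rule: a token is available on every input channel
    # and every output channel has room for a new token.
    return all(ins) and all(len(o) < capacity for o in outs)

a, b, c = deque([1, 2, 3]), deque([10, 20, 30]), deque()

# A hypothetical two-input "add" actor: each firing consumes one
# token from each input channel and produces their sum.
while can_fire([a, b], [c]):
    c.append(a.popleft() + b.popleft())

print(list(c))  # [11, 22, 33]
```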

Parallelism


With this model, interaction between actors is strictly limited to token exchanges through channels, so the behavior of each actor can be described entirely in terms of the actions it performs on its inputs to produce its outputs (no side effects, strictly local control). As a result, all independent operations may be executed concurrently, without risk of interference. This allows full exploitation of the intrinsic data-level and control-level parallelism. Moreover, by allowing several tokens to reside simultaneously on a channel (typically by implementing channels as FIFOs), execution pipelining is increased: each actor can produce an output token before the previous one has actually been consumed.
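The pipelining effect can be sketched as follows (a Python model, not CAPH; the two-stage pipeline, the stage functions, and the FIFO depth are all illustrative assumptions): because the channel between the two stages is a bounded FIFO, stage 1 may produce a new token before stage 2 has consumed the previous one, so in steady state both actors fire on every scheduling round.

```python
from collections import deque

CAP = 2  # FIFO depth: tokens allowed in flight between the stages

src = deque(range(5))   # input stream
mid = deque()           # buffered channel between stage 1 and stage 2
sink = deque()          # output stream

def step():
    fired = []
    # Stage 1 fires as long as the FIFO has room, even if stage 2
    # has not yet consumed the previously produced token.
    if src and len(mid) < CAP:
        mid.append(src.popleft() * 2)
        fired.append("stage1")
    # Stage 2 fires whenever a token is waiting on the FIFO.
    if mid:
        sink.append(mid.popleft() + 1)
        fired.append("stage2")
    return fired

while src or mid:
    step()
print(list(sink))  # [1, 3, 5, 7, 9]
```

In steady state each round fires both stages, which is exactly the pipeline parallelism a FIFO-based hardware or multi-threaded implementation would exploit.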