Timing Forks
Timing forks are the way timing constraints are specified in the ACT tool flow. Superficially, timing forks resemble a number of different ways timing constraints are specified in other approaches, but there are a number of subtle differences that we detail below. We begin with some of the key theoretical concepts that underpin the use of timing forks in the ACT design flow.
Theory
In the theoretical results about timing forks1), a timing fork is specified between three nodes in a computation. A node is a particular local state of a signal; the local state includes a signal and its value, the value of all the inputs to the gate for the signal, and an occurrence index (since it may be, for example, the fifth time this particular signal and input combination occurred in the execution). This is the information necessary to refer to a particular point in the execution of a circuit with respect to the local information available at a gate.
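To make the notion of a node concrete, here is a minimal sketch of the local state it captures. The class and field names are illustrative only, not part of the ACT tools or the paper's formalism:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """Hypothetical representation of a node: the local state of a signal.
    All names here are illustrative, not ACT API."""
    signal: str       # the signal name
    value: int        # the signal's value (0 or 1)
    inputs: tuple     # values of all inputs to the gate driving the signal
    occurrence: int   # occurrence index: which time this combination occurred

# Example: the fifth occurrence of signal x going to 1 with inputs (0, 1)
n = Node(signal="x", value=1, inputs=(0, 1), occurrence=5)
print(n.occurrence)  # → 5
```

Two nodes with the same signal, value, and inputs but different occurrence indices refer to distinct points in the execution, which is why the index is part of the state.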
A timing fork, then, relates three nodes: a root node R, and two other nodes that we refer to as the fast node F and the slow node S for convenience, where there is a sequence of signal transitions from R to F and from R to S (see the referenced paper for details).
When a circuit is operating, these nodes occur at certain times that are determined by gate delays. We know that the delay from R to F has some upper bound ub (from gate delays), and the delay from R to S has a lower bound lb (again from gate delays). Because F and S share the common timing reference point R, we can bound the time difference between them by lb - ub. In other words, we can write time(S) - time(F) >= lb - ub, or equivalently time(S) >= time(F) + (lb - ub).
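As a quick numerical check of the bound above, here is a small sketch; the delay values are hypothetical, chosen only to illustrate the inequality:

```python
# Hypothetical gate-delay bounds for a timing fork (R, F, S).
ub = 3.0   # upper bound on the delay from R to F (fast branch)
lb = 5.0   # lower bound on the delay from R to S (slow branch)

def separation_bound(lb, ub):
    """Guaranteed lower bound on time(S) - time(F), valid because
    both branches share the common reference point R."""
    return lb - ub

# Any actual execution must satisfy time(S) - time(F) >= lb - ub.
t_R = 0.0
t_F = t_R + 2.5   # some actual R->F delay, at most ub
t_S = t_R + 6.0   # some actual R->S delay, at least lb
assert t_S - t_F >= separation_bound(lb, ub)
print(separation_bound(lb, ub))  # → 2.0
```

Note that when lb > ub the bound is positive, so F is guaranteed to occur before S in every execution; when lb - ub is negative, the fork only limits how much earlier S can occur than F.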
What this means is that we can use the common reference point R to reason about the relative ordering between F and S due to timing. From a practical standpoint, this means that there are circuit paths from the gate corresponding to R to the gates corresponding to F and S that can be used as the basis of the ordering.
This much seems quite straightforward.
The key technical result about timing forks states that if two nodes are ordered in all possible error-free computations, then a sequence of such timing forks must exist, and the ordering follows from a combination of inequalities of the form shown above that together guarantee it. Such a sequence is called a timing zig-zag.
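A simplified sketch of how inequalities of the form time(S) >= time(F) + (lb - ub) can compose by transitivity: if each fork's fast node coincides with the previous fork's slow node, the per-fork bounds accumulate along the chain. This is only an illustration of combining such inequalities; the actual zig-zag construction in the paper is more involved, and the delay values and names below are hypothetical:

```python
# Each fork contributes: time(S_i) - time(F_i) >= lb_i - ub_i.
# Chaining the inequalities (slow node of fork i = fast node of
# fork i+1) sums the per-fork bounds. Values are hypothetical.
chain = [
    (5.0, 3.0),  # fork 1: (lb_1, ub_1)
    (4.0, 1.0),  # fork 2: (lb_2, ub_2)
]

def chained_bound(chain):
    """Lower bound on the separation between the first fast node and
    the last slow node, summing lb_i - ub_i over the chain."""
    return sum(lb - ub for lb, ub in chain)

print(chained_bound(chain))  # → 5.0
```

If the total is positive, the endpoints of the chain are ordered in every execution even though no single fork relates them directly.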