Timing Forks

Timing forks are the way timing constraints are specified in the ACT tool flow. Superficially, timing forks resemble the timing constraints used in a number of other approaches, but there are subtle differences that we detail below. We begin with some of the key theoretical concepts that underpin the use of timing forks in the ACT design flow.

Theory

In the theoretical results about timing forks [1], a timing fork is specified between three nodes in a computation. A node is a particular local state of a signal; the local state includes a signal and its value, the value of all the inputs to the gate for the signal, and an occurrence index (since it may be, for example, the fifth time this particular signal and input combination occurred in the execution). This is the information necessary to refer to a particular point in the execution of a circuit with respect to the local information available at a gate.

A timing fork, then, relates three nodes: a root node R, and two other nodes that we refer to for convenience as the fast node F and the slow node S, where there is a sequence of signal transitions from R to F and from R to S (see the referenced paper for details).

A particular node may or may not appear in a computation due to data-dependent behavior. So a timing fork only makes sense for computations in which all three nodes referred to in the fork in fact appear.

When a circuit is operating, these nodes occur at certain times that are determined by gate delays. We know that the delay from R to F has some upper bound ub (from gate delays), so time(F) <= time(R) + ub; similarly, the delay from R to S has a lower bound lb (again from gate delays), so time(S) >= time(R) + lb. Because F and S share the common timing reference point R, subtracting the first inequality from the second bounds the time difference between them: time(S) - time(F) >= lb - ub, or equivalently time(S) >= time(F) + (lb - ub).
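The inequality above can be checked with a small numeric sketch. The delay values below are hypothetical and chosen only for illustration; `fork_separation_lower_bound` is not part of any ACT tool, just a name for the quantity lb - ub:

```python
# Sketch of the timing-fork separation bound (hypothetical delay values).
# A fork has a root R, a fast node F, and a slow node S. Gate delays give
# an upper bound ub on the R -> F path delay and a lower bound lb on the
# R -> S path delay.

def fork_separation_lower_bound(lb, ub):
    """Lower bound on time(S) - time(F) implied by a fork rooted at R."""
    return lb - ub

# Pick concrete occurrence times consistent with the delay bounds:
t_R = 0.0
ub = 2.0          # R -> F takes at most 2 time units
lb = 5.0          # R -> S takes at least 5 time units
t_F = t_R + 1.5   # any value <= t_R + ub is possible
t_S = t_R + 6.0   # any value >= t_R + lb is possible

bound = fork_separation_lower_bound(lb, ub)   # lb - ub = 3.0
assert t_S - t_F >= bound                     # 4.5 >= 3.0 holds
```

Note that when lb > ub the bound is positive, so F is guaranteed to occur before S no matter which delays within the bounds are realized.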

What this means is that we can use the common reference point R to reason about the relative ordering between F and S due to timing. From a practical standpoint, this means that there are circuit paths from the gate corresponding to R to the gates corresponding to F and S that can be used as the basis of the ordering.

This much seems quite straightforward.

The key technical result about timing forks says that if two nodes are ordered in all possible error-free computations, then a sequence of such timing forks must exist, and the ordering follows from a combination of inequalities of the form shown above. (Such a sequence is called a timing zig-zag.) In other words, the intuitive notion of a common timing reference point as the basis for ordering signal transitions is in fact a requirement.
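The composition of inequalities can be sketched as follows. This is a simplified illustration, not the full zig-zag construction from the paper: it assumes the slow node of each fork coincides with the fast node of the next, so the per-fork bounds simply add up along the chain (all delay values are hypothetical):

```python
# Sketch of chaining timing-fork bounds into an end-to-end bound.
# Each fork i contributes: time(S_i) >= time(F_i) + (lb_i - ub_i).
# If S_i is the same node as F_{i+1}, the inequalities telescope, giving
# a lower bound on time(S_n) - time(F_1).

def chained_bound(forks):
    """forks: list of (lb, ub) pairs along the chain.
    Returns the implied lower bound on the endpoint separation."""
    return sum(lb - ub for lb, ub in forks)

# Two chained forks, each with a positive per-fork bound: the endpoints
# are therefore ordered in every computation containing all the nodes.
assert chained_bound([(5.0, 2.0), (4.0, 1.0)]) == 6.0
```

The point of the sketch is that no single fork need relate the two endpoint nodes directly; the ordering emerges from the chain of shared reference points.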

Timing forks in ACT

ACT provides syntax to refer to these nodes in a limited fashion. Instead of being able to specify nodes using all the information needed, we simply refer to nodes at the point when a signal changes. Furthermore, we always consider all possible occurrences of the signal change, rather than a specific one (e.g. the seventh occurrence of a signal change). As common timing forks can relate different occurrence indices, we provide syntax to refer to common scenarios in circuit design.

There are many closely related but slightly different notions that are used in the literature to describe timing constraints in a variety of circuit contexts.

Setup and hold time

Point-of-divergence constraint

Relative timing

[1]
Rajit Manohar and Yoram Moses. Timed Signaling Processes. IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC), July 2023.