Timing Forks

Timing forks are the way timing constraints are specified in the ACT tool flow. Superficially, timing forks resemble several of the ways timing constraints are specified in other approaches, but there are a number of subtle differences that we detail below. We begin with some of the key theoretical concepts that underpin the use of timing forks in the ACT design flow.

Theory

In the theoretical results about timing forks1), a timing fork is specified between three nodes in a computation. A node is a particular local state of a signal; the local state includes a signal and its value, the values of all the inputs to the gate that produces the signal, and an occurrence index (since it may be, for example, the fifth time this particular signal and input combination occurred in the execution). This is the information necessary to refer to a particular point in the execution of a circuit with respect to the local information available at a gate.

A timing fork, then, relates three nodes: a root node R, and two other nodes that we refer to as the fast node F and the slow node S for convenience, where there are sequences of signal transitions that lead from R to F and from R to S (see the referenced paper for details).
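The sketch below records this bookkeeping in ordinary Python (it is not ACT syntax); the type and field names are illustrative, chosen only for this example.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Node:
    """A point in an execution, described by the local information at a gate."""
    signal: str               # the signal whose transition we are naming
    value: bool               # the value the signal takes at this node
    inputs: Tuple[bool, ...]  # values of the inputs to the gate producing the signal
    occurrence: int           # which occurrence of this signal/input combination

@dataclass(frozen=True)
class TimingFork:
    """A root node R and two nodes F (fast) and S (slow) reachable from R."""
    root: Node  # common timing reference point R
    fast: Node  # F: the delay from R to F has a known upper bound
    slow: Node  # S: the delay from R to S has a known lower bound
```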

A particular node may or may not appear in a computation due to data-dependent behavior. So a timing fork only makes sense when all three nodes referred to in the fork in fact appear in a computation.

When a circuit is operating, these nodes occur at certain times that are determined by gate delays. We know that the delay from R to F has some upper bound ub (from gate delays), and the delay from R to S has a lower bound lb (again from gate delays). Because F and S share a common timing reference point R, we can bound the time difference between them by lb-ub. In other words, we can write time(S) - time(F) >= (lb-ub), or equivalently time(S) >= time(F) + (lb-ub).
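As a small worked example with made-up numbers: if the R-to-F delay is at most 120 time units and the R-to-S delay is at least 200, then S cannot occur earlier than 80 time units after F.

```python
ub = 120  # hypothetical upper bound on the delay from R to F
lb = 200  # hypothetical lower bound on the delay from R to S

# The shared reference point R gives: time(S) - time(F) >= lb - ub
margin = lb - ub
print(f"time(S) >= time(F) + {margin}")  # prints: time(S) >= time(F) + 80
```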

What this means is that we can use the common reference point R to reason about the relative ordering between F and S due to timing. From a practical standpoint, this means that there are circuit paths from the gate corresponding to R to the gates corresponding to F and S that can be used as the basis of the ordering.

This much seems quite straightforward.

The key technical result about timing forks says that if two nodes are ordered in all possible error-free computations, then a sequence of such timing forks must exist, and the ordering follows from a combination of inequalities of the form shown above. (This is called a timing zig-zag.) In other words, the intuitive notion of a common timing reference point as the basis for ordering signal transitions is in fact a requirement.
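A minimal sketch of how such inequalities combine, in the simplest case where the forks chain end-to-end (the slow node of one fork is the fast node of the next); the margins are hypothetical:

```python
# Each fork contributes time(n_{i+1}) >= time(n_i) + margin_i, where
# margin_i = lb_i - ub_i for that fork (individual margins may be negative).
margins = [30, -10, 25]

total = sum(margins)
# Chaining the inequalities: time(n3) >= time(n0) + 45, so n0 is guaranteed
# not to come after n3 as long as the accumulated margin is non-negative.
print(f"time(n3) >= time(n0) + {total}")
```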

Timing forks in ACT

ACT provides syntax to refer to these nodes in a limited fashion. Instead of specifying nodes using all the information described above, we simply refer to nodes at the point when a signal changes. Furthermore, we always consider all possible occurrences of the signal change, rather than a specific one (e.g. the seventh occurrence of a signal change). Since common timing forks can relate different occurrence indices, we provide syntax to refer to common scenarios in circuit design.

There are many closely related but slightly different notions that are used in the literature to describe timing constraints in a variety of circuit contexts. Here we try to provide a simplified view of the differences and similarities between some common notions and timing forks.

Setup and hold time

In clocked design, a setup and hold time constraint for a state-holding element is used to ensure correct operation. We use positive edge-triggered flip-flops as the running example.

The setup time constraint requires that the data input to the flip-flop be stable for a certain time (the setup time) before the clock pin makes a zero-to-one transition. To see why this is also a timing fork constraint, we make the following observations:

  • The only way to ensure that the setup time constraint is met is to know something about when the input to the flip-flop can change with respect to the clock pin of the same flip-flop.
  • In synchronous logic, the built-in assumption is that this is possible because the input to the flip-flop is in the same clock domain as the flip-flop itself; in other words, it comes from another set of flip-flops that share a common clock.
  • The timing fork can be determined as follows. The common timing reference point is the common clock tree root that feeds the source flip-flops and the target flip-flop. The path from the clock root through the clock pins of the source flip-flops, out through their data outputs, and through the combinational logic to the data input of the target flip-flop has a delay constraint relative to the path from the clock root to the clock pin of the target flip-flop (a sketch of this check follows the list).
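To make this concrete, here is a minimal sketch of the setup check viewed as a timing fork, with hypothetical delay numbers that are not tied to any particular library or flow: R is the clock root at the launch edge, F is the data transition arriving at the target flip-flop's data pin, and S is the capturing clock edge at the target flip-flop's clock pin.

```python
period        = 1000  # clock period (all numbers hypothetical, arbitrary units)
clk_to_src_ck = 80    # max delay, clock root -> source flip-flop clock pin
clk_to_q      = 120   # max clock-to-Q delay of the source flip-flop
comb_max      = 600   # max combinational delay to the target data pin
clk_to_dst_ck = 70    # min delay, clock root -> target flip-flop clock pin
t_setup       = 50    # setup time of the target flip-flop

ub = clk_to_src_ck + clk_to_q + comb_max  # upper bound on the R -> F delay
lb = period + clk_to_dst_ck               # lower bound on the R -> S delay (next edge)

# The fork guarantees time(S) - time(F) >= lb - ub; the setup constraint is
# satisfied when this guaranteed separation is at least the setup time.
assert lb - ub >= t_setup, "setup constraint not met"
```

This mirrors the familiar static-timing setup check; the point here is only that the check has the shape of a fork rooted at the clock tree root.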

A similar argument can be made for hold time.

Point-of-divergence constraint

A constraint on the delays from a common timing reference point, via all possible circuit paths, to two different signal transitions is sometimes referred to as a point-of-divergence constraint. Checking all possible paths through the circuit of this form is one possible way a timing fork constraint can be checked.

However, note that an asynchronous circuit has cycles in the timing graph, so technically a point-of-divergence check would require checking an infinite number of paths. So a point-of-divergence timing constraint makes much more sense when the timing graph is acyclic (as is the case in standard clocked circuits). Sometimes point-of-divergence constraints are used to check timing in an asynchronous circuit by explicitly cutting the cycles in the timing graph.

Timing forks can be applied even when there are cycles in the timing graph because the nodes in the fork refer to specific occurrence indices of signal transitions. Hence, even though the timing graph is cyclic, the paths to be checked for a timing fork are bounded.
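A small sketch of why the check stays bounded: once each node carries an occurrence index, the unrolled graph is effectively acyclic, so enumerating paths terminates even though the underlying signal graph has cycles. The graph and the helper function below are hypothetical, purely for illustration.

```python
def paths(fanout, src, dst, max_occurrence):
    """Enumerate paths between occurrence-indexed nodes of a cyclic signal graph.

    A node is (signal, occurrence); revisiting a signal bumps its occurrence
    index, and bounding that index bounds the search.
    """
    results = []

    def walk(node, path):
        if node == dst:
            results.append(list(path))
            return
        sig, _ = node
        for nxt in fanout.get(sig, []):
            occ = sum(1 for s, _ in path if s == nxt)  # times nxt already appeared
            if occ > max_occurrence:
                continue
            walk((nxt, occ), path + [(nxt, occ)])

    walk(src, [src])
    return results

# A two-signal cycle a -> b -> a: infinitely many paths in the signal graph,
# but finitely many once occurrence indices are bounded.
g = {"a": ["b"], "b": ["a"]}
print(paths(g, ("a", 0), ("a", 1), max_occurrence=1))
```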

Relative timing

Relative timing is a methodology for designing asynchronous circuits in which you can assume that two signal transitions are ordered due to timing. When you assume, for example, that a+ occurs before b-, there is an implicit assumption that both a+ and b- occur; the methodology is most commonly used to reason about control logic, where this is a common scenario.

Timing fork theory states that if the two transitions are guaranteed to be ordered, then a timing fork/zig-zag that is the basis for the ordering must exist. In this sense, timing forks can be used to describe a relative timing constraint. However, a timing fork doesn't require that all the transitions occur. It can also refer to only a specific instance of a signal transition if that is all that is required; for example, we could say that the ith occurrence of a+ occurs before the (i+1)th occurrence of b- rather than the ith occurrence of b- (or some other such relation).

1)
Rajit Manohar and Yoram Moses. Timed Signaling Processes. IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC), July 2023.